Moralizing AI opposition
2026-03-16, access: Public
- Download video (MP4, 418 MiB)
- Slides (PDF)
- Transcript (txt)
- Link to the paper: https://osf.io/preprints/psyarxiv/5mwre_v9
psychology evaluation meta

In this video I'm looking at a psychology preprint. Rather than how artificial intelligence works, the question of the moment is how humans react to it.
It's easy to observe that some people object to AI, and that the objections they raise do not seem to fully explain their behaviour. In particular, if someone objects that "AI data centres consume vast quantities of water!" and the factual claim is then shown to be untrue, they don't change their mind about AI. If we could build a data centre that didn't use much water, or show that existing data centres don't actually use much water in the first place, the objection wouldn't budge. It's easy to guess that water use wasn't really the reason for objecting to AI.
In the present paper, the researchers suggest that many objections to AI are actually moral (and I might say deontological) in nature: despite talking about claimed negative consequences of the technology, opponents of AI actually consider AI to be morally wrong in itself, without reference to its consequences. Talk of consequences is only the guise in which the moral objection moves, not the objection itself; and what superficially appear to be technical objections will spin off into silliness, like the bridge across Avon Gorge, because they were never really about technical issues to begin with.
They test that hypothesis with several experiments that look at how AI technology is reported in the news; what individual survey respondents think about the technology and why they like or dislike it; and whether respondents who have expressed a moral objection to AI also decline to use it in completing tasks.
The answers are not really surprising, but the value of doing the study is to have hard evidence rather than guesswork for saying that yes, many opponents of AI see the entire situation in moral terms rather than merely as a debate about practical consequences.
Because it could be relevant to the subject matter of the talk, I'm willing to accept a limited amount of discussion of environmental and job-market impacts on this entry; but please remember that we are here to talk about the psychology of opposition to AI, not the specific object-level points opponents raise.
Access level to read comments: Free account (logged in)
Access level to write comments: Free account (logged in)