Ethics

2021. “Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.” In Proceedings of the ACM on Human-Computer Interaction, Vol. 5, CSCW2, Article 363. DOI: 10.1145/33363

While philosophers hold that it is patently absurd to blame robots or hold them morally responsible, a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents. In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk’s willingness to ascribe inculpating mental states or “mens rea” to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question, also explored in the experiment, of whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot “knew” rather than really knew). However, (iii) according to our data, people were unwilling to downgrade to a merely metaphorical sense of mens rea. Finally, (iv) we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: People were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.

2021. “Playing the Blame Game with Robots.” In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. DOI: 10.1145/3434074.3447202 (with M. Kneer).

Recent research has shown that people are quite willing to ascribe moral blame to AI-driven artificial agents. In an experiment with 347 participants, we manipulated the explicitly specified capacities of such artificial agents and explored the extent to which people are willing to ascribe potentially inculpating mental states to them and to blame them for their actions. Moreover, we investigated whether these different capacities influence the moral assessment of the human agents who own and use the systems. Our results show that the more sophisticated an AI system is, the more participants blame it when it puts human lives at risk, and the less they blame the human agents using it. Furthermore, the findings suggest that an AI system only begins to be perceived as blameworthy once it obtains a “theory of mind,” that is, once it has some knowledge and experience of how humans generally think and feel.

2019. “Everyday Scientific Imagination: A Qualitative Study of the Uses, Norms, and Pedagogy of Imagination in Science.” Science & Education 28(6), 711-730. DOI: 10.1007/s11191-019-00067-9.

Imagination is necessary for scientific practice, yet there are no in vivo sociological studies of the ways imagination is taught, thought of, or evaluated by scientists. This article begins to remedy that gap by presenting the results of a qualitative study of two systems biology laboratories. I found that the more advanced a participant was in their scientific career, the more they valued imagination. Further, positive attitudes toward imagination were primarily due to the perceived role of imagination in problem-solving. But not all problem-solving episodes involved clear appeals to imagination; only maximally specific problems did. This pattern is explained by the presence of an implicit norm governing imagination use in the two labs: only use imagination on maximally specific problems, and only when all other available methods have failed. This norm was confirmed by the participants, and I argue that it has epistemological reasons in its favour. I also found that its strength varies inversely with career stage, such that more advanced scientists do (and should) occasionally bring their imaginations to bear on more general problems. A story about scientific pedagogy explains the trend away from (and back to) imagination over the course of a scientific career. Finally, some positive recommendations are given for a more imagination-friendly scientific pedagogy.