2023. “The future won’t be pretty: The nature and value of ugly, AI-designed experiments.” In M. Ivanova and A. Murphy (eds.), The Aesthetics of Scientific Experiments. London: Routledge.
Can an ugly experiment be a good experiment? Philosophers have identified many beautiful experiments and explored ways in which their beauty might be connected to their epistemic value. In contrast, the present chapter seeks out (and celebrates) ugly experiments. Among the ugliest are those being designed by AI algorithms. Interestingly, in the contexts where such experiments tend to be deployed, low aesthetic value correlates with high epistemic value. In other words, ugly experiments can be good. Given this, we should conclude that beauty is not generally necessary or sufficient for epistemic value, and increasing beauty will not generally tend to increase epistemic value.
2022. “Sharpening the Tools of Imagination.” Synthese. https://doi.org/10.1007/s11229-022-03939-w
Thought experiments, models, diagrams, computer simulations, and metaphors can all be understood as tools of the imagination. While these devices are usually treated separately in philosophy of science, this paper provides a unified account according to which tools of the imagination are epistemically good insofar as they improve scientific imaginings. Improving scientific imagining is characterized in terms of epistemological consequences: more improvement means better consequences. A distinction is then drawn between tools being good in retrospect, at the time, and in general. In retrospect, tools are evaluated straightforwardly in terms of the quality of their consequences. At the time, that is, at the cutting edge of inquiry, tools are evaluated positively insofar as there is reason to believe that using them will have good consequences. Lastly, tools can be generally good, insofar as their use encourages the development of epistemic virtues, which are good because they have good epistemic consequences.
2021. “Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.” In Proceedings of the ACM on Human-Computer Interaction Vol. 5, CSCW2, Article 363. https://doi.org/10.1145/33363 (with M. Kneer)
While philosophers hold that it is patently absurd to blame robots or hold them morally responsible, a series of recent empirical studies suggest that people do ascribe blame to AI systems and robots in certain contexts. This is disconcerting: Blame might be shifted from the owners, users or designers of AI systems to the systems themselves, leading to the diminished accountability of the responsible human agents. In this paper, we explore one of the potential underlying reasons for robot blame, namely the folk’s willingness to ascribe inculpating mental states or “mens rea” to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question – also explored in the experiment – whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot “knew” rather than really knew). However, (iii), according to our data, people were unwilling to downgrade their attributions to a merely metaphorical sense of mens rea. Finally, (iv), we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: People were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.
2021. “Playing the Blame Game with Robots.” Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. DOI: 10.1145/3434074.3447202 (with M. Kneer).
Recent research has shown that people are quite willing to ascribe moral blame to AI-driven artificial agents. In an experiment with 347 participants, we manipulated the explicitly specified capacities of such artificial agents, and explored the extent to which people are willing to ascribe potentially inculpating mental states to them and blame them for their actions. Moreover, we investigated whether the different capacities of the artificial agents or AI systems have an influence on the moral assessment of human agents who own and use them. Our results show that the more sophisticated an AI system is, the more participants blame it when it puts human lives at risk, and the less willing they are to blame the human agents using it. Furthermore, the findings suggest that an AI system only begins to be perceived as blameworthy once it possesses a “theory of mind,” that is, some knowledge and experience of how humans generally think and feel.
2019. “Peeking Inside the Black Box: A New Kind of Scientific Visualization.” Minds and Machines 29: 87–107. (with N. Nersessian). DOI: 10.1007/s11023-018-9484-3.
Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization (observed in a qualitative study of a systems biology laboratory) that was developed to address just this sort of epistemic opacity. The visualization is unusual in that it depicts the dynamics and structure of a computer model instead of that model’s target system, and because it is generated algorithmically. Using considerations from epistemology and aesthetics, we explore how this new kind of visualization increases scientific understanding of the content and function of computer models in systems biology, thereby reducing epistemic opacity.
2019. “The Role of Imagination in Social Scientific Discovery: Why Machine Discoverers Will Need Imagination Algorithms.” Pp. 49–66 in M. Addis et al. (eds.), Scientific Discovery in the Social Sciences. Heidelberg: Springer. DOI: 10.1007/978-3-030-23769-1_4.
When philosophers discuss the possibility of machines making scientific discoveries, they typically focus on discoveries in physics, biology, chemistry and mathematics. Given the rapid increase of computer use in science, however, it is natural to ask whether any scientific domains remain out of reach for machine discovery. For example, could machines also make discoveries in qualitative social science? Is there something about humans that makes us uniquely suited to studying humans? Is there something about machines that would bar them from such activity? A close look at the methodology of interpretive social science reveals several abilities necessary for making a social scientific discovery, and a capacity required for each of them is imagination. For machines to make discoveries in social science, therefore, they must possess imagination algorithms.