Ethics

Forthcoming. “Inclusivity in the Education of Scientific Imagination.” In E. Hildt, K. Laas, C. Miller, and E. Brey (eds.). Building Inclusive Ethical Cultures in STEM. Routledge (with H. Sargeant).

Scientists imagine constantly. They do this when generating research problems, designing experiments, interpreting data, troubleshooting, drafting papers and presentations, and giving feedback. But when and how do scientists learn to use imagination? Across six years of ethnographic research, we have found that advanced-career scientists feel comfortable using and discussing imagination, while graduate and undergraduate students of science often do not. In addition, members of marginalized and vulnerable groups tend to express negative views about the strength of their own imaginations and the general usefulness of imagination in science. After introducing these findings and discussing the typical relationship between a student and their imagination across a career in science, we argue that reducing the number or power of active imaginations in science is epistemically counterproductive. Finally, we suggest a number of ways to bring imagination back into science in a more inclusive way, especially through courses on imagination for scientists, role models, and exemplar-based learning.

2022. “Holism and Reductionism in the Illness/Disease Debate.” In S. Wuppuluri and I. Stewart (eds.), From Electrons to Elephants and Elections: Exploring the Role of Content and Context. Springer (with M. Buzzoni and L. Tesio).

In recent decades it has become clear that medicine must find some way to combine its scientific and humanistic sides. In other words, an adequate notion of medicine requires an integrative position that mediates between the analytic-reductionist and the normative-holistic tendencies within it. This is especially important because these different styles of reasoning separate “illness” (something perceived and managed by the whole individual in concert with their environment) from “disease” (a “mechanical failure” of a biological element within the body). While the demand for an integrative view has typically been motivated by ethical concerns, we claim that it is also motivated, perhaps even more fundamentally, by epistemological and methodological reasons. Evidence-based bio-medicine employs experimental and statistical techniques that eliminate important differences in the ways conscious humans evaluate, live with, and react to disease and illness. However, it is precisely these experiences that underpin the concepts and norms of bio-medicine. Humanistic disciplines, on the other hand, have the resources to investigate these experiences in an intersubjectively testable way. Medicine, therefore, cannot afford to ignore its nature as a human science; it must be concerned not only with disease and illness, but also with the ways in which patients as persons respond to malady. Insofar as attitudes and expectations influence the criteria of illness and disease, they must be studied as part of the genuine subject matter of medicine as a human science. In general, we urge that this is a necessary step toward overcoming the current trend of splitting evidence-based medicine from clinical medicine.

2022. “Science Funding Policy and the COVID-19 Pandemic.” The Journal of Risk and Safety in Medicine 33(3), 1-6. DOI: 10.3233/JRS-227015 (with V. Sikimić and J. Shaw).

Science funding policy is constantly evolving as a result of geopolitical, technological, cultural, social, and economic shifts. The last major upheaval in science funding policy came in response to a catastrophic series of events: World War II. The newest worldwide catastrophe, the COVID-19 pandemic, has prompted similar reflection on fundamental questions about the roles of the sciences in society and the relationships between governments, private industry, public bodies, and the broader public. This is the introduction to a special issue on science funding policy, which contains a series of reflections and insights urging drastic and urgent changes.

2021. “Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents.” Proceedings of the ACM on Human-Computer Interaction 5, CSCW2, Article 363. DOI: 10.1145/3479507 (with M. Kneer).

While philosophers hold that it is patently absurd to blame robots or hold them morally responsible, a series of recent empirical studies suggests that people do ascribe blame to AI systems and robots in certain contexts. This is disconcerting: blame might be shifted from the owners, users, or designers of AI systems to the systems themselves, leading to diminished accountability for the responsible human agents. In this paper, we explore one potential underlying reason for robot blame, namely the folk’s willingness to ascribe inculpating mental states, or “mens rea,” to robots. In a vignette-based experiment (N=513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and the mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question, also explored in the experiment, of whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot “knew” rather than really knew). However, (iii) our data show that people were unwilling to downgrade their attributions of mens rea to a merely metaphorical sense. Finally, (iv) we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: people were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that they are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.

2021. “Playing the Blame Game with Robots.” Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. DOI: 10.1145/3434074.3447202 (with M. Kneer).

Recent research has shown that people are quite willing to ascribe moral blame to AI-driven artificial agents. In an experiment with 347 participants, we manipulated the explicitly specified capacities of such artificial agents and explored the extent to which people are willing to ascribe potentially inculpating mental states to them and to blame them for their actions. Moreover, we investigated whether the different capacities of the artificial agents or AI systems influence the moral assessment of the human agents who own and use them. Our results show that the more sophisticated an AI system is, the more participants blame it when it puts human lives at risk, and the less they are willing to blame the human agents using it. Furthermore, the findings suggest that an AI system only begins to be perceived as blameworthy once it has a “theory of mind,” that is, once it possesses some knowledge and experience of how humans generally think and feel.

2019. “Everyday Scientific Imagination: A Qualitative Study of the Uses, Norms, and Pedagogy of Imagination in Science.” Science & Education 28(6), 711-730. DOI: 10.1007/s11191-019-00067-9.

Imagination is necessary for scientific practice, yet there are no in vivo sociological studies of the ways that imagination is taught, thought of, or evaluated by scientists. This article begins to remedy this by presenting the results of a qualitative study of two systems biology laboratories. I found that the more advanced a participant was in their scientific career, the more they valued imagination. Further, positive attitudes toward imagination were primarily due to the perceived role of imagination in problem-solving. But not all problem-solving episodes involved clear appeals to imagination; only maximally specific problems did. This pattern is explained by the presence of an implicit norm governing imagination use in the two labs: use imagination only on maximally specific problems, and only when all other available methods have failed. This norm was confirmed by the participants, and I argue that there are epistemological reasons in its favour. I also found that the norm’s strength varies inversely with career stage, such that more advanced scientists do (and should) occasionally bring their imaginations to bear on more general problems. A story about scientific pedagogy explains this trend away from (and back to) imagination over the course of a scientific career. Finally, some positive recommendations are given for a more imagination-friendly scientific pedagogy.