Description of This Research Line
In this research line, we focus particularly on new biomedical and digital technologies. We work on the following clusters of questions:
- In which ways and to what extent are socially disruptive technologies (SDTs) really disrupting the human condition and our self-understanding as human beings?
- How do these disruptions challenge existing conceptualisations of ‘the human’, moral and anthropological theories, and corresponding legal frameworks?
- Which ethical theories and normative frameworks are best equipped to provide normative guidance in responding to these challenges?
The ultimate goal is to develop new ethical frameworks that integrate theories from both ethics and philosophical anthropology. In doing so, we aim to make a substantial and novel contribution to ongoing philosophical discussions about the ‘human being’, ‘humanity’, and the human condition in relation to both technology and ethics.
Related Projects
Brain-computer interfaces & the disruption of the concept of personhood [2022-2026]
PhD candidate: Bouke van Balen (b.j.v.balen@tue.nl)
Daily supervisors: Dr. Janna van Grunsven (j.b.vangrunsven@tudelft.nl) and Dr. Mariska Vansteensel (m.j.vansteensel@umcutrecht.nl)
Co-supervisors: Prof. Dr. Wijnand IJsselsteijn (w.a.ijsselsteijn@tue.nl) and Prof. Dr. Nick Ramsey (n.ramsey@umcutrecht.nl)
Project Description
Brain-computer interfaces (BCIs) raise thorny ontological and ethical issues. The aim of this project is to answer the following research question: how do BCIs disrupt assumptions about where and how we can (or even should) demarcate something as ontologically and ethically significant as personhood? As a secondary objective, the project will contribute to new interdisciplinary approaches and methods at the intersection of STEM disciplines and ethics/philosophy. Moreover, insights gained from the project are likely to ethically inform the ongoing design of current and future BCI technologies.
This PhD position represents a unique interdisciplinary collaboration between medical neurotechnology and the psychology of human-technology interaction. The project's innovative potential lies in the fact that the PhD candidate is embedded within the research group at UMC Utrecht, working directly with those responsible for developing the world's first fully implanted BCI for home use.
Philosophical anthropology research [2022-2025]
Post-doc: Dr. Anna Puzio (a.s.puzio@utwente.nl)
Daily supervisor: Dr. Julia Hermann (j.s.hermann@utwente.nl)
Co-supervisor: Prof. Dr. Joel Anderson (j.h.anderson@uu.nl)
Project Description
This project studies how new and emerging technologies – biotechnologies, digital technologies, robots, and/or climate technologies – affect our ontological and ethical understanding of the human being. It asks how the understanding of the human being changes in the context of new technologies, and is thus dedicated to philosophical anthropology against the backdrop of technological advancement, an approach that can be referred to as the anthropology of technology. A particular focus is placed on a relational approach and on the inclusion of nonhuman entities such as animals and technology, thereby connecting the ethics of technology and environmental ethics. Emphasis is also placed on diversity within the field of anthropology, for instance through gender and intercultural approaches.
Emerging technologies and the moral character of the human being [2021-2025]
PhD candidate: Kristy Claassen (k.claassen@utwente.nl)
Daily supervisor: Dr. Julia Hermann (j.s.hermann@utwente.nl)
Supervisors: Prof. Dr. Ciano Aydin (c.aydin@utwente.nl), Prof. Dr. Wijnand IJsselsteijn (w.a.ijsselsteijn@tue.nl) and Prof. Dr. Peter-Paul Verbeek (p.p.c.c.verbeek@uva.nl)
Project Description
A defining characteristic of the African philosophy of Ubuntu is that we become human through others. As the proverb goes, ‘I am, because we are’ (Mbiti 1990). How, then, do technologies that are claimed to be socially disruptive fit into this moral framework? The aim of this project is to investigate how the moral character of the human being is affected by emerging technology within the ontological (Ramose 1999) and ethical (Metz 2007) framework of Ubuntu. The secondary aim is to explore the way in which human-technology relations are redefined within Ubuntu parameters. Concepts central to Ubuntu, such as autonomy, agency, responsibility and being (socially) human, are being reconfigured through the emerging technology of AI. As this technology is poised to play an increasingly important role in the African context, the task of understanding how human moral character is shaped by it becomes urgent. At its core, Ubuntu is a way of being that emphasises becoming human through values such as compassion, connectedness and inclusivity. The proposed research will focus not only on how these values are reconfigured in relation to technology, but also on how they could shape the way in which AI is developed and employed in society. The Ubuntu framework for being (socially) human may thus inform a more humane approach to emerging technologies.
The Ethics of Humanoid Robots [2021-2025]
PhD candidate: Cindy Friedman (c.friedman@uu.nl)
Daily supervisors: Prof. Dr. Sven Nyholm (sven.nyholm@lrz.uni-muenchen.de) and Dr. Lucie White (l.a.white@uu.nl)
Co-supervisors: Prof. Dr. Ingrid Robeyns (I.A.M.Robeyns@uu.nl) and Dr. Lily Frank (L.E.Frank@tue.nl)
Project Description
Robots that look and act like humans are currently being created for a variety of purposes: care robots for children with autism, companion robots for the elderly, so-called sex robots, show robots like “Sophia” (who was awarded honorary citizenship in Saudi Arabia), and so on. Meanwhile, AI systems are also increasingly making decisions previously made by humans: for example, expert systems advise doctors about the resuscitation of coma patients, and autonomous vehicles make decisions about life and death. Soon these trends may converge, and we may start seeing humanoid robots with humanlike AI. The prospect of humanlike qualities in robots and AI is both deeply fascinating and controversial. It has the potential to create social and moral confusion and disruption. The aim of this PhD project is to investigate the ethics of “humanlikeness” in robots and AI, with a special focus on how this might (re)shape our ideas of our own humanity, and especially of its moral status. If some robots start looking and acting like humans and some AI systems take on humanlike roles, what does this mean for the moral importance of humanity as an end in itself (as in, e.g., Kantian ethics, but also in ethical thinking in general)? What might it mean for who we regard as agents and patients in the moral domain? If looking and acting like a human being is no longer reserved for human beings alone, but also something we start associating with robots and AI, should this make us reassess the ethical uniqueness that we commonly associate with being human?
Ethics of Data-Driven Mental Health Diagnostics [2021-2025]
PhD candidate: Anna van Oosterzee (a.m.vanoosterzee@uu.nl)
Daily supervisor: Dr. Sander Werkhoven (s.werkhoven@uu.nl)
Co-supervisor: Prof. Dr. Joel Anderson (j.h.anderson@uu.nl)
In collaboration with Leiden University, Leiden Institute of Advanced Computer Science
Dr. Anna Kononova (a.kononova@liacs.leidenuniv.nl) & Prof. Dr. Thomas Bäck (T.H.W.Baeck@liacs.leidenuniv.nl)
Project Description
As part of the ESDiT research line on “The Human Condition,” this project addresses a complex and multi-faceted range of issues raised by recent and expected developments in machine learning and predictive data-driven analytics. Through the development of ever more complex computational techniques to analyse large data sets, translational bioinformatics and other computational approaches increasingly enable medical researchers and clinicians to develop diagnostic approaches that are more fine-grained, personalized, and predictively accurate than those based on current categories of disease. Although “precision medicine” is relatively well established, parallel approaches in psychiatry are quite new. For the data scientists, research psychiatrists, and mental health professionals developing these approaches, these emerging technologies raise vexing ethical issues – issues that are also central to research in philosophy of psychiatry, disability studies, philosophy of science, data ethics, and philosophy of technology.
The aim of this PhD project is to investigate the following cluster of questions: What ethical concerns are raised by integrating data-driven analytics and translational bioinformatics into psychiatric diagnoses? What implications does the highly personalized character of these computational approaches have for reconceptualizing what is “normal” for human beings? How should these concerns shape the regulation and ongoing design of these emerging technologies in this highly contested domain?
Behaviour change technologies for moral improvement [2021-2023]
Post-doc: Dr. Matthew Dennis (m.j.dennis@tue.nl)
Daily supervisor: Dr. Lily Frank (l.e.frank@tue.nl)
Supervisor: Prof. Dr. Wijnand IJsselsteijn (w.a.ijsselsteijn@tue.nl)
Project Description
This project explores the morally disruptive potential of behaviour change technologies for moral improvement, that is, technologies that can be used to improve moral cognition or moral decision-making. These include bio- or neuro-enhancement, robotic nudges, nudge-designed environments, and ambient persuasive technologies. Such technologies may help people behave more consistently with their deeply held moral convictions, aid them in overcoming cognitive and affective limitations that prevent them from appreciating a situation’s moral dimensions, or simply make it easier for them to make the morally right choice by helping them to overcome sources of weakness of will. The project will focus, first, on the possibility that use of such technology will lead to moral deskilling or atrophy of the moral “muscle”. It will investigate whether or not, and in what ways, such deskilling is problematic from a moral point of view.
Answering these questions can allow us to rethink fundamental ethical debates on, for example, the value of moral action itself; the relative importance of consequences and intentions; and the nature of character. Second, the project will investigate how moral technologies mediate or lessen our experience of moral distress, conflict, and struggle. If moral struggle is offloaded to a system of technological nudges, persuasions, and e-choice architecture, does this undermine or shift the meaningfulness of engaging with and potentially overcoming moral challenges? How might this change affect individual levels of well-being and self-assessments of control, character, and authenticity?
Empathy, communication technologies, and neurodiversity [2020-2024]
PhD candidate: Caroline Bollen (c.j.m.bollen@tudelft.nl)
Daily supervisor: Dr. Janna van Grunsven (J.B.vanGrunsven@tudelft.nl)
Supervisor: Prof. Dr. Sabine Roeser (S.Roeser@tudelft.nl)
Project Description
Currently, there exists no robust account of empathy as technologically mediated. This is striking, since numerous empirical studies suggest that VR, social media platforms, and other new digital technologies can either undermine empathy or, on the contrary, extend its scope. One of our main research questions is: ‘how does empathy, qua concept, need to be reconsidered in light of these new digital technologies?’ The answer to this question not only has the potential to change current philosophical debates on the nature and scope of empathy; it is also needed to confront a practical-evaluative lacuna. Empirical studies proclaim the empathy-promoting or distorting effects of various digital technologies, thereby seemingly validating or questioning their ethical and social (un)desirability. But since we lack a robust, up-to-date philosophical account of empathy as technologically mediated, these assessments rest on an unexamined notion of empathy. Furthermore, current understandings of the concept of empathy often exclude autistic empathic experiences. Recent knowledge from, and acknowledgement of, neurodivergent perspectives challenges the way empathy is conceptualized in research.
In this PhD project, a new concept of empathy is being developed that is inclusive of autistic empathic experiences and that can be used to (normatively) reflect on the impact of technology on the way we relate to one another. This is being done on different levels: empathy as a concept in moral theory, empathy as mediated by communication technologies, and the specific case study of empathy as mediated by Augmentative and Alternative Communication (AAC) technologies.
The techno politics of the climate movement [2020-2024]
PhD candidate: Patricia Reyes Benavides (p.d.reyes@utwente.nl)
Daily supervisor: Dr. Nolen Gertz (n.gertz@utwente.nl)
Co-supervisors: Prof. Dr. Peter-Paul Verbeek (p.p.c.c.verbeek@uva.nl) and Prof. Dr. Ingrid Robeyns (i.a.m.robeyns@uu.nl)
Project Description
My research analyses the use and appropriation of Internet platforms by environmental activists. By bridging philosophy of technology and political theory, I draw connections between Internet-enabled climate activism networks (e.g. Extinction Rebellion, Fridays for Future, Futuros Indígenas) and new technopolitical regimes. With this approach, I aim to show the political significance of technologies in the evolution of social and ecological movements.
Within the ESDiT consortium, I contribute to the research lines 'The Future of a Free and Fair Society' and 'The Human Condition'. I am also an active member of the Intercultural Ethics track, which underscores the relevance of philosophical traditions outside the Western canon for understanding and addressing global challenges.
Related Publications
Mind the Relationship: A Multi-Layered Ethical Framework for Citizen Science in Health Journal Article
In: Etica & Politica / Ethics & Politics, vol. XXV, iss. 2, pp. 171-196, 2023.
Socially Disruptive Technologies and Moral Certainty Book Chapter
In: Eriksen, Cecilie; Hermann, Julia; O'Hara, Neil; Pleasants, Nigel (Eds.): pp. 19-34, Routledge, 2022.
Introduction on Moral Certainty Book Chapter
In: Eriksen, Cecilie; Hermann, Julia; O'Hara, Neil; Pleasants, Nigel (Eds.): pp. 1-18, Routledge, 2022.
Technomoral Resilience as a Goal of Moral Education Journal Article
In: Ethical Theory and Moral Practice, 2022.
Related Events
ESDiT Research Day
The ESDiT Research Day is only accessible to ESDiT fellows.
Workshop “Technologies of Prospection”
This workshop is only accessible to ESDiT fellows.
Related News & Media
The Ethics of Artificial Wombs: an interview with Julia Hermann
Reimagining Digital Well-Being – Report for designers & policy makers
How Can Attention-Seeking Be Good? Seminar Recording Now Online
People Involved
Coordination team
Participants
Prof. dr. Joel Anderson
Dr. Dina Babushkina
Dr. Caroline Bollen
Dr. Gunter Bombaerts
Prof. dr. Jan Broersen
Kristy Claassen
Dr. Matthew Dennis
Dr. Anna Puzio
Dr. Lily Frank
Cindy Friedman
Dr. Nolen Gertz
Dr. Julia Hermann
Prof. dr. Wijnand IJsselsteijn
Dr. Naomi Jacobs
Prof. dr. Catholijn Jonker
Dr. Annemarie Kalis
Dr. Bart Kamphorst
Steven Kraaijeveld
Dr. Olya Kudina
Dr. Daniel Lakens
Dr. Sven Nyholm
Dr. Giulia Perugia
Prof. dr. Nicolas Ramsey
Dr. Filippo Santoni de Sio
Prof. dr. Floortje Scheepers
Prof. dr. Chris Snijders
Dr. Andreas Spahn
Prof. dr. Stefano Stramigioli
Ans Tummers-Heemels
Dr. Janna van Grunsven
Anna van Oosterzee
Dr. Birna van Riemsdijk
Prof. dr. Peter-Paul Verbeek
Dr. Christopher Wareham
Dr. Sander Werkhoven
Dr. Lucie White