2023

Löhr, Guido
If conceptual engineering is a new method in the ethics of AI, what method is it exactly? Journal Article
In: AI and Ethics, 2023.
Tags: AI Ethics, Artificial intelligence, Conceptual engineering, Conceptual ethics, Pragmatism, Representationalism
@article{nokey,
title = {If conceptual engineering is a new method in the ethics of AI, what method is it exactly?},
author = {Guido Löhr},
doi = {10.1007/s43681-023-00295-4},
year = {2023},
date = {2023-05-16},
urldate = {2023-05-16},
journal = {AI and Ethics},
abstract = {Can a machine be a person? Can a robot think, be our friend or colleague? These familiar questions in the ethics of AI have recently become much more urgent than many philosophers anticipated. However, they also seem as intractable as ever. For this reason, several philosophers of AI have recently turned their attention to an arguably new method: conceptual engineering. The idea is to stop searching for the real essence of friendship or our ordinary concept of the person. Instead, ethicists of AI should engineer concepts of friend or person we should apply. But what exactly is this method? There is currently no consensus on what the target object of conceptual engineers is or should be. In this paper, I reject a number of popular options and then argue for a pragmatist way of thinking about the target object of conceptual engineering in the ethics of AI. I conclude that in this pragmatist picture, conceptual engineering is probably what we have been doing all along. So, is it all just hype? No, the idea that the ethics of AI has been dominated by conceptual engineers all along constitutes an important meta-philosophical insight. We can build on this insight to develop a more rigorous and thorough methodology in the ethics of AI.},
keywords = {AI Ethics, Artificial intelligence, Conceptual engineering, Conceptual ethics, Pragmatism, Representationalism},
pubstate = {published},
tppubtype = {article}
}
2022
Jorem, Sigurd; Löhr, Guido
Inferentialist conceptual engineering Journal Article
In: Inquiry, 2022.
Tags: Conceptual engineering, Conceptual ethics, Inferential role semantics, Inferentialism, Representationalism
@article{nokey,
title = {Inferentialist conceptual engineering},
author = {Sigurd Jorem and Guido Löhr},
doi = {10.1080/0020174X.2022.2062045},
year = {2022},
date = {2022-01-27},
journal = {Inquiry},
keywords = {Conceptual engineering, Conceptual ethics, Inferential role semantics, Inferentialism, Representationalism},
pubstate = {published},
tppubtype = {article}
}