research

I am affiliated with “Sciences, Normes, Démocratie”, a joint research laboratory of Sorbonne Université and the Centre National de la Recherche Scientifique, under the supervision of Anouk Barberousse.

I conduct my philosophical research on an empirical basis, following a methodological approach that closely studies a particular branch of the scientific field of Artificial Intelligence: Natural Language Processing. The branches of philosophy I am investigating are descriptive ethics and applied ethics, which allow me to ask research questions such as: what ethical approach can we adopt for Natural Language Processing? What contribution can empirical research make to normative ethics? What are the advantages and disadvantages of ethical charters in this field? How can interdisciplinary research between Machine Learning engineering and philosophy be conducted?

My empirical research takes the following form: thanks to the work conducted within the company Les petits bots, I had the opportunity to set up and run a field experiment. From January 2021 to January 2022, I followed the development, deployment, and monitoring of the first chatbot serving a population of more than 70,000 inhabitants in a French region. Its goal is to give citizens the information they are looking for and help them easily find the relevant public services, starting from an everyday problem. For example, when a user says, “I can’t afford to pay the rent of my apartment anymore,” the conversational agent is trained to respond with proposals for social housing and the detailed procedure for applying for it.
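
To make the pattern concrete, here is a minimal sketch of the kind of intent-to-response mapping described above. It is not the actual Les petits bots implementation: the intent labels, keywords, and responses are illustrative assumptions, and a rule-based classifier stands in for whatever model the real system uses.

```python
# Illustrative sketch of an intent-to-response mapping for a public-service
# chatbot. Intent labels, keywords, and responses are invented for the example.

INTENT_KEYWORDS = {
    "housing_aid": ["rent", "loyer", "apartment", "housing"],
    "school_enrolment": ["school", "enrol", "inscription scolaire"],
}

RESPONSES = {
    "housing_aid": (
        "You may be eligible for social housing. "
        "Here is the detailed application procedure: ..."
    ),
    "school_enrolment": "Here is how to enrol your child in a local school: ...",
    "fallback": "I did not understand. Could you rephrase your request?",
}


def classify_intent(utterance: str) -> str:
    """Map a citizen's everyday problem to a public-service intent."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent
    return "fallback"


def respond(utterance: str) -> str:
    """Return the canned response associated with the detected intent."""
    return RESPONSES[classify_intent(utterance)]


if __name__ == "__main__":
    print(respond("I can't afford to pay the rent of my apartment anymore"))
```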

Another pillar of my empirical research is my experience in writing ethical charters: on the one hand, the charter for the company Les petits bots, and on the other, the one written for the open science project BigScience. Two different outcomes of two different experiences allowed for a comparative study of ethical charters, one adopted in a business ethics environment, the other for more open, research-oriented purposes. The experience with BigScience was also an opportunity to field-test interdisciplinary research with a shared goal: to establish core values that would serve as a pivot for the articulation of more specific documents. The resulting moral exercise was of fundamental importance to philosophical research in this field.

Last but not least, my experiences with ethical charters have drawn me increasingly toward value theory, specifically value pluralism. I am fascinated by issues involving conflicts between values, between their definitions and their applications. This has led me to research extensively how value systems can be integrated within Artificial Intelligence systems, especially those dealing with human languages, i.e. Natural Language Processing systems. Can moral values be embedded in language models? Is this embedding done implicitly or explicitly, intentionally or unintentionally? If it is implicit and unintentional, how can we monitor and verify whether a single dominant value system is incorporated and subsequently reproduced during the use of the AI system? These research questions push me to look for the potential conflicts that may exist between values, their hierarchies, and their contextualization across populations with different value systems.
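
One way the implicit embedding of values can be made visible in practice is by probing a masked language model and inspecting what it predicts in value-laden contexts. The sketch below uses the Hugging Face transformers library; the model choice (bert-base-uncased) and the prompts are assumptions made for illustration, not the protocol of my own study.

```python
# Illustrative probe of implicit value associations in a masked language model.
# Model and prompts are example choices, not an established evaluation protocol.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "People who cannot pay their rent are [MASK].",
    "Receiving social housing is [MASK].",
]

for prompt in prompts:
    print(prompt)
    for prediction in unmasker(prompt, top_k=5):
        # token_str and score expose the model's implicit associations,
        # which can then be examined for the value judgements they carry.
        print(f"  {prediction['token_str']}: {prediction['score']:.3f}")
```

Such probes do not settle whether a value system is embedded intentionally, but they give an empirical handle on which associations a model reproduces, which is precisely where the philosophical questions above begin.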