Elizabeth O’Neill

Evaluating whether AI systems are ethical

Elizabeth O’Neill is an assistant professor in the Department of Industrial Engineering and Innovation Sciences, where she is part of the Philosophy & Ethics Group. She is also involved in the work of EAISI, TU/e’s new institute for Artificial Intelligence. Artificial Intelligence plays a crucial role in Elizabeth’s research. A short introduction in five questions and answers.

What is your key research question?

My research is in philosophy; my current project looks at whether there are any conditions under which an individual should trust moral judgments from an advanced AI system. If an AI system told you that you ought to be vegetarian, for instance, what would that system need to be like for you to think its advice was worth considering?

This raises many traditional philosophical questions, such as what kind of evidence is relevant for answering moral questions, how one can evaluate processes that generate moral judgments, and under what conditions one should defer to the advice of another agent.

But the project is also closely connected to urgent practical problems. We already have not-so-advanced AI systems influencing life-changing human decisions, such as who should be hired, paroled, or receive government benefits. These systems pose parallel questions: how should they be evaluated, and under what conditions is it acceptable to defer to their judgments? In my view, many algorithmic decision systems have been rolled out without adequate evaluation and without sufficient attention to the conditions under which particular kinds of decisions should be handed off to AI systems.

What is the main challenge in your work?

When one attempts to create an AI system that meets some ethical requirement (for instance, a system that is fair, privacy-preserving, or trustworthy), one of the most important challenges is deciding how to interpret the relevant ethical concepts in the specific context at hand. Take fairness: the concept incorporates a multitude of human concerns, and it is interpreted differently across individuals and cultures. Computer scientists have operationalized the concept in dozens of ways so far. Figuring out which interpretation of fairness ought to be used in a given case is a crucial task.
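To make the point concrete, here is a minimal sketch (a hypothetical illustration with fabricated data, not an example from O’Neill’s research) in which two widely used operationalizations of fairness, demographic parity and equal opportunity, deliver opposite verdicts on the same set of decisions:

```python
# A hypothetical illustration: two standard fairness metrics applied to the
# same toy decisions for two groups, "a" and "b". All data is fabricated.

def demographic_parity_gap(y_pred, group):
    """Difference in positive-decision rates between groups "a" and "b"."""
    rate = lambda g: sum(p for p, grp in zip(y_pred, group) if grp == g) / group.count(g)
    return abs(rate("a") - rate("b"))

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates between groups "a" and "b"."""
    def tpr(g):
        pos = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(pos) / len(pos)
    return abs(tpr("a") - tpr("b"))

y_true = [1, 1, 0, 0, 1, 1, 1, 0]   # actual outcomes
y_pred = [1, 1, 0, 0, 1, 1, 1, 1]   # model decisions
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Demographic parity is violated: group "a" receives positive decisions at
# rate 0.5, group "b" at rate 1.0, so the gap is 0.5.
print(demographic_parity_gap(y_pred, group))          # 0.5

# Equal opportunity is satisfied: both groups' true-positive rate is 1.0.
print(equal_opportunity_gap(y_true, y_pred, group))   # 0.0
```

Whether this toy model counts as fair thus depends entirely on which operationalization one adopts, which is exactly the interpretive question at issue here.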

To create an AI system that not only fulfills ethical requirements like being fair but that can reason using ethical concepts, we face the same sort of challenge — which interpretations of which ethical concepts should the AI system possess? I am optimistic that philosophy — in combination with a deep empirical understanding of human values and contexts — can help us make headway on this kind of question.

What are the practical applications of your research? How does it benefit society?

The approach I’m taking might initially make my project seem distant from practical problems, but the project is motivated by practical concerns, and it will have practical payoffs. My approach is to treat the futuristic technology of artificial general intelligence (understood as an AI system that can act autonomously across diverse contexts and modify itself in pursuit of its goals) as a thought experiment.

This involves thinking about how such a technology could conceivably learn moral concepts and reason about moral questions, and how we could evaluate its judgments, especially when we don’t fully understand how it works and when we sometimes disagree about whether its judgments are right. I believe that using artificial general intelligence as a thought experiment can help us make progress on fundamental philosophical questions and better understand human moral psychology. In addition, it points us to general strategies for evaluating moral judgments and figuring out when to defer to others’ judgments — strategies that we can apply to AI systems that are already making or influencing important decisions.

How do you see the development of AI in the future?

Presumably, in the best-case AI scenario, we make great strides in research, acquiring a better understanding of the world and greater ability to achieve goals. One major risk even in this scenario comes from obtaining, processing, and using the large quantities of data that advanced AI systems require. When formerly innocuous pieces of information can be used to infer whether an individual is more likely than others to be a bad parent, develop a debilitating disease, quit a job early, etc., there is unprecedented potential for discrimination and exploitation.

To a large extent, how AI develops will depend on what kinds of problems — especially whose problems — AI is used to address; how careful we are when conceptualizing those problems and when training systems to perform tasks; and how much is done to systematically anticipate and track the consequences of these technologies and to address problems that they produce.

Why should any AI researcher want to work at TU/e?

EAISI’s leadership has committed to putting the ethical aspects of AI at the center of the institute’s research. To create AI systems that actually align with human values — in fact, to create AI systems that are intelligent in any sense that matters — this orientation is essential. Success on this front requires research teams with diverse expertise and experiences.

Fortunately, even though it is a technical university, TU/e also has many researchers working in the social sciences and humanities. Within Philosophy & Ethics, for instance, we have a great group of philosophers whose work relates to AI, including Vincent Müller, Sven Nyholm, and Lily Frank, among others. So, for AI researchers who are motivated to create AI systems that are reliably safe and beneficial, that protect values such as fairness and privacy, that promote social good, and so on, TU/e may be especially interesting.

Elizabeth O’Neill is an assistant professor of Philosophy & Ethics at the department of Industrial Engineering and Innovation Sciences.

She describes herself as “driven by very basic questions about what we ought to do.”

She is a recipient of a VENI (2018) personal research grant from the Dutch Research Council (NWO), a former research fellow at the Digital Life Initiative at Cornell Tech, and co-editor of a book about experimental philosophy.

You can check out Elizabeth’s scientific profile here. For more info, visit her personal website.


Help us make AI ethical

Are you interested in the work of EAISI? Would you like to join Elizabeth in her work on ethical AI, either as a student or an academic? Check out what TU Eindhoven has to offer.