
TU Eindhoven
Posted on July 26, 2018

NWO Veni grant for 4TU.Ethics member Elizabeth O’Neill

Elizabeth O’Neill will receive a Veni grant, awarded by the Netherlands Organisation for Scientific Research (NWO) to outstanding researchers who have recently obtained their Ph.D.

Elizabeth’s project, entitled “The Artificial Ethicists”, investigates how we should respond to the advice of AI systems that can reason about ethics. This personal award will allow Elizabeth to conduct independent research and develop her ideas for a period of three years.


Some more information from the project abstract:

Recent advances in artificial intelligence (AI)—in particular, the increasing plausibility of artificial general intelligence (AGI)—raise fascinating possibilities for the future of ethical reasoning and decision-making. In particular, the prospect of AGIs that can reason about ethics supplies a new angle from which to consider foundational questions about human morality.

Imagine an AGI that could advise you on what to do—e.g., whether to become a pacifist; when to react to injustice; how to balance courage and caution. Drawing on machine learning and vast datasets on human actions, reactions, judgments, and theories, it has acquired, among other things, concepts of morality and obligation. It could tell you what you should do, given your values and principles, or what you should do, given the set of values and principles that it thinks you should have. I call this AI an artificial ethicist (AE). An AE would likely disagree with you sometimes—even about what your most fundamental values should be. In some cases, it might be able to explain why you should modify your values, but given the gulf between its abilities and yours, it is unlikely that you—or any human—could understand all its arguments.

This scenario raises a question: under what conditions, if any, should one defer to the moral expertise of an artificial ethicist? Drawing on analytic moral epistemology and recent work from AI researchers, Elizabeth will address this question in three steps:

  1. to investigate what features AEs would likely have and in what ways AEs would be different from humans offering moral testimony or expertise;
  2. to examine our options for evaluating the reliability of the AE’s judgments; and
  3. to assess whether there are epistemic or moral reasons for declining to defer to the expertise of an AE, even if one has good reasons to think the AE’s moral judgments are more reliable than one’s own.


For further information on the NWO Veni grants and how to apply, please see the NWO website.

