Minding the Moral Gap in Human-Machine Interaction

Join us for a virtual conversation with Dr. Shannon Vallor on Minding the Moral Gap in Human-Machine Interaction.

By CARE-AI

Date and time

Wed, Feb 24, 2021 9:00 AM - 10:30 AM PST

Location

Online

About this event

Biography: Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute at the University of Edinburgh, where she is Director of the Centre for Technomoral Futures. She is also appointed to the Department of Philosophy. Her research addresses the ethical implications of emerging science and technology, especially AI, robotics, and new media, for human character and institutions. Professor Vallor received the 2015 World Technology Award in Ethics from the World Technology Network. She has served as President of the Society for Philosophy and Technology and as a consulting AI Ethicist at Google, and she currently chairs Scotland's Data Delivery Group. In addition to her many articles and published educational modules on the ethics of data, robotics, and artificial intelligence, she is the author of the book Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) and the forthcoming Lessons from the AI Mirror: Rebuilding Our Humanity in an Age of Machine Thinking.

Abstract: Given the enduring challenges of interpretability, explainability, fairness, safety, and reliability in machine learning systems, as well as the expanding legal and ethical constraints imposed on such systems by regulators and standards bodies, AI/ML systems deployed in high-stakes decision contexts will, for the foreseeable future, be required to operate under human oversight, often called 'meaningful human control.' Such oversight is increasingly demanded across a broad range of application areas, from medicine and banking to military uses. However, this reassuring phrase conceals grave difficulties. How can humans control or provide effective oversight of ML system operations or machine outputs that human supervisors do not deeply understand—an understanding often precluded by the very same factors (the speed, complexity, opacity, and non-verifiability of machine reasoning) that necessitate human supervision in the first place? This quandary exposes a gap in AI safety and ethics governance mechanisms that existing methods are unlikely to close. In this talk I explore two dimensions of this gap that are frequently underappreciated in research on AI safety, explainable AI, and 'human-friendly AI': the absence of a capacity for 'moral dialectic' between human and machine experts, and the absence of an affective dimension to machine reasoning.

Organized by

CARE-AI

Postponed