Embedding Values in Artificial Intelligence (AI) Systems

Ibo van de Poel: Embedding Values in Artificial Intelligence (AI) Systems. In: Minds and Machines, 2020.

Abstract

Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. This account understands embodied values as the result of design activities intended to embed those values in such systems. AI systems are here understood as a special kind of sociotechnical system: like traditional sociotechnical systems, they are composed of technical artifacts, human agents, and institutions, but in addition they contain artificial agents and certain technical norms that regulate interactions between artificial agents and other elements of the system. The specific challenges and opportunities of embedding values in AI systems are discussed, and some lessons for better embedding values in AI systems are drawn.

BibTeX

@article{vandePoel2020,
title = {Embedding Values in Artificial Intelligence (AI) Systems},
author = {Ibo van de Poel},
url = {https://link.springer.com/article/10.1007/s11023-020-09537-4},
doi = {10.1007/s11023-020-09537-4},
year = {2020},
date = {2020-09-01},
urldate = {2020-09-01},
journal = {Minds and Machines},
abstract = {Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody certain values. This account understands embodied values as the result of design activities intended to embed those values in such systems. AI systems are here understood as a special kind of sociotechnical system: like traditional sociotechnical systems, they are composed of technical artifacts, human agents, and institutions, but in addition they contain artificial agents and certain technical norms that regulate interactions between artificial agents and other elements of the system. The specific challenges and opportunities of embedding values in AI systems are discussed, and some lessons for better embedding values in AI systems are drawn.},
keywords = {artificial agent, artificial intelligence, ethics, institution, multi-agent system, norms, sociotechnical system, value embedding, values},
pubstate = {published},
tppubtype = {article}
}