Research line 5

Artificial Intelligence

Objective

To apply the developing theory of value change to empirical cases in the realm of robot systems and artificial intelligence (project objective 4), and to study potential proactive strategies for better dealing with value change in this domain (project objective 5).

How to Deal with Value Change in AI?

In research line 5, we will investigate value change in robot systems. A more specific question is under what conditions it is morally acceptable to design sociotechnical systems that can autonomously (i.e. without human intervention) adapt to value change through artificial intelligence. This research line combines insights and approaches from robot ethics, which focuses on the value-sensitive design of computer systems and robotics, with machine ethics, an area that focuses on the design of artificial moral intelligence. This combination is innovative.

Artificial agents are computer and robot systems that are autonomous, interactive and adaptive. These properties make it possible, at least in principle, to design artificial agents that autonomously act on certain values and adapt them on the basis of interactions with their environment. James Moor distinguishes four ways in which artificial agents may be or become moral agents:

  1. Ethical impact agents are robots and computer systems that ethically impact their environment; this is probably true of (all) robots.
  2. Implicit ethical agents are robots and programs that have been programmed (by humans) to act according to certain values (for example by employing value sensitive design).
  3. Explicit ethical agents are machines that can represent ethical categories and that can reason (in machine language) about these.
  4. Full ethical agents additionally possess some characteristics we often consider crucial for human agency, such as consciousness, free will and intentionality.

It may never be possible to design robots that are full ethical agents (and even if it became possible, it would be questionable whether doing so is morally desirable). However, explicit ethical agents may be sufficient for building the capacity to adapt to new values into robot systems. Today there are hardly any successful examples of explicit ethical artificial agents. As Wallach and Allen point out, the main problem may not be to design artificial agents that can act autonomously (and that can adapt themselves in interaction with the environment), but rather to build enough, and the right kind of, ethical sensitivity into such machines.

Experiences with adaptable algorithms suggest that adaptability is not only an asset but also a potential risk, in particular if it is opaque how robot systems learn and adapt their own working, or if they do so in a way that seems undesirable, for example because the adaptation is based on too limited or biased ‘experiences’ [103]. This raises the question of whether it is morally desirable to build the ability to autonomously adapt to value change into sociotechnical systems through artificial intelligence.

This question will be approached in this research line by looking for possible meta-values that any acceptable artificial agent or robot system should meet. The idea is that meta-values are built into the artificial agent so that they are immutable, or can only be changed by humans, while other values can be autonomously adapted by the artificial agent itself. Possible candidates for such meta-values are transparency, accountability, ‘meaningful human control’ and reversibility. Such a set of meta-values would make it possible for humans to monitor when the artificial agent has changed its values (due to transparency), to understand why it did so (due to accountability), and to turn back this adaptation if necessary (due to reversibility and meaningful human control).
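
To make the proposal concrete, consider the following minimal sketch in Python. It is an illustration only, not part of the project: the class names, the representation of values as numeric weights, and the logging scheme are all hypothetical assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Protected meta-values that only humans may change (the candidate set from the text).
META_VALUES = frozenset({"transparency", "accountability",
                         "meaningful human control", "reversibility"})

@dataclass
class ValueChangeRecord:
    value: str
    old_weight: float
    new_weight: float
    reason: str  # accountability: why the agent adapted this value

@dataclass
class AdaptiveAgent:
    values: Dict[str, float]                       # adaptable value weights
    log: List[ValueChangeRecord] = field(default_factory=list)

    def adapt_value(self, value: str, new_weight: float, reason: str) -> None:
        """Autonomously adapt a value, unless it is a protected meta-value."""
        if value in META_VALUES:
            raise PermissionError(f"'{value}' is a meta-value; only humans may change it")
        # Transparency: every change is recorded before it takes effect.
        self.log.append(ValueChangeRecord(value, self.values.get(value, 0.0),
                                          new_weight, reason))
        self.values[value] = new_weight

    def revert_last_change(self) -> None:
        """Reversibility: a human operator can undo the most recent adaptation."""
        record = self.log.pop()
        self.values[record.value] = record.old_weight

# Example use: the agent adapts one of its ordinary values; a human reverts it.
agent = AdaptiveAgent(values={"privacy": 0.8, "energy_efficiency": 0.5})
agent.adapt_value("energy_efficiency", 0.7, reason="observed user preference")
agent.revert_last_change()
```

In this toy model, transparency and accountability are provided by the logged records (what changed and why), while reversibility and meaningful human control are provided by keeping the meta-values, and the revert operation, outside the agent's own reach.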

Two case studies will be carried out. The first will focus on self-driving cars and the use of artificial intelligence in transportation systems. These technological innovations may make the transportation system safer and more sustainable, but they have also led to ethical debates about how self-driving cars should be programmed to behave in case of an accident, and whether they should be programmed to make ethical choices in such cases. These debates will be interpreted in terms of the question to what extent the capacity to (autonomously) apply, weigh and adapt values should be programmed into self-driving cars, or whether these capacities should remain under human control, be it that of the designers, the users (drivers) or the operators of the system.

The second case study will focus on socially adaptive electronic partners (SAEPs). These are artificial agents or systems, such as smart home appliances, that support humans and into which certain values and norms are built. These values and norms are adaptive, so that a SAEP can adjust its behavior to the context. A crucial question in the development of SAEPs is who should be allowed to adapt the values on which their functioning is based, and whether, under certain circumstances or for certain values, the artificial agent itself should also be able to change its values; a small sketch of this permission question follows below.
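
Again purely as an illustration (the roles and the policy table are hypothetical assumptions, not drawn from the SAEP work itself), the permission question can be made explicit in a small Python sketch:

```python
from enum import Enum, auto

class Role(Enum):
    """Parties who might be allowed to change a SAEP's values (hypothetical)."""
    USER = auto()
    DESIGNER = auto()
    AGENT = auto()  # the socially adaptive electronic partner itself

# Hypothetical policy: which roles may adapt which value.
CHANGE_POLICY = {
    "privacy": {Role.USER, Role.DESIGNER},                    # never the agent itself
    "energy_efficiency": {Role.USER, Role.DESIGNER, Role.AGENT},
}

def may_change(role: Role, value: str) -> bool:
    """Return True if the given role is permitted to adapt the given value."""
    return role in CHANGE_POLICY.get(value, set())

assert may_change(Role.AGENT, "energy_efficiency")  # the SAEP may self-adapt this value
assert not may_change(Role.AGENT, "privacy")        # privacy stays under human control
```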

Researchers

Tom Coggins, M.Sc.

PhD Candidate

t.n.coggins@tudelft.nl

Dr. Aimee Robbins-van Wynsberghe

Assistant Professor

a.l.robbins-vanwynsberghe@tudelft.nl

Dr. Olya Kudina

Postdoctoral Researcher

olya_kudina@yahoo.com

Dr. Michael Klenk

Postdoctoral Researcher

M.B.O.T.Klenk@tudelft.nl

Related events

Workshop ‘Machines of Change: Robots, AI and Value Change’

Start date: February 1, 2022
End date: February 3, 2022
Location: Delft, the Netherlands
Artificial Intelligence | Project workshop

During the Machines of Change: Robots, AI and Value Change workshop, we will explore how the deployment of Artificial Intelligence and robots leads to value change and how we can study value change, as a phenomenon, via these technologies. The workshop will center on three themes:
1) How do AI and/or robotics contribute to value change?
2) How can we study value change via AI and/or robotics?
3) How should AI and/or robotics deal with value change?

Related Publications

2023

Coggins, Tom N.; Steinert, Steffen: The seven troubles with norm-compliant robots. Journal article. In: Ethics and Information Technology, vol. 25, no. 2, pp. 29, 2023, ISSN: 1572-8439.

2022

Coggins, Tom N.: More work for Roomba? Domestic robots, housework and the production of privacy. Journal article. In: Prometheus, vol. 38, no. 1, 2022.

2021

Umbrello, Steven; van de Poel, Ibo: Mapping value sensitive design onto AI for social good principles. Journal article. In: AI and Ethics, 2021, ISSN: 2730-5961.

2020

van de Poel, Ibo: Embedding Values in Artificial Intelligence (AI) Systems. Journal article. In: Minds and Machines, 2020.

Muishout, Chantal E.; Coggins, Tom N.; Schipper, Roel H.: More Than Meets the Eye? Robotisation and Normativity in the Dutch Construction Industry. Conference proceedings. Springer, 2020, ISBN: 978-3-030-49915-0. (Accepted author manuscript; Digital Concrete 2020, 2nd RILEM International Conference on Concrete and Digital Fabrication, 6–8 July 2020.)

Hayes, Paul; van de Poel, Ibo; Steen, Marc: Algorithms and Values in Justice and Security. Journal article. In: AI & Society: the journal of human-centered systems and machine intelligence, vol. 35, no. 3, pp. 533–555, 2020, ISSN: 0951-5666.

van de Poel, Ibo: Three philosophical perspectives on the relation between technology and society, and how they affect the current debate about artificial intelligence. Journal article. In: Human Affairs, vol. 30, no. 4, pp. 499, 2020, ISSN: 1210-3055.
