Objective
How to Deal with Value Change in AI?
In research line 5, we will investigate value change in robot systems. A more specific question is under what conditions it is morally acceptable to design sociotechnical systems that can autonomously (i.e. without human intervention) adapt to value change through artificial intelligence. This research line combines insights and approaches from robot ethics, which focuses on the value-sensitive design of computer systems and robotics, with machine ethics, an area that focuses on the design of artificial moral intelligence. This combination is innovative.
Artificial agents are computer and robot systems that are autonomous, interactive and adaptive. This makes it possible, at least in principle, to design artificial agents that autonomously act on certain values and can adapt them on the basis of interactions with the environment. James Moor distinguishes four ways in which artificial agents may be or become moral agents:
- Ethical impact agents are robots and computer systems that ethically impact their environment; this is probably true of (all) robots.
- Implicit ethical agents are robots and programs that have been programmed (by humans) to act according to certain values (for example by employing value sensitive design).
- Explicit ethical agents are machines that can represent ethical categories and that can reason (in machine language) about these.
- Full ethical agents in addition also possess some characteristics we often consider crucial for human agency, like consciousness, free will and intentionality.
It may well never be possible to design robots that are full ethical agents (and even if it became possible, it would be questionable whether doing so is morally desirable). However, explicit ethical agents may be good enough to build the capacity to adapt to new values into robot systems. Today there are hardly any successful examples of explicit ethical artificial agents. As Wallach and Allen point out, the main problem may not be to design artificial agents that can act autonomously (and that can adapt themselves in interaction with the environment), but rather to build enough, and the right kind of, ethical sensitivity into such machines.
Experiences with adaptable algorithms suggest that adaptability is not only an asset but also a potential risk, in particular if it is opaque how robot systems learn and adapt their own functioning, or if they do so in a way that seems undesirable, for example because it is done on the basis of too limited or biased ‘experiences’ [103]. This raises the question of whether it is morally desirable to build the ability to autonomously adapt to value change into sociotechnical systems through artificial intelligence.
This question will be approached in this research line by looking for possible meta-values that any acceptable artificial agent or robot system should meet. The idea is that meta-values are built into the artificial agent so that they are immutable or can only be changed by humans, while other values can be autonomously adapted by the artificial agent itself. Possible candidates for such meta-values are transparency, accountability, ‘meaningful human control’ and reversibility. Such a set of meta-values would make it possible for humans to monitor when the artificial agent has changed its values (due to transparency), to understand why it did so (due to accountability), and to turn back this adaptation if necessary (due to reversibility and meaningful human control).
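To make the meta-value idea more tangible, the following minimal sketch shows one way such a constraint could be realised in code. All names here (ValueStore, META_VALUES, propose_change) are hypothetical illustrations, not part of any existing system, and the design is an assumption of ours, not a result of the research line.

```python
from dataclasses import dataclass, field

# Hypothetical set of immutable (human-only) meta-values.
META_VALUES = {"transparency", "accountability",
               "meaningful human control", "reversibility"}

@dataclass
class ValueStore:
    values: dict[str, float]                          # value name -> weight
    log: list[tuple[str, float, float, str]] = field(default_factory=list)

    def propose_change(self, name: str, new_weight: float,
                       reason: str, by_human: bool = False) -> bool:
        """Apply a value change while enforcing the meta-value constraints."""
        if name in META_VALUES and not by_human:
            return False  # meta-values are human-only (meaningful human control)
        old = self.values[name]
        self.values[name] = new_weight
        # Record every change, so humans can see that it happened
        # (transparency) and why (accountability).
        self.log.append((name, old, new_weight, reason))
        return True

    def revert_last(self) -> None:
        """Undo the most recent change (reversibility)."""
        name, old, _, _ = self.log.pop()
        self.values[name] = old
```

On this sketch, the agent can re-weigh an ordinary value such as energy efficiency in response to its ‘experiences’, e.g. store.propose_change("energy_efficiency", 0.4, "user feedback"), but an attempt to change a meta-value without by_human=True is simply refused, and revert_last lets a human roll any adaptation back.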
Two case studies will be carried out. The first will focus on self-driving cars and the use of artificial intelligence in transportation systems. These technological innovations may make the transportation system safer and more sustainable, but they have also led to ethical debates about how self-driving cars should be programmed to behave in case of an accident, and whether they should be programmed to make ethical choices in such cases. These debates will be interpreted in terms of the question to what extent the capacity to (autonomously) apply, weigh and adapt values should be programmed into self-driving cars, or whether these capacities should remain under human control, be it that of the designers, the users (drivers) or the operators of the system.
The second case study will focus on socially adaptive electronic partners (SAEP). These are artificial agents or systems, such as smart home appliances that support humans, into which certain values and norms are built. These values and norms are adaptive, so that a SAEP can adjust its behavior to the context. A crucial question in the development of SAEP is who should be allowed to adapt the values on which their functioning is based, and whether, under certain circumstances or for certain values, the artificial agent itself should also be able to change its values; a sketch of such a per-value adaptation policy is given below.
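The sketch below (with hypothetical names such as ADAPTATION_POLICY and may_adapt, and an illustrative assignment of values to parties) shows one way to encode, per value, who is permitted to adapt it; which values belong in which category is precisely what the case study is meant to investigate.

```python
from enum import Enum, auto

class Adapter(Enum):
    AGENT = auto()     # the SAEP itself
    USER = auto()      # the person the SAEP supports
    DESIGNER = auto()  # the party who designed the system

# Hypothetical policy table: which parties may adapt which value.
# The assignments are illustrative assumptions, not claims about how
# any actual SAEP is, or should be, configured.
ADAPTATION_POLICY = {
    "privacy":       {Adapter.USER, Adapter.DESIGNER},  # humans only
    "energy_saving": {Adapter.AGENT, Adapter.USER},     # may self-adapt
    "safety":        {Adapter.DESIGNER},                # fixed by design
}

def may_adapt(value: str, who: Adapter) -> bool:
    """Return True if the given party is allowed to adapt the given value."""
    return who in ADAPTATION_POLICY.get(value, set())

# A smart-home SAEP may relax energy saving when its user hosts guests
# (a contextual adjustment), but it may not weaken privacy on its own.
assert may_adapt("energy_saving", Adapter.AGENT)
assert not may_adapt("privacy", Adapter.AGENT)
```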
Researchers

Tom Coggins, M.Sc.
PhD Candidate
Related events
Workshop ‘Machines of Change: Robots, AI and Value Change’

During the Machines of Change: Robots, AI and Value Change workshop, we will explore how the deployment of Artificial Intelligence and robots leads to value change and how we can study value change, as a phenomenon, via these technologies. The workshop will center on three themes:
1) How do AI and/or robotics contribute to value change?
2) How can we study value change via AI and/or robotics?
3) How should AI and/or robotics deal with value change?
Related Publications
2023
The seven troubles with norm-compliant robots Journal Article
In: Ethics and Information Technology, vol. 25, no. 2, pp. 29, 2023, ISSN: 1572-8439.
2022
More work for Roomba? Domestic robots, housework and the production of privacy Journal Article
In: Prometheus, vol. 38, no. 1, 2022.
2021
Mapping value sensitive design onto AI for social good principles Journal Article
In: AI and Ethics, 2021, ISSN: 2730-5961.
2020
Embedding Values in Artificial Intelligence (AI) Systems Journal Article
In: Minds and Machines, 2020.
More Than Meets the Eye? Robotisation and Normativity in the Dutch Construction Industry Proceeding
Springer, 2020, ISBN: 978-3-030-49915-0, (Accepted Author Manuscript; Digital Concrete 2020 - 2nd RILEM International Conference on Concrete and Digital Fabrication; conference date: 06-07-2020 through 08-07-2020).
Algorithms and Values in Justice and Security Journal Article
In: AI&Society: the journal of human-centered systems and machine intelligence, vol. 35, no. 3, pp. 533–555, 2020, ISSN: 0951-5666.
Designing for Changing Values Journal Article
In: Human Affairs, vol. 30, no. 4, pp. 499, 2020, ISSN: 1210-3055.