How to incorporate changing values in new technology?

The world is ever-changing, and so are human values. Social media illustrates why we should take value change seriously. Initially, platforms like Twitter and Facebook were seen as ways to promote the value of freedom of speech. Nowadays, due to the rise of fake news and political polarization on these platforms, ‘truth’ and ‘nonmaleficence’ have emerged as new values guiding decisions about which messages to allow. How can designers of new technology proactively account for value change in their designs? A team of TU Delft researchers from the 4TU.Ethics Center has written a white paper and an IEEE commentary publication with practical examples to help designers tackle this issue. This output is based on the outcomes of the ERC-funded research project ‘Design for Value Change’.

 

Technologies and value change 

Generative AI is now taking flight at such speed that there is hardly any time to think about incorporating values, let alone changing values, because companies want to stay ahead of the competition. ‘A worrying development’, says Ibo van de Poel, professor in Ethics and Technology and one of the initiators of the white paper. ‘To avoid unintended negative consequences of new technology such as generative AI, it is imperative to keep societal and ethical values in mind when designing it. For example, with ChatGPT it is not transparent how texts are generated. Moreover, the system may be unreliable, for example because it makes up non-existent scientific references. We have also seen examples in which generative AI may have harmful effects, as in the case of chatbots that encourage people to commit suicide.’[1]

Fortunately, new technologies are increasingly designed with values such as justice, privacy, sustainability and welfare in mind. ‘However, existing approaches for embedding values in design pay insufficient attention to value change, while real-world examples show that values are not static and change over time. Think of the Internet, which was designed to store information forever; nowadays people would rather be forgotten, especially when it comes to sensitive information that can impact their lives, but the system is not geared up for that.’ Accounting for value change is still largely unexplored territory for designers and engineers. ‘This white paper intends to provide guidance and introduces the issue of value change to engineers and designers’, explains Van de Poel.

 

Five types of value change

The researchers have identified five types of value change concerning technology. First, new values can emerge over time because of technological or social developments, or a combination of the two. An example is ‘sustainability’, which emerged as a new value in the past century and is now increasingly embraced by technology developers. Second, the relevance of values for a specific technology can change. Think of wind turbines: their continued use has provided new information about their impact on (mental) health. Previously this value was not considered relevant for wind turbine design, but it is now. Third, besides the emergence of new values and irrelevant values becoming relevant, values can change in priority. In the past, driver and passenger safety was key in car design; now the safety of other road users has become (relatively) more important than it used to be. Fourth, the interpretation of a value can change. Values are often expressed as abstract ideals or principles that people need to interpret to make them meaningful. In the past, privacy was understood as having to do with one’s own space; now it is mainly understood as being about one’s personal data.

Finally, changes in value specification are another kind of value change relevant to the design of technology. For example, the EU changed its animal welfare legislation and outlawed battery cages. The value of animal welfare thus needs to be translated into design requirements for housing animals that meet these new regulatory requirements for more living and movement space.

 

How to account for value change?

These examples clearly show that values change over time. The researchers suggest three ways in which value change can be accounted for in the design of new technology. A first possibility is improving the anticipation of possible future value change by developing scenarios and, for example, using simulation tools. Think of the value changes that may be induced by the introduction of self-driving cars. In one scenario, car drivers are increasingly assisted by technology, but cars remain under the control of the individual driver and are individually owned. In such a scenario, responsibility for car accidents may become a more important value to account for in the design, but other value changes may be limited. A second scenario is the introduction of fully self-driving cars that are publicly owned and offered as a form of door-to-door public transport. In that scenario, the values of individual freedom and control may get less emphasis, while values like accessibility and transport justice become more important. Finally, one may think of a scenario in which cars remain individually owned but some form of central traffic control is introduced, which allows values like safety, sustainability and accessibility to be addressed.

A second possibility to address value change in design is to expand design-for-values approaches to the full life cycle of new technologies, through experimentation and monitoring when a new technology is introduced in society. Take, for example, ride-sharing apps like Uber. These need to be designed for values like privacy and transparency. However, it was only during the use of these apps that they were found to unexpectedly contribute to violence against women, with cases of sexual assault, rape and even murder. It took a while after this became clear before the corresponding value (‘safety for women’) was addressed through additional measures and redesign, such as better screening of drivers, the ability to share rides with trusted contacts, and an emergency button. As this example suggests, it is important to discover values that are unexpectedly affected by a design as soon as possible, ideally in pilot testing before a technology is introduced on a large scale in society.

A third approach is to apply design strategies that make it easier to deal with future value change, for example by designing adaptable technology or designing for flexibility in use. One may think of buildings that are designed in a modular fashion so that they can later be reconfigured to meet new values that were not initially foreseen. Another possibility is to design technologies so that they are robust under different value priorities or understandings of a value. For example, one may design a system for optimal performance, because performance is currently considered the most important value, but one may also accept somewhat lower performance in order to design for values like privacy and sustainability, as these may well become more important in the future.

 

Last but not least…

‘Of course, it would be impossible for designers and engineers to engage with all three methods in each and every design process. Also, how extensively value change needs to be considered may depend on the design process and the kind of technology being developed. This is why we propose collaborating with experts on these methods’, says Van de Poel. Relevant expertise and experts can, for example, be found through the 4TU.Ethics Center and the Delft Design for Values Institute.

 

We hope to motivate designers to proactively consider the possibility of value change in their projects and to better anticipate the values of our future society.

 

 

Note 1: The work on this blog, the white paper, and the commentary publication is part of the project ValueChange, which has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation Program under Grant Agreement 788321.

Note 2: This blog was initially published in Dutch at TU Delft news and is republished here with the permission of Ibo van de Poel.

 

[1] https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

See also: https://www.artificialintelligence-news.com/2020/10/28/medical-chatbot-openai-gpt3-patient-kill-themselves/

 

About the author

Ibo van de Poel is Antoni van Leeuwenhoek Professor in Ethics and Technology at the Faculty of Technology, Policy and Management at Delft University of Technology. He did a master’s in Philosophy of Science, Technology and Society (with a propaedeutic exam in Mechanical Engineering) and obtained a PhD in Science and Technology Studies from the University of Twente before coming to Delft in 1998. He is a member of the Royal Netherlands Academy of Arts and Sciences (KNAW).

Tanja Emonts is a communication advisor at TU Delft.
