
4TU.Ethics bi-annual Conference Programme

Accepted abstracts

Abstracts for complete symposia, discussion papers, and poster presentations have now been peer-reviewed.

  • Symposia: A symposium is a 75-minute session around one theme, with multiple short presentations and a general discussion.
  • Panel discussions: A panel discussion is a 75-minute session in which two separate discussion papers are discussed in depth. The papers will be available to the audience to read in advance, and the authors commit to submitting their papers (4000 words maximum) by 15 September at the latest. Authors do not present their papers; instead, they have ample opportunity to respond to a critique by a designated commentator.
  • Poster presentations: Posters will be publicly available online during the conference. Authors will be given timeslots in which they give a short online presentation (5 minutes) next to their poster, after which the audience can ask questions.


Symposia

It’s out of the box!

Shaked Spier, Rosalie Waelen, Leon Rossmaier, Isaac Oluoch, Mayli Mertens, Sage Cammers-Goodwin, Patricia Reyes, Jana Misic, Margoth González Woge (UT) and Mandi Astola (TU/e)

Novel technologies, including speech analytics, smartphones, and social media, enable as well as manipulate current forms of communication. This duality of enabling and manipulating receives a lot of attention in research agendas, but not in the communication of research itself. While Twitter nudges for conciseness and LinkedIn promotes creating a digital research life, the focus of this symposium is to explore how such novel communication forms are (re)shaping the sharing of academic ideas.

To the so-called “digital natives”, the boundaries between the way we communicate in “real life” and through novel ICTs are becoming less and less clear. In contrast to the baby-boomer generation and Generation X, (digital) technologies do not change how digital natives interact with the world, but rather construct it from the very beginning of their socialization. Being accustomed to these modern means of communication, young scholars can bring new, out-of-the-box approaches to the philosophy and ethics of technology. To express these out-of-the-box conceptions of technology, we need to break the boundaries of the language that we use to talk about technology. Or, in the words of Marshall McLuhan: “the medium is the message”.

We propose a symposium in which young scholars are given the opportunity to present the aspects of their research that they consider most out of the box. We will do so in communication forms that do not adhere to the discursive boundaries and power relations present in contemporary academic practice. Following the symposium’s theme of breaking disciplinary and discursive boundaries, as well as the digital natives’ taught talent and liking for compact messages, each presentation focuses on sharing concise rather than elaborate messages. We therefore suggest giving ten instead of five short presentations. Each participant will give a 5-minute presentation (or ‘pitch’) that is out of the box in both content and style, leaving approximately 25 minutes for a discussion with the audience. The discussion will focus on the ways in which digital-native scholars (can) change research approaches and communication in the philosophy and ethics of technology, for better or for worse.

Technology and value change

Ibo van de Poel, Steffen Steinert, Michael Klenk and Anna Melnyk (TUD)

Speakers:

  • Helen Nissenbaum (Cornell Tech)
  • Tsjalling Swierstra (Maastricht University)
  • Ibo van de Poel (TU Delft)
  • Anna Melnyk (TU Delft)

The aim of the symposium is to present and discuss research on the interrelations of values and technology. Specifically, we aim to explore how novel technological developments lead to changes in moral values and, conversely, how changing moral values affect the development of new technologies.

Some of the main questions to be discussed during the symposium are:

  • What is value change? How is it different from, or similar to, related phenomena such as moral progress, moral revolutions and techno-moral change?
  • How can technology contribute to changes in (moral) values? What are typical examples and cases of such changes? What mechanisms are relevant for these changes? What other sources of value change are relevant? What is the interrelation between value change and technological change?
  • What are the implications of value change for approaches that aim to proactively address values in design, like value-sensitive design and responsible innovation?
  • How can we anticipate value change? How can we use, for example, agent-based models to better understand the dynamics of value change and to anticipate value changes?

This symposium is organized as part of the research program Design for Changing Values (www.valuechange.eu), funded by the European Research Council (ERC).

Citizen-centered approaches for personalized and preventive medicine

Erik Laes (TU/e) and Nathalie Lambrechts (VITO)

Recent scientific and technological innovations in medicine and preventive health, boosted by the digitalization of society, have stimulated the growth of personalized and preventive healthcare (PPH) initiatives, whose aim is to detect, diagnose and mitigate health risk factors in individuals to prevent disease from appearing or worsening. The stratification of disease risk profiles based on individual data collected in an easy, reliable, reproducible, and affordable manner at a large scale is a crucial enabling factor for PPH.

Collecting vast amounts of individual health-related data from truly diverse sources creates research and innovation opportunities, but at the same time implies that those data should be handled with extreme care. Governments and civil society are increasingly becoming aware of the risks of misuse of personal health data (PHD), especially in view of their huge economic value. In this context, timely action towards establishing socially acceptable, legally effective, and morally justified solutions to enable the sharing and reuse of data for biomedical research and healthcare is essential. The symposium brings together five speakers from academia and practice in the Netherlands and Flanders, who will present their views on how the ethical challenges raised by PPH can be tackled, focusing in particular on citizen empowerment and responsible data-sharing practices.

  • Nathalie Lambrechts (VITO, WeAre partnership, Belgium) will present BIBOPP: a proof of concept of an innovation ecosystem for personalized and preventive health in Flanders.
  • Heidi Mertes (University of Gent, Metamedica, Belgium) discusses the challenges facing a data ownership model in terms of remaining ethical concerns.
  • Gaston Remmers (Holland Health Data Cooperative, the Netherlands) will present the cooperative model as a countervailing power to support individual citizens in decision-making on data sharing.
  • Tinne Vandesande (King Baudouin Foundation, Belgium) will present eight principles for caring technology to guide action in a complex healthcare transition. Together, they form a framework to guide the development and use of technological innovations aimed at improving people’s health, well-being and quality of life in their daily lives.
  • Petra Verhoef (Rathenau Institute, the Netherlands) discusses a recent publication of the Rathenau Institute on showing solidarity with existing health data.

Connecting human flourishing to sustainable values: prospects for the ecological use of behaviour change technologies

Lily Frank, Andreas Spahn, Minha Lee, Matthew Dennis (TU/e) and Ben Hofbauer (TUD)

Discussion of behaviour change technologies (BCTs) does not typically focus on their potential to tackle long-term ecological challenges. This is both puzzling and a missed opportunity. Scientific research consistently identifies mass human behaviour as the leading cause of environmental degradation and future ecological challenges. From over-fishing to species extinction, from the rise of ozone-depleting chemicals to the scarcity of natural resources, many common human behaviours are strikingly at odds with a sustainable future.

Part of the problem is that it is easy to be ignorant about the impact of our collective lifestyles on the natural world. While we all know that taking a flight or eating meat increases greenhouse emissions, it is difficult to envision this concretely, or to convert abstract data into meaningful metrics. This is one way that BCTs stand to improve everyday decision-making. BCTs are ideally suited to allowing users to visualise the effect of their behaviours, both on their own future well-being and on that of the planet.

Empirical research confirms that when consumers know the number of square kilometres of forest required to neutralise the CO2 emitted from a flight, say, or how much water it takes to produce a steak, they are less likely to fly or eat meat. Many BCTs use a similar logic. Step-counters, for example, encourage walking by recording users’ progress towards their daily goal. These kinds of visible metrics motivate us, increasing behaviours that improve individual well-being. BCTs could promote ecological goals in the same way.

Furthermore, BCTs can show how ecological values and values associated with human well-being are closely entwined. This introduces a potential crossover in the aims of BCTs that aim to foster human well-being and those that aim to encourage behaviours that promote ecological values. If BCTs can help us see how these values are aligned, then they offer a way to simultaneously improve our own lives and the health of our planet.

We will discuss themes relating to:

  1. The perils and possibilities of using BCTs to tackle ecological challenges.
  2. Who should we hold accountable for sustainability? On the limits of BCTs.
  3. Do BCTs allow individuals to abdicate their collective ecological responsibilities?
  4. The dangers of false consciousness and virtue signalling in BCTs for sustainability.
  5. The problems of user demotivation or apathy in using BCTs to tackle macro ecological challenges.
  6. Who are the best actors to promote sustainability? (individuals, institutions, communities, movements).
  7. How to use BCTs to catalyse collective ecological action?

Biosafety by design: mere control or room for serendipity?

Britte Bouchaut, Lotte Asveld (TUD) and Laurens Landeweerd (Radboud University)

Research and development in the field of biotechnology has brought knowledge about controlling and manipulating nature and natural systems. The field of synthetic biology in particular looks promising for health, food/feed, materials, energy, and waste management. At the same time, a responsible approach to such research and innovation is demanded. One such approach is Safe-by-Design (SbD), a concept that has gained attention in the fields of biotechnology and synthetic biology over the last decade (Asin-Garcia, Kallergi, Landeweerd, & Martins dos Santos, 2020). SbD could, in theory, help us anticipate emerging risks already in the research and development stage of a technology (Robaey, 2018), thereby ensuring that (re)shaping natural systems remains safe. However, current biotech risk management is one of compliance, in which a precautionary culture is strongly embedded (Bouchaut & Asveld, 2021; Kuiken, Barrangou, & Grieger, 2021). In this context, safety has become a political matter in which only absolute certainty can legitimise research. Not only does this lead to a perception of uncertainty from a negative angle; the precondition of total control also hampers unexpected positive findings and outcomes of research and puts an innovative field in a deadlock (Mampuys, 2021). As many innovations with great (societal) benefits tend to be the result of serendipity, how risk-averse do we want to be? In this symposium, researchers from both the natural sciences and the humanities will share their perspectives and ideas on the extent to which ensuring (technical) safety can be a positive trigger for innovation. The symposium will argue that risk assessments should entail not only technical or quantitative but also qualitative factors, and will focus on how SbD can avoid mere compliance and further a culture in which there is room to accept uncertainty in relation to potentially beneficial yields.

Speakers from different domains will present their views on control and serendipity in SbD:

  • Dr. Laurens Landeweerd – Institute for Science in Society, Radboud Universiteit Nijmegen
  • Prof.dr.ir. Vitor Martins dos Santos – Systems and Synthetic Biology, Wageningen University & Research
  • Ir. Britte Bouchaut – Biotechnology & Society, TU Delft
  • Drs. Kyra Delsing – Rathenau Instituut, Den Haag
  • Dr. Ruth Mampuys – Netherlands Scientific Council for Government Policy (WRR)

After the presentations, Dr. Dirk Stemerding will provide comments and insights from his perspective, and discuss these with the audience in a moderated discussion that will include online tools.

References

  • Asin-Garcia, E., Kallergi, A., Landeweerd, L., & Martins dos Santos, V. A. P. (2020). Genetic Safeguards for Safety-by-design: So Close Yet So Far. Trends in Biotechnology, 38(12), 1308–1312. https://doi.org/10.1016/j.tibtech.2020.04.005
  • Bouchaut, B., & Asveld, L. (2021). Responsible Learning About Risks Arising from Emerging Biotechnologies. Science and Engineering Ethics, 1–20. https://doi.org/10.1007/s11948-021-00300-1
  • Kuiken, T., Barrangou, R., & Grieger, K. (2021, February 1). (Broken) Promises of Sustainable Food and Agriculture through New Biotechnologies: The CRISPR Case. CRISPR Journal. Mary Ann Liebert Inc. https://doi.org/10.1089/crispr.2020.0098
  • Mampuys, R. (2021). The Deadlock in European GM Crop Authorisations as a Wicked Problem by Design. Erasmus University Rotterdam. Retrieved from https://repub.eur.nl/pub/134194/
  • Robaey, Z. (2018). Dealing with risks of biotechnology: understanding the potential of Safe-by-Design. Report commissioned by the Dutch Ministry of Infrastructure and Water Management, The Hague, The Netherlands. https://doi.org/10.13140/RG.2.2.13725.97769

Disability, technology, and bodily control

Janna van Grunsven (TUD), Julian Kiverstein (UvA/Amsterdam UMC), Nick Ramsey (UMC Utrecht), Joel Anderson, Annemarie Kalis, Miguel Segundo Ortin and Josephine Pascoe (Utrecht University)

As recent research from the field of embodied, embedded, extended and enactive (4E) cognition shows, our sense of agency, which is intimately tied to self-control, has pervasive bodily dimensions (e.g., Gallagher 2005). These bodily dimensions can, in turn, be shaped by the socio-technological environment within which they are embedded (Clark 2001). Various emerging technologies are transforming human embodiment, raising important new questions about bodily self-control, as well as the forms of agency that are bound up with our embodiment. Consider, for instance, technologies that have recently been developed with the goal of augmenting agency and self-control in contexts of various disabilities: technologies such as brain-computer interfaces, deep brain stimulation technologies, and high-tech augmentative and alternative communication technology. At the same time, disability studies has been insightfully highlighting the importance of acknowledging the diversity of embodiment, which further underscores the complexity of the relationship between embodiment and agency. Relatively little attention, however, has been devoted to impairments of self-control as they relate to 4E approaches and assistive technology.

The aim of our symposium is to initiate an investigation of these developments and to explore how self-control, agency, embodiment, and disability are framed and rethought as a result of these emerging technologies. We will examine this by drawing on phenomenological insights, disability studies, action theory, and neuroscience.

Specific themes and questions explored during our symposium include:

  • the extent to which self-control is embodied, i.e. not exercised over the body but through the body, as an embodied, enacted “doing”;
  • how the design of assistive technologies in the domain of self-control impairments can reflect the diversity of embodiments, as well as the implication that the ways in which self-control is exercised are also likely to be diverse;
  • whether and how to rethink the idea that autonomous agency must always include an element of “control over the body,” particularly in light of 4E approaches;
  • the recurring concern that idealized expectations of bodily self-control are perniciously normalizing or overdemanding, and how this concern can be balanced against an acknowledgement of the frustrations of those who experience their diminished bodily self-control as a threat to their autonomous agency.

Diversifying ethical perspectives on new and emerging technologies: exploring the contribution and challenges of intercultural ethics

Olya Kudina, Elena Ziliotti (TUD), Patricia Reyes, Peter-Paul Verbeek (UT), Ingrid Robeyns (Utrecht University) and Matthew Dennis (TU/e)

In anticipation of the development and adoption of technologies that might affect societies across the globe, it is worthwhile to consider a broad engagement with ethical frameworks. To date, the ethics of technology has drawn predominantly on Western traditions and knowledge systems, which help to frame the problems and provide avenues for solutions in specific ways. The increasing interconnection across the world increases the visibility of different epistemic traditions and creates an opportunity—and responsibility—to learn from this heterogeneity.

This symposium will offer several viewpoints on working with intercultural ethics in relation to new and emerging technologies. What can we learn from non-Western epistemic and ethical traditions? What are some of the practical and institutional challenges when considering such perspectives? How should novel insights be incorporated without the risk of appropriating contributions from other cultures? To this end, the presentations in the symposium will discuss Ubuntu, Confucian and Indigenous ethics, as well as an intersectional account of human diversity, in relation to specific technological cases or value constellations.

Speakers:

  1. Elena Ziliotti (TU Delft) & Matthew Dennis (TU/e): Drawing on Confucianism to understand digital well-being
  2. Peter-Paul Verbeek (University of Twente): Exploring the synergy between Ubuntu ethics and the technological mediation approach through the value of time
  3. Olya Kudina (TU Delft): Exploring (technologically induced) value change with Ubuntu philosophy
  4. Patricia Reyes (University of Twente): Reconceptualizing politics through Indigenous philosophies
  5. Ingrid Robeyns (Utrecht University): Introducing an account of human superdiversity for technological design and policy making

HI/ESDiT collaboration on AI, human values and the law

Sven Nyholm, Cindy Friedman, Pinar Yolum Birbil (Utrecht University), Bart Verheij (University of Groningen) and Matthew Dennis (TU/e)

Researchers affiliated with two new NWO gravitation (zwaartekracht) projects have decided to collaborate on shared themes. The first project, “Hybrid Intelligence” (HI), investigates how artificial intelligence (AI) and human intelligence can be combined to form “hybrid intelligence”. The second project, “Ethics of Socially Disruptive Technologies” (ESDiT), explores how emerging technologies challenge our understanding of ethics and morally important concepts. AI is a key area of interest for both projects; it is an emerging technology that is showing every sign of fundamentally changing how we think about human beings and the world. Not only does AI offer human beings unique opportunities, it also creates novel ethical challenges – e.g., ethical dilemmas related to how to assess human-machine connection and synergy. Accordingly, both gravitation projects stand to complement and challenge each other in key ways.

This symposium seeks to explore the overlapping research interests of those working in the HI and ESDiT research projects – specifically, it aims to examine how AI relates to human values and to legal and moral norms. The symposium will start with five short presentations on AI as a “socially disruptive technology” (as defined by the ESDiT project), one that creates opportunities for “hybrid intelligence” (as defined by the HI project). The presentations will be followed by a discussion in which audience members are encouraged to help us explore how AI creates “hybrid intelligence” and can potentially cause social disruption. Each presentation will focus on one of the various ways that we can approach the topic of AI, human values, and law.

Sven Nyholm will discuss AI and responsibility (and potential responsibility gaps). Bart Verheij will discuss how AI affects law. Pinar Yolum Birbil will discuss AI and the value of privacy. Matthew Dennis will discuss AI and the good life. Cindy Friedman will discuss how AI ethics might benefit from lessons from African perspectives on ethics and personhood. The aim of this symposium is to illustrate the diverse ways in which AI can disrupt longstanding ideas in ethics while creating new forms of “hybrid intelligence” involving human and artificial intelligence. A further goal is to explore how the two gravitation projects (HI and ESDiT) might complement each other in the future.

The two gravitation projects:
https://www.nwo.nl/projecten/024004022
https://www.nwo.nl/en/cases/interaction-between-ethics-and-technology


Panel discussions

Recognizing energy justice: revising the concept of recognition justice

Nynke van Uffelen (TUD)

Due to the intermittency of renewable energy sources, the transition to a low-carbon economy depends to a large extent on energy storage. The energy transition must be just, in the sense that vulnerable groups are protected from injustice. In the field of energy research, energy justice is often divided into three tenets: distributional justice, procedural justice and recognition justice. As part of my PhD research, which started in January 2021, I focus on the third tenet of justice. The richer and more inclusive our concept of recognition, the higher the chances of more and better recognition for groups or individuals that are somehow misrecognised. Based on a critical assessment of the energy justice literature, the often-used conceptions of recognition justice appear to be too narrow along three dimensions: recognition justice and procedural justice are conceptually blended into one; recognition justice is mainly used in a descriptive and explanatory way; and it is seen as a means to an end. Given these three dimensions along which mainstream conceptions of recognition justice can be amended, I see three ways to enrich the definition by going back to the roots of the concept in critical theory and to the recognition philosophers Nancy Fraser and Axel Honneth. I will propose a revised concept of recognition justice that offers a better and more inclusive tool in our search for misrecognition and injustices in the real world. I will illustrate the application of these insights by discussing misrecognition in the case of hydrogen storage technologies.

How cognitive pairing with technology gives rise to artificial identity

Dina Babushkina and Athanasios Votsis (UT)

The current state of human-technology interaction has set in motion a process of hybridization of human personhood. Technology is used as an effective cognitive extender, which enables the extension of human personhood to include artificial elements, leading to the emergence of artificial identity. There is a need to acknowledge the investment of the user’s personality (and often life) in a piece of technology, the unique psychosynthesis that this creates, and the disruptive effect that certain types of alterations of such a piece of technology have on its user. The scope and quality of the frameworks in which the hybridization of human identity occurs and evolves have significant ethical implications that pose very pragmatic challenges to users, the industry, and regulators. This paper puts forth a few main principles upon which such a discussion should evolve. We illustrate why disruptiveness can easily turn into human harm when the frameworks facilitating it overlook the human vulnerabilities that arise from hybrid identity, notably the asymmetric relationship between the human and artificial counterparts. We claim that the types of vulnerabilities to which a person is exposed due to the intimate degree of pairing with technology justify introducing and protecting artificial identity as well, granting it a non-derivative right to persist.

Group privacy in the age of big data

Haleh Asgarinia (UT)

New applications of data technologies increasingly threaten the privacy of groups rather than the privacy of individuals. Group profiling technologies are applied at the level of the group to formulate types, not tokens. These kinds of technologies are employed to target people as members of specific groups, not as individuals. Does a group have a privacy that is not reducible to the privacy of the individuals forming it? In this research, I focus on one of the most recent approaches to characterizing privacy, which considers privacy a matter of actual access, in the sense that information needs to be understood for a loss of privacy to occur. I improve this account of privacy by arguing that gaining new, perfectly reliable information about an entity by accessing its private information leads to a loss of privacy. Given this improved account of privacy, it is important to investigate whether the information derived from a group profile is reliable only for the group as such or also for its members. I claim that the information ascribed to a group is perfectly reliable when derived from a certain type of group profiling, namely non-distributive group profiling. Nevertheless, due to the use of non-monotonic reasoning, the information linked to an individual member is less than perfectly reliable. In other words, a certain type of group profiling represents a group and reveals attributes that may (or may not) apply to the individuals in such a group, while being applicable only to the group as such. In this sense, the property ascribed to the group is not ascribed to its members. As a result, a certain aspect of groups supports the plausibility of a group privacy that is over and above the collection of the privacies of the members constituting that group. The philosophical exploration of group privacy, in view of the insights obtained via big data analytics, will change the current guidelines in the field of privacy and data protection, which assume that group privacy can be achieved by protecting the individual privacy of each member of a group.

The virtues and vices of life ‘close to the machine’

Lani Watson and Matthew Kuan Johnson (Oxford)

Much has been written, speculated and predicted about artificial intelligence (AI) and its capacity to replicate, simulate or amalgamate human intelligence. Much less has been said about causation in the other direction. While the advent of humanoid robots and machine learning algorithms modeled on human neurological systems exemplify AI and digital technologies formed in the image of their creators, it is also possible that working on these technologies can form creators – programmers, engineers, data analysts – in the image of their technological creations. Ullman (1997), for example, relates how computer programmers can experience a shift in their thinking to more closely resemble the programs they work on. Reflecting this, computer programming skills are recognized as an integral part of “computational thinking” (CT), a skillset that is gaining traction in educational settings (Denning, 2010; Lye and Koh, 2014). Such are the effects of life ‘close to the machine’ (Ullman, 1997).

This paper examines the characterological effects of human-machine proximity from a virtue-theoretical perspective. We investigate both the virtues and vices that may be cultivated in those creating and working closely with AI and digital technologies. In the first instance, computer programming has been found to have a significant impact on originality, a facet of creative thinking that involves selective encoding and combining, leading to the cultivation of creativity and divergent thinking (Clements and Merriman, 1988; Clements, 1995). Such an impact may plausibly be tied to the development of intellectual virtues, such as inquisitiveness and open-mindedness. In turn, these thinking processes may be viewed as key skills required for the cultivation of intellectual virtues such as attentiveness, rigour and intellectual perseverance.

Despite these promising correlations, there are also possibilities for moral and intellectual deterioration from life ‘close to the machine’. Programmers, for example, may develop cognitive and perceptual patterns that are more algorithmic, more localized and constrained, and less affective. These shifts may, in turn, inhibit the cultivation of virtues, particularly with respect to their motivational and affective components. Indeed, affect is a key factor in moral perception (Blum, 1991) suggesting that impeding the capacity for affectivity may impede the capacity for moral perception. This is plausibly true, for example, in the case of data scientists involved in the harvesting and analyzing of personal data, who are positioned to see humans in overtly reductive terms, as clusters of data points. Viewing others like this constitutes a failure of virtue involving the proper perception of others (Bommarito, 2017), most notably of ‘loving attention’ (Murdoch, 1970).

As the potential impact of AI and digital technologies on their creators is mixed, we conclude by offering an assessment of the trade-offs involved in the development of both virtues and vices resulting from life ‘close to the machine’, considering the possible advantages and disadvantages for both technological and human progress. Ultimately, we issue a call for more empirical work on the characterological effects of human-machine proximity (echoing a broader call by Scherer 2016), which would provide greater insight into the urgency and scope of this concern.

References:

  • Blum (1991). Moral Perception and Particularity. Cambridge: Cambridge University Press.
  • Bommarito (2017). Inner Virtue. Oxford: Oxford University Press.
  • Clements, D. H. (1995). Teaching creativity with computers. Educational. Psychology Review. 7, 141–161.
  • Clements, D. H., and Merriman, S. (1988). “Componential developments in LOGO programming environments,” in Teaching and Learning Computer Programming: Multiple Research Perspectives, ed R. E. Mayer (Hillsdale: Lawrence Erlbaum Associates, Inc.), 13–54.
  • Denning, P. J. (2010). Great principles of computing. American Scientist. 98, 369–372.
  • Dyck, J. L., and Mayer, R. E. (1989). Teaching for transfer of computer program comprehension skill. Journal of Educational Psychology. 81, 16–24.
  • Lye, S. Y., and Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: what is next for K-12? Computer Human Behavior. 41, 51–61.
  • Murdoch, Iris. (1970). The Sovereignty of Good. London: Routledge.
  • Pardamean, B., Suparyanto, T., and Evelyn. (2015). Improving problem-solving skills through Logo programming language. New Education Review. 41, 52–64.
  • Scherer, R. (2016) Learning from the Past–The Need for Empirical Evidence on the Transfer Effects of Computer Programming Skills. Frontiers in Psychology. 7, 1390. doi: 10.3389/fpsyg.2016.01390
  • Ullman, Ellen. (1997) Close to the Machine: Technophilia and its Discontents. New York: Picador.

mHealth apps and exploitative value trade-offs

Leon Rossmaier (UT)

Mobile health (mHealth) apps are becoming increasingly important for primary care, disease prevention, and public health interventions. They promise to empower their users by offering more independence, better access to health services, and more insight into the users’ health status, resulting in better-informed medical decision-making and lifestyle changes. Disadvantages of mHealth apps often include a lack of privacy protection, a decrease in personal attachment, the acceptance of a normative conception of health, and becoming subject to data-driven choice architectures that challenge the user’s self-determination.

Privacy, attachment, and self-determination are, alongside health, linked to fundamental dimensions of human well-being. Users of mHealth apps can either accept those disadvantages, thereby negatively impacting those dimensions, or abstain from using this technology entirely and renounce the promised health benefit. Users, in a way, must trade off certain moral goods that are closely linked to fundamental dimensions of well-being in order to gain a certain health benefit if they want to use commercial mHealth apps.

This paper clarifies the moral goods most relevant in this context, focusing on privacy, self-determination, and attachment. I claim that these values are necessary for human well-being and should not be undermined, especially in the context of health care. I will argue that the value trade-offs users must engage in are an instance of mutually advantageous agreements by which the provider of the app takes unfair advantage of the user. This renders such agreements exploitative. I will discuss the notion of exploitation that I think applies in this case and explain under what circumstances exploitative agreements accompanying the use of commercial mHealth apps oppose the empowerment narrative around mHealth.

The main point of my argument is that there are two cumulative effects. First, users usually have to engage in multiple value trade-offs when it comes to mHealth, rendering this technology different from other technical products. Second, those who already suffer from disadvantage in health care and public health are especially vulnerable to those trade-offs and thus run the risk of sliding down the slippery slope of clustering disadvantage rather than being empowered.

Out of control?

Ibo van de Poel (TUD)

We are collectively confronted by a control paradox, or so it seems. Both individually and collectively, due to, for example, progress in science and engineering, humans have drastically increased their control over the (natural) environment. Terms like the Anthropocene have been used to describe the situation in which the natural environment is increasingly or even predominantly the result of actions controlled by humans. At the same time, it is clear that we – not just individually but also collectively – can hardly control a number of human-induced natural hazards like climate change, environmental degradation, and pandemics. There is also the worry that some new technologies like geo-engineering, synthetic biology, and artificial intelligence may get out of control. The control paradox, then, seems to be that by increasing our control over nature, we in effect create new risks and hazards that are beyond our control.

My aim is to disambiguate the notion of control as it is often used in this connection and to argue that the control paradox may not be a real paradox. Still, I argue that there are some real and important (normative) choices about what types of control (over nature) we should aim for. I propose to distinguish forms of control along two dimensions. The first dimension distinguishes between the ability to initiate certain processes, to intervene in these processes, and to control their outcomes. The second dimension follows a distinction made by Fischer and Ravizza in their book Responsibility and Control between what they call guidance control and regulative control. Guidance control requires that an outcome is the result of a reason-responsive process that is the agent’s own; regulative control additionally requires that an agent can achieve alternative outcomes.

I argue that when we apply these distinctions, it becomes clear that the so-called control paradox is basically an imbalance between types of control. By analyzing control in a number of cases (traditional engineering, climate change, synthetic biology), I argue that what we witness in these cases is an increase in certain types of control and a decrease in others. The most problematic seem to be those cases in which an increase in guidance control (and hence in responsibility) goes hand in hand with a problematic lack of regulative outcome control. In order to deal adequately with such cases, we should not so much debate how much control (over nature) is desirable but rather what type of control is desirable and feasible.

Putting cats back into bags – On the reversibility of solar geoengineering

Benjamin Hofbauer (TUD)

In this paper I explore what role the concept of reversibility plays when it comes to evaluating the research and deployment of Solar Geoengineering in the form of Stratospheric Aerosol Injection (SAI). Following the literature on the Precautionary Principle and Technology Assessment, I define reversibility as the capacity to stop current technological trajectories and developments, and subsequently to undo the impacts they have had hitherto (Bergen 2016; Hartzell-Nichols 2012; Trouwborst 2009). In this sense, assuring reversibility can be seen as a morally prudent measure that allows for adaptation and change when deploying technologies whose consequences are uncertain, while also making it possible to reverse unwanted impacts. I broadly distinguish between two kinds of reversibility, namely socio-political and environmental. Socio-political reversibility describes any reversibility connected to institutional or political avenues for ceasing and reverting the development of SAI research and deployment. Relevant aspects here are the issues of path-dependency and lock-in (Cairns 2014), as well as adaptive and reflexive governance approaches (Lee and Petts 2013; Dryzek and Pickering 2019). Environmental reversibility describes any physical, chemical, or other impacts that SAI, both as a research project and through potential deployment, might have. Relevant aspects here are the concrete impacts SAI might have on the earth system as a whole, such as precipitation patterns, stratospheric chemistry, acidity of the oceans, etc. This distinction allows for a differentiated evaluation of what role the concept of reversibility should play in assessing SAI. Specifically, the context within which reversibility is invoked matters. I conclude by pointing towards how assuring reversibility might become nullified or even morally problematic in the case of SAI.

What are socially disruptive technologies?

Jeroen Hopster (UT)

Scholarly discourse on “disruptive technologies” has been strongly influenced by disruptive innovation theory. This theory is geared towards analyzing disruptions in markets and business. It is of limited use, however, in analyzing the broader social, moral and existential dynamics of technosocial disruption. Yet these broader dynamics should be of great scholarly concern, both in coming to terms with technological disruptions of the past and with those of our current age. Technologies can disrupt social relations, institutions, epistemic paradigms, foundational concepts, values, and even the nature of human cognition and experience – domains of disruption that are largely neglected in the existing discourse on disruptive technologies. Accordingly, this paper diverges from existing discourse and seeks to reorient scholarly discussion around a broader notion of technosocial disruption. Addressing this broader notion raises several foundational questions, three of which are addressed in the paper. First, how can the notion of technosocial disruption be conceptualized in a way that clearly sets it apart from the disruptive innovation framework, accords with colloquial usage and is conducive to further theorizing? Secondly, how does the notion of technosocial disruption relate to the concordant notions of “Socially Disruptive Technologies” and “disruptiveness”? Thirdly, what grounds a technology’s social disruptiveness? More specifically, can we advance criteria to assess the “degree of social disruptiveness” of different technologies? This paper clarifies these questions and proposes an answer to each of them. In doing so, it advances “technosocial disruption” as a key analysandum for future scholarship on the interactions between technology and society.

The ethics of online environments as cognitive environments: not for knowledge alone

Lavinia Marin (TUD)

This presentation inquires into the conditions of possibility for online environments to function as cognitive environments – specifically to foster thinking. Drawing inspiration from social epistemology’s studies of the inter-related nature of knowledge, namely that we become knowers by becoming embedded in certain networks of knowers, in which we need both access to information and the right cognitive tools to process that information, I will pursue a similar but slightly different analysis of how we can develop ourselves as thinkers, not as knowers. Our thinking does not happen in a vacuum; we need certain socio-technical conditions for it to happen: tools, interactions, relations and modalities to publicise the results of our thinking. Starting from Arendt’s remark that thinking is not about leading to knowledge and, furthermore, that thinking is a worthwhile pursuit in itself, I will ask how the socio-technical conditions that make thinking possible can be replicated online. Under what conditions can we engage in thinking while being online? Thinking, like knowledge, starts from interpreting information, and there is plenty of information to be found online. But information conducive to thinking has a different function: it serves as inspiration and occasion for making connections rather than as the endpoint of forming a belief. The presentation will have two parts: in the first, I will outline the socio-technical conditions needed to foster thinking, analysed through the idea of the discipline of the senses (askesis) and the discipline of the self. In the second part, I will look into how the discipline of the self and of the senses can be achieved online through design for thinking. The presentation will end with recommendations for several design principles that should lead to richer cognitive environments online.

The innovator’s moral dilemma: a case-study of disruption and responsibility in the bioeconomy

Zoë Robaey, Julia Rijssenbeek, Vincent Blok (WUR) and Jeroen Hopster (UT)

At SynBioBeta 2020, a major synthetic biology industry conference, applications of cell-factories were praised for disrupting existing means of production. They do so in favor of chemical-free, animal-welfare-minded, fast, circular and efficient processes, which constitute the core of industrial biotechnology’s contributions to the bioeconomy. Hence, the disruption provoked by cell-factories is valued in this community because it contributes to the new bioeconomy. This echoes the understanding of disruption heralded by “disruptive innovation theory”, according to which successful entrepreneurs will be able to disrupt old means of production with their innovations. Recent scholarship, however, calls for a broader understanding of disruption (Hopster, forthcoming), one that is sensitive to its social, existential, conceptual and ethical aspects. A number of such aspects have been identified in relation to industrial biotechnology, such as its pressure on the concept of “naturalness”, issues regarding ownership, and more (Veraart and Blok, 2021; Asveld et al. 2019). With cell-factories taking a more dominant role in the bioeconomy, more is disrupted than markets and means of production alone.

This broader approach to disruption raises two questions. What is the extent of the disruption brought about by cell-factories? And once we can identify and qualify this disruption, how should the moral responsibility of disruptors be understood?

In this paper we address these two questions. We first analyze discourses at SynBioBeta 2020 on disruption to identify which broader disruptive impacts and ethical issues are recognized within the community. Subsequently, armed with a broader understanding of disruption, we ask: what are disruptors morally responsible for? We argue that responsibility for uncertainties needs to be discussed (Robaey, 2016a; Robaey 2016b; van de Poel and Robaey, 2017), otherwise responsibility gaps arise (Robaey and Timmermann, forthcoming). We evaluate whether different conceptions of responsibility might be best suited to the different types of uncertainty that disruption engenders.

This paper adds to the literature on the ethics of socially disruptive technologies by applying a broader understanding of disruption to the case of cell-factories, and by discussing innovators’ responsibilities in the face of uncertainty. This research also directly contributes to a constructive discussion on practices in the synthetic biology community.

Moral education in the light of techno-moral change

Julia Hermann (UT) and Katharina Bauer (University of Rotterdam)

Given the dynamic character of morality, which co-evolves with technology, the goals of moral education are not static (see van der Burg 2003; Swierstra 2013). What a moral agent needs to know and be able to do is subject to change. For instance, interaction with social robots and the resulting human-robot relationships require a refinement of existing moral skills and sensibilities and perhaps even novel moral skills. Empathy and compassion might take different forms in contexts of human-robot interaction. In the future, interaction with advanced social robots might call for novel moral skills that are specifically related to the kinds of human-robot interactions and relationships that have emerged. Since we do not know which elements of morality will change and how they will change (see van der Burg 2003), and thus what competent moral agency will require in the future, moral education should aim at fostering what has been called “moral resilience” (Swierstra 2013). Moral resilience involves different capacities, including “capacities for techno-moral imagination” (ibid., p. 216). We argue that philosophical accounts of moral education need to do justice to the importance of the skills relevant for moral resilience. In order to take a first step towards an account of how moral resilience can be fostered in moral education, we look at the literature on mechanisms of moral learning (e.g. Hoffman 2000 and 2008; Hogarth 2001; Musschenga 2009), asking how some of these mechanisms can be translated into ways of fostering the development of moral resilience. In particular, we ask what psychological research on those mechanisms tells us about how capacities for techno-moral imagination can best be trained.

References:

  • Hoffman, M. L. 2008: “Empathy and Prosocial Behavior”, in M. Lewis et al. (eds.), Handbook of Emotions (pp. 440–455). New York: The Guilford Press.
  • Hoffman, M. L. 2000: Empathy and Moral Development: Implications for Caring and Justice. Cambridge: Cambridge University Press.
  • Hogarth, R. 2001: Educating Intuition. Chicago and London: University of Chicago Press.
  • Musschenga, A. W. 2009: “Moral Intuitions, Moral Expertise and Moral Reasoning”, Journal of Philosophy of Education, 43, pp. 597-613.
  • Van der Burg, W. 2003: “Dynamic Ethics”, The Journal of Value Inquiry, 37, pp. 13-34.
  • Swierstra, T. 2013: “Nanotechnology and Technomoral Change”, Etica & Politica, XV, pp. 200-219.

Towards a human-centred design of work scheduling algorithms

Charlotte Unruh and Charlotte Haid (TU München)

Algorithmic management tools are increasingly used to hire, schedule, monitor, and evaluate workforces, as well as to set and control performance targets [1]–[3]. Despite this development, the ways in which algorithms change the nature and organization of work have received relatively little attention in the ethics of artificial intelligence at work, with current debates focusing on the risks of automation-induced unemployment [4]. In this paper, we present the first results of an ongoing research project which aims to fill this gap. The research project investigates the optimization of human-centred processes from an interdisciplinary perspective (connecting philosophy and mechanical engineering).

In this paper, we outline a framework for a human-centred design of scheduling algorithms. By “human-centred”, we mean that technology is designed to further the value of human wellbeing [5]. By “scheduling algorithms”, we mean algorithms that determine the time of work and the tasks to be performed by each employee (i.e. shift and task allocation) [6].

In the first part, we derive an initial framework for requirements for scheduling algorithms from philosophical theories of meaningful work (drawing especially on [7], [8]). Meaningful work has been linked to higher job satisfaction and motivation, suggesting a link between meaningfulness and wellbeing [9]. Moreover, some authors have argued that there is even a right to meaningful work and a corresponding responsibility to provide people with access to meaningful work [10]–[12]. We argue that human-centred scheduling algorithms, in order to increase wellbeing and further potential moral rights, should aim to increase meaningful work along five dimensions (adapted from [13]). We also include constraints on increasing meaningful work by drawing on human rights frameworks [14] and decent work [15].

In the second part, we use this framework to discuss the risks and opportunities of scheduling algorithms for meaningful work. We argue that scheduling algorithms have the potential to increase the meaningfulness of work along at least three dimensions: autonomy, development, and relationships. We elaborate on these dimensions and tentatively note challenges for the implementation of such systems.

  • Autonomy: Algorithms can strengthen autonomy by giving workers the possibility to edit their schedules and assigned tasks, e.g. through the use of human-in-the-loop approaches. This also supports the transparency of decisions.
  • Development: Algorithms can strengthen skill development by integrating a worker’s preferences and goals, for example by assigning varied tasks and integrating on-the-job learning. It is important that such integration is handled fairly, for example by including fairness criteria and measures in the algorithm.
  • Relationships: Algorithms can free up time that managers can use to connect with workers. Algorithms should support, not replace, managers. This also ensures accountability for decisions, since managers remain available in case of difficulties.

In the third part, we relate these results to requirements derived from qualitative interviews conducted within the project. These empirical investigations are ongoing and will include interviews with managers and workers. Initial interviews with logistics experts have identified requirements such as leaving the final decision on schedules to a human manager, and ensuring fairness and transparency of the algorithm as well as non-biased decisions. These requirements fit with our theoretical framework. We suggest that this supports the validity of our framework.

Finally, we contend that further development and application of human-centred scheduling algorithms will require the participation of workers to generate design requirements for a given context. In a follow-up project, we plan to develop a matching algorithm for application in the logistics sector, providing a case study for the theoretical framework presented.

References

  1. P. Briône, ‘Algorithmic Management’, UNI Global Union. https://www.uniglobalunion.org/groups/professionals-managers/algorithmic-management (accessed Mar. 30, 2021).
  2. J. Dzieza, ‘How hard will the robots make us work?’, The Verge, Feb. 27, 2020. https://www.theverge.com/2020/2/27/21155254/automation-robots-unemployment-jobs-vs-human-google-amazon (accessed Apr. 01, 2021).
  3. J. Kantor, ‘Working Anything but 9 to 5’, The New York Times, Aug. 13, 2014.
  4. V. C. Müller, ‘Ethics of Artificial Intelligence and Robotics’, in The Stanford Encyclopedia of Philosophy, Winter 2020., E. N. Zalta, Ed. Metaphysics Research Lab, Stanford University, 2020.
  5. P. Brey, ‘Design for the Value of Human Well-Being’, in Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains, J. van den Hoven, P. E. Vermaas, and I. van de Poel, Eds. Dordrecht: Springer Netherlands, 2015, pp. 365–382.
  6. J. S. Loucks, ‘A survey and classification of service-staff scheduling models that address employee preferences’, 2018, doi: 10.1504/IJPS.2018.10016592.
  7. A. Gheaus and L. Herzog, ‘The Goods of Work (Other Than Money!)’, J. Soc. Philos., vol. 47, no. 1, pp. 70–89, 2016, doi: 10.1111/josp.12140.
  8. F. Martela and A. B. Pessi, ‘Significant Work Is About Self-Realization and Broader Purpose: Defining the Key Dimensions of Meaningful Work’, Front. Psychol., vol. 9, 2018, doi: 10.3389/fpsyg.2018.00363.
  9. C. Bailey, R. Yeoman, A. Madden, M. Thompson, and G. Kerridge, ‘A Review of the Empirical Literature on Meaningful Work: Progress and Research Agenda’, Hum. Resour. Dev. Rev., vol. 18, no. 1, pp. 83–113, Mar. 2019, doi: 10.1177/1534484318804653.
  10. A. Schwartz, ‘Meaningful Work’, Ethics, vol. 92, no. 4, pp. 634–646, 1982, doi: 10.1086/292380.
  11. E. Anderson, Private Government: How Employers Rule Our Lives. Princeton University Press, 2017.
  12. B. Roessler, ‘Meaningful Work: Arguments From Autonomy’, J. Polit. Philos., vol. 20, no. 1, pp. 71–93, 2012, doi: 10.1111/jopp.2012.20.issue-1.
  13. J. Smids, S. Nyholm, and H. Berkers, ‘Robots in the Workplace: a Threat to—or Opportunity for—Meaningful Work?’, Philosophy and Technology, 2020. https://philpapers.org/rec/SMIRIT-7 (accessed Mar. 03, 2021).
  14. A. Kriebitz and C. Lütge, ‘Artificial Intelligence and Human Rights: A Business Ethical Assessment’, Bus. Hum. Rights J., vol. 5, no. 1, pp. 84–104, Jan. 2020, doi: 10.1017/bhj.2019.28.
  15. ‘Decent work’. https://www.ilo.org/global/topics/decent-work/lang–en/index.htm (accessed Apr. 09, 2021).

Individuals, existential risks, and moral theories

Benedikt Namdar (University of Graz)

Existential catastrophes, caused for example by unaligned AI, bring about harm. That harm includes the lives ended by such an event. It also includes the potential value that would have come into existence in the absence of existential catastrophes. Combining these considerations with a significant overall possibility of existential catastrophes occurring leads to the conclusion that preventing such scenarios should be a major project of humanity.

The philosophical discussion surrounding existential risk prevention is mostly about policy making. What has so far been ignored is the role of individuals in this issue. This presentation contributes to that part of the discussion. The first step is to motivate the claim that individuals should consider existential risk prevention in moral deliberation, since existential catastrophes are among the possible consequences of individual acts. Even though such results are unlikely, it is important to consider such consequences because the value at stake is huge.

Secondly, I will show the complexity of incorporating existential risk prevention into moral theories. Simple consequentialism leads to overdemanding results. Given the amount of value at stake in existential risk prevention, a consequentialist calculus will assess acts other than the one relevant for existential risk prevention as wrong most of the time. Moreover, I discuss whether a modified consequentialism informed by Scheffler’s agent-centered prerogative can do better. I argue that the problem of defining the extra weight the prerogative allows an agent to assign to personal projects is striking here, resulting either in overdemandingness or in moral egoism. Then, I investigate whether deontology can do better than both forms of consequentialism. I argue that the intuition-based nature of deontology creates a problem: due to temporal distance and the large numbers at stake, intuitions are not apt to deal with existential risk prevention.

Lastly, I give advice for individuals on how to pursue effective existential risk prevention.

The axiology of cognitive technologies

Mattia Cecchinato (University of St. Andrews)

Technological artefacts play crucial roles in cognitive tasks such as navigating, remembering, planning, reasoning, and communicating. By integrating these devices into our lives, we benefit enormously in terms of well-being. Artefacts are valuable, yet not intrinsically good: they are good only in virtue of their benefits to people, whereas people are valuable for their own sake. Hence, the claim that artefacts are merely instrumentally valuable in relation to their users (Lee 1999). However, what happens when technological artefacts become part of their users?

The hypothesis of extended cognition (henceforth ExC) claims that the architecture of the mind extends beyond the skull to include external devices. Proponents of ExC (e.g., Clark & Chalmers 1998; Menary 2007; Wheeler 2005) argue that technological equipment is — under proper conditions — as constitutive of one’s cognitive processes as neural activity. If ExC is true, then what kind of value is instantiated by technologies which are partly constitutive of one’s cognitive system?

Despite the growing debate on the value and normativity of artefacts (e.g., Franssen 2013; Sandin 2013), whether cognitive technologies have any distinct value has not been subjected to philosophical scrutiny. The purpose of this talk is to begin conceptualizing the moral value of cognitive technologies in light of ExC, in order to fill a gap in the philosophical literature on extended cognition and the ethics of technology by opening a dialogue between these fields. First, I will introduce a puzzle about the value of cognitive artefacts: if ExC is true, cognitive artefacts are neither instrumentally nor intrinsically valuable. I will then contend that to solve this puzzle we must recognise that these artefacts acquire a distinctive value in virtue of their cognitive status, namely, a constitutive value. Lewis (1946) noted that a constitutive part of an intrinsically valuable whole can possess value by itself, even when it lacks instrumental or intrinsic value. While instrumental values causally contribute to intrinsic goods, constitutive values partly constitute such intrinsic goods (Schroeder 2008; Bradley 1998). I will claim that, if ExC is true, cognitive technologies are constitutively valuable.

Ethical and political dilemmas of social media in the age of pandemic, from global to Vietnam – Can Confucian ethics give good advice?

Hai T. Doan (University of Otago)

Today, social media has become so ubiquitous that people may forget that it is the product of, and operates around, new technologies. People behave on social media as if they were in the real world, often with an even more laid-back attitude. Unfortunately, social media is not the real world, and it is not even what it used to be. It is no longer a mere interface through which people communicate within a small circle of friends: it allows worldwide communication and can turn freedom of speech into mass condemnation. Social media benefits human beings by offering free videos, calls, messages, free information, and even free ‘expert’ advice, from legal to medical. Unfortunately, these benefits are gained at the expense of exposure to fake news, fraud, and exploitation: social media is a massive marketplace and a hunting ground for innocent prey and their privacy. Social media brings about connections, knowledge, and opportunities, but it is also a source of prejudice, discrimination, and inequality. Despite commitments, a traditional legal approach is possibly not ideal: either law is ineffective because of limited resources, or states become leviathans. States can also collude with tech firms to spread propaganda and oppress opposition. These dilemmas are fully expressed in the age of the Covid-19 pandemic. How can the miseries caused by social media be healed? For the common good of humanity, ethics, joining forces with legal and institutional mechanisms, may have a role in promoting trust, responsibility, and compassion. Interestingly, this idea finds a counterpart in Confucianism, the school that teaches benevolence and self-cultivation, which used to be criticized as obsolete and hostile to rights (and goods). Admittedly, Confucianism is the product of numerous authors and inherently has weaknesses and contradictions. Nevertheless, its strength lies, in addition to its core values, in its adaptability. This paper proposes (i) presenting some dilemmas that emerge from social media in the age of the pandemic, drawing on cases and debates in Vietnam and in the international context, (ii) seeking ways in which ethics, and Confucian ethics as a school of altruism and emotionality in particular, may address these dilemmas, and (iii) critiquing, interpreting, and re-balancing Confucianism to that end.

Human laws, Neanderthal laws, and robot laws

Kamil Mamak (Jagiellonian University)

Recent years have seen growing academic attention to robot rights, and many important publications have appeared (cf. Gunkel 2018; Darling 2016; Gellers 2020; Turner 2018; Balkin 2015; Abbott 2020; Nyholm 2020; Smith 2021; Bennett and Daly 2020). Schröder even claims that “Controversies about the moral and legal status of robots and of humanoid robots, in particular, are among the top debates in recent practical philosophy and legal theory” (Schröder 2020, 191). The discussion of whether robots can possess rights is strongly connected with deliberation on their moral status, and the moral status of robots is also one of the main topics in the ethics of Artificial Intelligence (Gordon and Nyholm 2021). Several review works concerning these issues have recently been published (Gordon and Pasvenskiene 2021; Schröder 2020; Harris and Anthis 2021). In my paper, I focus on the potential legal status of robots that share similarities with humans. It has already been pointed out that human laws could not be applied directly to robots (cf. Calo 2015; Gunkel 2020; Balkin 2015; Mamak 2021), and I concentrate on the role of embodiment in the content of the law. To illustrate, I examine the case of Neanderthals. There is a discussion on whether we should bring that species back to life (cf. Cottrell, Jensen, and Peck 2014; Levy 2013). Despite their biological closeness to Homo sapiens sapiens, it is doubtful that Neanderthals would be obliged to obey the law under the same conditions as humans (cf. Mamak 2017). Laws are tailored to humans; they reflect our conception of what a human being is. The situation with robots could be even more problematic. In my paper, I want to show that current law is built on reflections about human qualities, which are embedded, not always deliberately, in legal provisions, and that, because of this, potential laws for robots need to be reinvented.

References:

  • Abbott, Ryan. 2020. The Reasonable Robot: Artificial Intelligence and the Law. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108631761.
  • Balkin, Jack. 2015. “The Path of Robotics Law.” California Law Review 6. https://digitalcommons.law.yale.edu/fss_papers/5150.
  • Bennett, Belinda, and Angela Daly. 2020. “Recognising Rights for Robots: Can We? Will We? Should We?” Law, Innovation and Technology 12 (1): 60–80. https://doi.org/10.1080/17579961.2020.1727063.
  • Calo, Ryan. 2015. “Robotics and the Lessons of Cyberlaw.” California Law Review 103 (January): 513.
  • Cottrell, Sariah, Jamie L. Jensen, and Steven L. Peck. 2014. “Resuscitation and Resurrection: The Ethics of Cloning Cheetahs, Mammoths, and Neanderthals.” Life Sciences, Society and Policy 10 (1): 3. https://doi.org/10.1186/2195-7819-10-3.
  • Darling, Kate. 2016. “Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior towards Robotic Objects.” In Robot Law, edited by Ryan Calo, A. Michael Froomkin, and Ian Kerr, First Edition. Cheltenham, UK: Edward Elgar Pub.
  • Gellers, Joshua C. 2020. Rights for Robots: Artificial Intelligence, Animal and Environmental Law. Routledge. https://doi.org/10.4324/9780429288159.
  • Gordon, John-Stewart, and Sven Nyholm. 2021. “Ethics of Artificial Intelligence.” Internet Encyclopedia of Philosophy. https://iep.utm.edu/ethic-ai/.
  • Gordon, John-Stewart, and Ausrine Pasvenskiene. 2021. “Human Rights for Robots? A Literature Review.” AI and Ethics, March. https://doi.org/10.1007/s43681-021-00050-7.
  • Gunkel, David J. 2018. Robot Rights. Cambridge, Massachusetts: The MIT Press.
  • Gunkel, David J. 2020. “2020: The Year of Robot Rights.” The MIT Press Reader (blog). https://thereader.mitpress.mit.edu/2020-the-year-of-robot-rights/.
  • Harris, Jamie, and Jacy Reese Anthis. 2021. “The Moral Consideration of Artificial Entities: A Literature Review.” ArXiv:2102.04215 [Cs], January. http://arxiv.org/abs/2102.04215.
  • Levy, Neil. 2013. “Cave Man Ethics?: The Rights and Wrongs of Cloning Neanderthals.” Living Ethics: Newsletter of the St. James Ethics Centre, no. 91 (Autumn): 12.
  • Mamak, Kamil. 2017. “Czy neandertalczyk byłby człowiekiem w rozumieniu prawa karnego?” [Would a Neanderthal be a human being within the meaning of criminal law?], June. http://filozofiawpraktyce.pl/czy-neandertalczyk-bylby-czlowiekiem-w-rozumieniu-prawa-karnego/.
  • Mamak, Kamil. 2021. “Whether to Save a Robot or a Human: On the Ethical and Legal Limits of Protections for Robots.” Frontiers in Robotics and AI 8. https://doi.org/10.3389/frobt.2021.712427.
  • Nyholm, Sven. 2020. Humans and Robots: Ethics, Agency, and Anthropomorphism. Illustrated edition. London; New York: Rowman & Littlefield Publishers.
  • Schröder, Wolfgang M. 2020. “Robots and Rights: Reviewing Recent Positions in Legal Philosophy and Ethics.” SSRN Scholarly Paper ID 3794566. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3794566.
  • Smith, Joshua K. 2021. Robotic Persons: Our Future With Social Robots. S.l.: Westbow Press.
  • Turner, Jacob. 2018. Robot Rules: Regulating Artificial Intelligence. Palgrave Macmillan.
How smart are smart materials? A conceptual and ethical analysis of smart, life-like implants for tissue regeneration

Anne-Floor de Kanter, Karin Jongsma and Annelien Bredenoord (UMC Utrecht)

Today, a person can receive a hip implant to replace a deformed, swollen hip joint or a pacemaker to sustain the beating rhythm of their heart. Thanks to Regenerative Medicine, soon it may become possible not only to replace, but to re-grow healthy tissues after injury or disease. To this end, tissue engineers are designing synthetic ‘smart’, ‘life-like’ biomaterials to activate the inherent regenerative capacity of the human body. Such biomaterials may be used, for example, to develop a smart valve implant. After implantation in a patient’s heart, this implant not only (temporarily) replaces the diseased valve, but also stimulates re-growth of fresh healthy, living valve tissue. However, the meaning of the smartness and life-likeness of these synthetic biomaterials is conceptually unclear. Therefore, in this paper, we first aim to unravel the meaning of the terms ‘smart’ and ‘life-like’, and next analyse what ethical and societal implications are associated with this new generation of biomaterial implants as a result. Our conceptual analysis reveals that the biomaterials are considered ‘smart’ because they can communicate with human tissues, and ‘life-like’ because they are structurally similar to these tissues. Moreover, the biomaterial artefacts are designed to integrate to a high degree with the living tissue of the human body, thus blurring the boundaries between the technological and the corporeal. While this capacity provides the biomaterials with their therapeutic potential, we argue that it complicates a) the irreversibility of the implantation process, b) questions of ownership regarding the biomaterial implant, and c) the sense of embodiment of the receiver of the implant. Moreover, we suggest that in the future smart life-like biomaterials might incorporate digital technologies and synthetic cell technology, thus enacting new types of smartness and life-likeness. This might raise additional concerns regarding d) responsible data governance and e) legitimate control over nature. Overall, timely anticipation and consideration of these ethical challenges will promote responsible development of biomaterials in Regenerative Medicine.

Posters

The introduction of AAC technology in the ecosystem of interpersonal communication: an exploration of technology-mediated empathy

Caroline Bollen (TUD)

While the spoken word is one of the main communication media used in everyday life by the majority of people, some individuals cannot meet their daily communication needs through the use of speech. This can be due to various reasons, including certain manifestations of autism. Alternative and augmentative communication (AAC) technologies have been and are being developed to facilitate atypical communication. This umbrella term refers to a spectrum of technologies, ranging from low-tech aids (for example, picture boards) to cutting-edge high-tech applications (for example, brain-computer interfaces). They can provide new modes of expression, facilitate interaction in a diverse society and enhance inclusivity and accessibility in various ways. Yet, the actual impact of these technologies is highly dependent on design and implementation. The same technological paradigm can also threaten diversity if used to enforce specific modes of communication, which can even become an implicit prerequisite for someone to be perceived as being meaningfully alive. This demonstrates how AAC technology can both enhance and undermine empathy. However, the potential relationship between AAC technology and empathy does not only depend on the use of the technology itself, but also on how empathy is being understood, considering the extreme ambiguity of the concept. An AAC technology situates itself within the same space that allows empathy to exist: the space of interpersonal communication. Borrowing metaphors from ecology, the question arises as to what will happen upon its introduction into this ecosystem (invasion, symbiosis, competition, coexistence?). In the poster presentation I will theoretically explore how the use of AAC technology could interact with different aspects of empathy as it positions itself within a social interaction, specifically an AAC technology-mediated interaction between an autistic and a neurotypical person. I will structure this exploration using the reflective framework I previously created for analyzing understandings of empathy in autism research, modelling its dimensions as an ecosystem. This exploration reveals the conceptual potential of AAC technology-mediated empathy in trans-neurotype communication. Ultimately, I will demonstrate how these insights can reveal pitfalls and promises of the innovation, while the technology teaches us pitfalls and promises of empathy.

A relational perspective on autonomy in elderly care: a case study in China

Shuhong Li (TUD)

This paper aims to promote an understanding of autonomy beyond individualistic interpretations in value sensitive design (VSD) and calls for a more comprehensive approach that takes relational autonomy into consideration in the design of care robots for the elderly. Autonomy is one of the prevailing values discussed in VSD for elderly care through care robots. However, an empirical study conducted in China sheds light on the limitations of individualistic conceptions of autonomy in elderly care. Attention to Confucian ethics is proposed as an alternative way to advance the understanding of the connotation of autonomy as relational autonomy in elderly care in the Chinese context. One of the main reasons is that taking care of elderly people is deeply rooted in Confucian filial duty in Chinese society and other East Asian societies. Another, practical, reason is that this perspective can illustrate the actual needs for elderly care and prospectively provide some insights into potential issues in elderly care through robots in China. It also broadens the argument about autonomy by critically challenging the standard Western approach. In many European and North American cultures, the conception of autonomy tends to emphasize individualism and self-determination, while in the Chinese context the emphasis is on family-determination and social relations. This paper proposes a philosophical comparative analysis of the concept of autonomy and clarifies the differences between the connotations of autonomy in elderly care from liberal individualistic and relational perspectives. Subsequently, it provides a relational angle from Confucianism to benefit the VSD approach by enriching the discussion of autonomy in general.

The Human Exposome: Ethical Considerations. A scoping review of ethics of exposome data collection and analysis

Sammie Jansen, Irene van Kamp (RIVM), Marcel Verweij, Bob Mulder (WUR) and Peter van den Hazel (INCHES)

Exposome data collection and use may contribute to public health; however, they may also raise ethical questions. The human exposome concept was introduced in 2005 as a counterpart of the human genome and encompasses “every exposure to which an individual is subjected from conception to death”. The concept captures a wide range of exposures in the physical and social environment, as well as their biological manifestations, that accumulate over the life course. Acquiring a better understanding of the interactions between all these exposures may result in policies and interventions that promote health and reduce health inequalities. Mapping the environment requires the collection, storage, and analysis of large amounts of complex data. Technological innovations in several domains, such as data science and machine learning, geospatial modeling, sensor devices, and -omics technologies, make this increasingly possible. EU Horizon 2020 has set up several exposome projects, including Equal-Life, which is coordinated by the Dutch National Institute for Public Health and the Environment (RIVM). In this poster we present the results of a scoping review, identifying and addressing ethical themes and problems related to the collection, analysis, and use of large amounts of exposure and personal health data. The review is based upon a literature study that combines different domains, including the ethics of screening, health justice, and data science. Several themes initially stand out, such as the desirability of combining different health data into a single summary factor that presents an overall risk, for example the general factor of psychopathology (p-factor); the medicalization and datafication of the environment and the (early life) factors that contribute to health outcomes; the limitations of informed consent for data collection about collective risks; the acceptability of collecting and publishing health risks that may be impossible to prevent or remedy; and concerns about early life labeling and possible stigmatization and discrimination in a society that is already facing large health inequalities. These ethical themes and their implications should be examined to enable the responsible development and use of the human exposome concept and research.

School-screenings for Covid-19 via antigen rapid tests: a perspective from the ethics of technology

Felicitas Krämer (Universität Potsdam) and Alexander Bagattini (Karlsruhe Institut für Technologie)

Austria was one of the first European countries to introduce and widely use Covid-19 PoC antigen rapid tests for school screenings. In spring 2021, Germany followed, but the introduction of the tests faced a bumpy road before they were finally implemented on a large scale. In the Netherlands, there was much controversy and hesitancy about the use of rapid tests at school as well. Why is there so much resistance to the introduction of such a helpful bridge technology that could make the period before full vaccination much safer? We argue that the underlying ethical issues that may partly explain these problems have been underestimated so far and deserve closer examination. Therefore, this poster presentation will deal with ethical issues around Covid-19 school screenings via rapid self-tests. The test kits themselves can be regarded as bio-technological innovations, and, as it will turn out, their implementation can be fruitfully analyzed with the tools of the ethics of technology. Comparing the examples of the Netherlands, Austria and Germany, we argue that the use of rapid tests in schools can make them a much safer place, even preventing larger outbreaks that could leak into the community and affect high-risk populations. We sketch a contractualist framework for these reflections that demonstrates that it is in each individual’s interest to adopt and follow a rapid-test strategy for school screenings, while at the same time showing that schools become a much safer place from a public health point of view.

However, from an ethical perspective, we will shed light on a number of problems: how should we deal with the greater degree of uncertainty of rapid tests (false positives and false negatives) compared to PCR tests? The gain in safety has to be weighed against problems such as the potential discrimination against children who test positive, the backlash of false feelings of security caused by false negatives, as well as potential conflicts of interest and problems around data privacy, especially if private companies are involved. Last but not least, it will be explored how to handle the lack of compliance of parents, students and teachers, who sometimes even tend to boycott these new technologies for a variety of reasons.

As the use of the tests becomes more and more established, it becomes clear that there are unforeseen repercussions that are otherwise well-known in philosophy of technology as backfire-effects. As it turns out, rapid antigen tests are increasingly used as means to open the economy – a step which is regarded as risky by some. In contrast to their original function as an extra safety-net for society, rapid tests have increasingly become a door-opener for sometimes risky opening steps.

Last but not least, the planned shift from home self-tests to PCR-pooled tests that are about to be introduced in some German schools has to be re-assessed from a perspective of the ethics of risk: Whereas they promise greater epidemiological safety on a public health level, they will potentially decrease safety for the individual students and their families on a daily basis. This effect will require careful ethical assessment.

Even if the Covid-19 pandemic lies behind us in the not-too-distant future, it will be worth the effort to establish an ethics of pandemic screening right now, with the help of the established tools of analysis taken from the ethics of technology. This enterprise will hopefully equip us with some more ethical and societal foresight and will leave us better prepared for the next pandemic.

Responsibility gaps in artificial intelligence and neurotechnology – The case of symbiotic brain-computer interfaces

Giulio Mecacci and Pim Haselager (Radboud University)

There is a remarkable parallel between the consequences of progress in Artificial Intelligence (AI) and in neurotechnology (NT). In both domains, human users function in combination with smart technologies that are aimed at restoring and/or improving their cognition and behavior. In this chapter we wish to explore some similarities and differences between ethical and legal debates about the implications of these two types of technology. We will do this by examining the notion of a ‘responsibility gap’ (Matthias, 2004). Within the context of AI, responsibility gaps arise when human control over intelligent machines diminishes or even disappears because of, among other factors, their growing autonomy and the inherent opacity of their functioning. The resulting difficulties in establishing and attributing responsibility to the human users of AI create the gap. A similar discussion can be discerned in various debates about the impact of neurotechnology on human identity, agency and responsibility. These two strands of literature tend to remain separate, however. Having published in both fields, we would like to explore the differences and similarities between responsibility gaps in AI and NT. In particular, we would like to make use of the meaningful human control framework recently introduced by Santoni de Sio and van den Hoven (2018). In the literature on AI, meaningful human control has been identified as a normative requirement to prevent human responsibility gaps by keeping autonomous machines’ behavior aligned with human controllers’ intentions and values. However, NT introduces the novel possibility that control could come without accompanying intentions, i.e. subconsciously. This is an almost counterintuitive implication of so-called pBCIs (passive brain-computer interfaces, also regularly discussed as symbiotic BCIs). We will argue that the equation “more control = more responsibility”, widely accepted within the several theories of meaningful human control in the context of AI, might not hold as well in the case of pBCIs. We will claim that these particular technologies create responsibility gaps that are particularly challenging for a theory of meaningful human control that aims to fill them. With our analysis we want, first of all, to show that debates about responsibility gaps in AI and NT can mutually enrich each other. Secondly, we aim to indicate that, from a legal and moral perspective, the consequences of NT for responsibility may be even more significant than previously thought.

References

  • Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6(3), 175–183. https://doi.org/10.1007/s10676-004-3422-1
  • Santoni de Sio, F., & van den Hoven, J. (2018). Meaningful Human Control over Autonomous Systems: A Philosophical Account. Frontiers in Robotics and AI, 5, 15. https://doi.org/10.3389/frobt.2018.00015
A scalar approach to vaccination duties

Steven Kraaijeveld (WUR), Euzebiusz Jamrozik (Oxford) and Rachel Gur-Arie (Johns Hopkins University)

The ethical debate about whether individual citizens should accept vaccination is often framed in a binary way: one either has a moral duty to get vaccinated, or one does not. A core argument is that vaccination (e.g., against COVID-19) is not a fully private, self-regarding matter: people may even be said to have a duty to accept vaccination for the sake of others. In this paper, we move beyond binary approaches to vaccination duties and introduce a scalar approach in which the strength of the moral duty to get vaccinated for others depends on a number of different factors (sketched schematically after the list below). More specifically, we argue that the weight of the moral duty to get vaccinated for others depends, among other things, on:

  1. the lifetime probability that an Agent A will be infected with pathogen P,
  2. the probability that A, if infected, will infect individual I or individuals Is with P,
  3. the ex-ante probability that, if infection spreads from A to I/Is, this results in severe harm to I/Is through P,
  4. the degree to which I/Is can reduce the risk of contracting P or the risk of severe harm caused by P,
  5. the probability that I/Is would be infected by agents other than A (whether or not infected by A),
  6. the ex-ante probability of onward chains of transmission beyond close contacts of A that would be directly traceable to A,
  7. the sum of costs for Agent A to be vaccinated against P.
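
One way to render the scalar idea schematically, as an illustrative sketch rather than a formula proposed by the authors, is as a weighting function whose output grows with the transmission-related factors and shrinks with the avoidability of harm and the costs of vaccination; the symbols below merely index the numbered factors above:

$$ W_A = f\big(p_1,\, p_2,\, p_3,\, r_4,\, p_5,\, p_6\big) - c_A $$

where $p_1$, $p_2$, $p_3$ and $p_6$ correspond to factors 1, 2, 3 and 6 (in which $f$ is increasing), $r_4$ and $p_5$ correspond to factors 4 and 5 (in which $f$ is decreasing), and $c_A$ is the sum of costs in factor 7. On such a reading, the duty to get vaccinated is stronger the larger $W_A$ is, rather than being simply present or absent.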

By problematizing the idea that there is a single, general moral duty to get vaccinated for the sake of others and by recasting the discussion of moral duties to get vaccinated as scalar rather than binary, we offer a more nuanced and fine-grained approach to vaccination ethics. Our approach avoids overstating moral duties in cases where those duties may, in fact, be weak. We explore subsequent implications for vaccination policy, particularly regarding vaccine mandates.
