Ethics of Socially Disruptive Technologies conference abstracts

The table of contents lists abstracts in alphabetical order, sorted on the surname of the first author. Clicking on an abstract title will take you to the text of that abstract further down this page (you may have to scroll up a little to get to the start of the abstract).


Table of contents

Keynotes

Catriona McKinnon (University of Exeter, UK)
Geoengineering: Fantasies of Control

Many advocates of research into solar radiation management (SRM) have unwarranted confidence that research programmes can be controlled in ways that minimise the risks of unacceptable damage as a result of SRM deployment. In this talk, I will explore two ‘fantasies of control’ indulged in by the geoclique. The first fantasy involves ignoring the ways in which SRM research programmes can lock-in to deployment, or to ethically unpalatable versions of SRM. The second fantasy involves heroically optimistic assumptions about how some of the worst risks can be minimised in a deployment scenario.

Ingrid Robeyns (Utrecht University)
The Promises and Limits of the Capability Approach for the Ethics of Technology

In the last two decades, the capability approach has become a widely used normative approach in a wide range of disciplines. Yet until recently, the capability approach was often reduced to the work by Amartya Sen and Martha Nussbaum, and there was a poor understanding of how one could think of the capability approach in more general terms. In order to address this issue, I developed a modular account of the capability approach, which aims at describing the capability approach in its most general terms. In my talk, I will present the modular account, show how it can be applied to the ethics of technology, discuss the strengths of this approach, as well as highlight its limits. The bottom line is that the capability approach has some distinct contributions to make to the ethics of technology, but given its modular structure it generally needs to be supplemented with other theories or claims before it can live up to its potential.

Tamar Sharon (Radboud University, Nijmegen)
The Googlization of health: An empirical-philosophical inquiry

As we move into the digital era, companies like Google, Apple and Amazon are increasingly becoming important facilitators of medical research and healthcare provision. This “Googlization of health” may advance personalized medicine in unprecedented ways, but it is also an encroachment of digital capitalism into the sphere of health and medicine, importing all the risks we know from the online world into the world of digital health and medicine. What is at stake in this phenomenon, more than privacy, is no less than the common good. But securing the common good, I argue in this talk, requires first moving beyond a simple dichotomy of markets vs. morals, to identify a plurality of spheres and orders of worth – each with their own conception of the common good – that are at work in this phenomenon. This is paramount in order to better understand the many risks involved in the Googlization of health, in addition to commodification, and further, to determine which conception of the common good should be the dominant one for digital health and medicine in the future.

Abstracts

Kars Alfrink (Delft University of Technology, The Netherlands)
From smartness to cityness: Towards a holistic notion of urban intelligence

Smart technologies are characterised by the capacity to decide and act independently of human control. Such technologies are transforming the design, construction and maintenance of urban infrastructure. They include generative design, additive manufacturing, and the internet of things.
An example of smart urban infrastructure is the MX3D smart bridge, a footbridge that is manufactured using computer-controlled industrial welding robots able to 3D-print stainless steel structures. The geometry of the bridge is created using a parametric approach, where software produces a shape within a set of human-determined constraints. The physical bridge is equipped with internet-connected sensors. These sensors send real-time data to cloud computing infrastructure to produce a software-based replica or “digital twin”. Finally, machine learning is employed to make predictions about the bridge’s structural health, and to infer the presence and behaviour of people on the bridge. [1,2]
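
A minimal sketch of what such a sensing-to-inference pipeline might look like in code. The sensor fields, thresholds, and the update and inference functions below are hypothetical illustrations under stated assumptions, not the actual MX3D implementation:

```python
# Illustrative sketch only: a simplified "digital twin" update step for a
# sensor-equipped bridge. Field names, thresholds, and the presence heuristic
# are hypothetical; the real MX3D system is far more elaborate.
from dataclasses import dataclass
from statistics import mean

@dataclass
class SensorReading:
    strain: float        # e.g. microstrain from a strain gauge
    vibration: float     # e.g. accelerometer magnitude
    temperature: float   # e.g. deck temperature in degrees Celsius

def update_digital_twin(readings: list) -> dict:
    """Aggregate raw sensor readings into the state of a software replica."""
    return {
        "mean_strain": mean(r.strain for r in readings),
        "mean_vibration": mean(r.vibration for r in readings),
        "mean_temperature": mean(r.temperature for r in readings),
    }

def estimate_presence(state: dict) -> str:
    """Toy inference of pedestrian presence from aggregated vibration."""
    if state["mean_vibration"] > 0.5:  # arbitrary illustrative threshold
        return "people likely present on bridge"
    return "bridge likely empty"

if __name__ == "__main__":
    batch = [SensorReading(12.1, 0.7, 18.0), SensorReading(11.8, 0.6, 18.1)]
    twin_state = update_digital_twin(batch)
    print(twin_state, "->", estimate_presence(twin_state))
```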

Currently in storage, the bridge is awaiting placement over a canal in De Wallen, the city of Amsterdam’s best known red-light district. Located in the oldest part of the city, De Wallen is a massively popular tourist destination but also still a residential area and home to a range of businesses, not all of which are connected to the tourist trade. In its recent history De Wallen has become a focal point of the city-wide social and political debate about the impact of mass tourism on Amsterdam’s liveability.

Smart public infrastructure such as the MX3D bridge can be seen as one of the building blocks of the smart city — urban agglomerations embedded with information technology in an effort to enable efficient governance and sustainable growth, and to improve quality of life.

The smart city has been criticised using Gilles Deleuze’s notion of the society of control [3] as a project that seeks to condition, constrain and incentivise particular behaviours at the expense of citizens’ agency, preferably in ways that recede from view. [4,5,6,7]

Relatedly, Saskia Sassen has introduced the concept of cityness [8,9,10] to capture the idea that urbanity requires what she calls “the possibility of making” — the ability for citizens to shape their environment. Sassen has also written about how technological control can turn the city into a managed space that prevents this cityness from being produced. [ref]

In this case study, we offer a rich description of the MX3D smart bridge as a prime example of an emerging new category of smart public infrastructure, with a focus on its design and manufacturing and the rationales driving those aspects. Using the concept of smartness we show how the bridge embodies elements of the society of control. Subsequently, using the concept of cityness we critique the bridge as being harmful to urban capabilities. In conclusion, we explore the possibility of marrying the notions of smartness and cityness — suggesting that, from a design perspective, these ideas may not be as much at odds as one might think at first glance.

References

  1. https://mx3d.com/projects/bridge-2/
  2. https://mx3d.com/smart-bridge/
  3. Deleuze, G. (1992). Postscript on the Societies of Control. October, 59, 3–7.
  4. Iveson, K., & Maalsen, S. (2019). Social control in the networked city: Datafied dividuals, disciplined individuals and powers of assembly. Environment and Planning D: Society and Space, 37(2), 331–349. https://doi.org/10.1177/0263775818812084
  5. Sadowski, J., & Pasquale, F. (2015). The spectrum of control: A social theory of the smart city. First Monday, 20(7). https://doi.org/10.5210/fm.v20i7.590
  6. Shaw, J., & Graham, M. (2017). An Informational Right to the City? Code, Content, Control, and the Urbanization of Information. Antipode, 49(4), 907–927. https://doi.org/10.1111/anti.12312
  7. Krivý, M. (2018). Towards a critique of cybernetic urbanism : The smart city and the society of control. Planning Theory, 17(1), 8–30. https://doi.org/10.1177/1473095216645631
  8. Sassen, S. (2013). Does the city have speech? Public Culture, 25(2 70), 209–221.
  9. Sassen, S. (2010). Cityness. Roaming thoughts about making and experiencing cityness. Ex Aequo, (22), 13–18.
  10. Sassen, S. (2005). Cityness in the urban age. Urban Age Bulletin, 2, 1–3
  11. Sassen, S. (2012). Urbanising technology. Springer.

Joost Alleblas (Delft University of Technology, The Netherlands)
Radical value change in early stages of technology adoption. The historical case of the electrification of Europe in the Interbellum: Paris and Berlin

Much has been said and written about emerging technologies and their supposed disruptive effects on markets, businesses, corporations and ways of life – once these technologies pass a certain threshold of adoption. Although most authors seemingly agree that disruption is a sudden and unpredictable event, few technologies meet the criteria as outlined, for instance, by the Harvard Business Review (HBR), which claims to have originated the term. In their assessment of Uber (Christensen, Raynor, & McDonald, 2015), for instance, they claim the taxi app is not disruptive, because the innovation did not originate in a low-market foothold, nor did mainstream consumers wait to jump on the bandwagon until the quality of the service met their criteria. Rather, the Uber app is seen as an incremental innovation that improved the overall quality of the taxi service (Christensen, Raynor, & McDonald, 2015).

Electricity, however, meets both criteria to be called disruptive. True adoption of electricity, and its true disruptive moment in consumer markets, only took place after WWII, when the quality of the electrical grid could live up to the standards of competitors like oil, gas and coal. Furthermore, as is often the case with disruptive innovations, the innovation itself preceded actual use scenarios. Electricity didn’t solve any design problems so much as it posed a never-ending line of them – design problems having to do with safety, scalability, security, affordability, capital attraction, regulation, etc. Early adoption of electricity in city lighting and tramways was for decades the only real use case of electricity in places like Paris. The true disruptive force of electricity irrupted only after and, to a certain extent, because of WWII.

Assuming that the disruption of markets is a more gradual process than people would expect, value change can nevertheless be radical and therefore morally disruptive. In this paper I will address how radical value change emerged before electricity became a disruptive commodity, and not because it became a disruptive commodity. I will claim that value change can precede market disruption. I will execute a comparative case study of two cities that were once considered cities of light – Paris and Berlin. Through this analysis, I will show how in both cases value change preceded large-scale adoption of electricity and paved the way for the disruptive force to fully emerge. I will argue that an avant-garde of early adopters highlighted electricity as the modern, progressive force of an urban elite, with its own cosmopolitan ideology that quickly became dominant in urban arts, policies and practices, even though its mass implementation was decades away.

Bibliography (provisional)

  • Barles, S. (2015). The main characteristics of urban socio-ecological trajectories: Paris (France) from the 18th to the 20th century. Ecological Economics, 118, 177-185.
  • Christensen, C. M., Raynor, M. E., & McDonald, R. (2015). What is disruptive innovation? Harvard Business Review, 93(12), 44-53.
  • Flonneau, M. (2016). The Metamorphosis of Public Transport Services in the Paris Region: the Modal and Moral Victory of ‘Automobilism’ in the 1920s and 1930s. From Rail to Road and Back Again?: A Century of Transport Competition and Interdependency, 248.
  • Hughes, T. P. (1993). Networks of power: electrification in Western society, 1880-1930. JHU Press.
  • Killen, A. (2006). Berlin Electropolis: shock, nerves, and German modernity (Vol. 38). Univ of California Press.
  • Kim, E., & Barles, S. (2012). The energy consumption of Paris and its supply areas from the eighteenth century to the present. Regional Environmental Change, 12(2), 295-310.
  • Millward, R. (2006). Business and government in electricity network integration in Western Europe, c. 1900–1950. Business History, 48(4), 479-500.
  • Moss, T. (2014). Socio-technical change and the politics of urban infrastructure: managing energy in Berlin between dictatorship and democracy. Urban Studies, 51(7), 1432-1448.
  • O’Brien, C. (2006). A Culture of Light: Cinema and Technology in 1920s Germany. Modernism/modernity, 13(2), 397-399.
  • Schivelbusch, W. (1995). Disenchanted night: The industrialization of light in the nineteenth century. Univ of California Press.
  • Ward, J. (2001). Weimar surfaces: Urban visual culture in 1920s Germany (Vol. 27). Univ of California Press.

Mandi Astola (Eindhoven University of Technology, The Netherlands)
The virtues of a co-creator

“Innovators in a broad sense,” rather than specifically engineers, scientists or designers, is a category of actors which has only recently become important. Innovators in a broad sense are a growing group due to new developments in the landscape of innovation (Sanders and Stappers 2008). One such development, which aligns with the emergence of the “prosumer” and the sharing economy, is the co-creation of innovations (Humphreys and Grayson 2008). Co-creation is the involvement in the innovation of a product of various parties, usually with some stake in the product, such as future users. The existing virtue theories in ethics aimed at professionals do not capture the moral standards of co-creation practices.

The virtues of “innovators in a broad sense” are characterized in this paper as the virtues of those who excel in a co-creation context. Previously I have argued that the ideology of co-creation can only function with a virtue theory which acknowledges the fact that moral goods are not always reducible to the virtues of individuals. Therefore, a virtue theory which takes co-creation logic seriously must either acknowledge that there are collectively and individually possessed virtues, or that there are distinct procedural and teleological virtues.

I will sketch a substantive picture of important virtues in co-creation, showing that they are different from those in professional domains like engineering. Drawing on literature and discussions with co-creators, I will refine how these can be characterized as collective or individual, and as procedural or consequential. Steen (2013) has described the virtues in participatory design: cooperation, curiosity, creativity, empowerment and reflexivity. Using findings from empirical work and design literature, I will support and supplement this list for the context of co-creation.

This paper will discuss in particular the virtue of creativity and show that it can take collective, individual, teleological and procedural forms. Which form the virtue is seen as taking has implications for who is to be held admirable for an act of creativity and therefore, who deserves rights to intellectual property.
The conclusions of this paper can be summarized as:
1. The virtues of a co-creator are different from the virtues in other innovation-related professional contexts
2. The virtues of a co-creator can be collective or individual
3. Whether creativity is seen as a collective or individual virtue has profound implications for the ethics of intellectual property

References

  • Davis, Michael, and Kelly Laas. 2014. “‘Broader Impacts’ or ‘Responsible Research and Innovation’? A Comparison of Two Criteria for Funding Research in Science and Engineering.” Science and Engineering Ethics 20 (4): 963–83. https://doi.org/10.1007/s11948-013-9480-1.
  • Fisher, Elizabeth, and Rene von Schomberg. 2006. “Implementing the Precautionary Principle: Perspectives and Prospects.” https://philpapers.org/rec/FISITP-2.
  • Humphreys, Ashlee, and Kent Grayson. 2008. “The Intersecting Roles of Consumer and Producer: A Critical Perspective on Co-production, Co-creation and Prosumption.” Sociology Compass 2 (3): 963–80. https://doi.org/10.1111/j.1751-9020.2008.00112.x.
  • Iatridis, Konstantinos, and Doris Schroeder. 2015. “Responsible Research and Innovation in Industry: The Case for Corporate Responsibility Tools.” https://philpapers.org/rec/IATRRA.
  • Limson, Janice. 2018. “Putting responsible research and innovation into practice: a case study for biotechnology research, exploring impacts and RRI learning outcomes of public engagement for science students.” Synthese, December. https://doi.org/10.1007/s11229-018-02063-y.
  • Rip, Arie. 2014. “The past and future of RRI.” Life Sciences, Society and Policy 10 (1): 17. https://doi.org/10.1186/s40504-014-0017-4.
  • Ruggiu, Daniele. 2015. “Anchoring European Governance: Two Versions of Responsible Research and Innovation and EU Fundamental Rights as ‘Normative Anchor Points’.” NanoEthics 9 (3): 217–35. https://doi.org/10.1007/s11569-015-0240-3.
  • Salles, Arleen, Kathinka Evers, and Michele Farisco. 2018. “Neuroethics and Philosophy in Responsible Research and Innovation: The Case of the Human Brain Project.” Neuroethics, June. https://doi.org/10.1007/s12152-018-9372-9.
  • Sanders, Elizabeth B.-N., and Pieter Jan Stappers. 2008. “Co-creation and the new landscapes of design.” CoDesign 4 (1): 5–18. https://doi.org/10.1080/15710880701875068.
  • Schomberg, Rene von. 2006. “The normative challenges of the precautionary principle.” https://philpapers.org/rec/VONTNC.
  • Schomberg, René von. 2008. “From the ethics of technology towards an ethics of knowledge policy: implications for robotics.” AI & SOCIETY 22 (3): 331–48. https://doi.org/10.1007/s00146-007-0152-z.
  • Schomberg, René von. 2013. “A Vision of Responsible Research and Innovation.” In Responsible Innovation, 51–74. Chichester, UK: John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118551424.ch3.
  • Schroeder, Doris, and Miltos Ladikas. 2015. “Towards Principled Responsible Research and Innovation: Employing the Difference Principle in Funding Decisions.” https://philpapers.org/rec/SCHTPR-9.
  • Stahl, Bernd Carsten, Grace Eden, Marina Jirotka, and Mark Coeckelbergh. 2014. “From computer ethics to responsible research and innovation in ICT.” Information & Management 51 (6): 810–18. https://doi.org/10.1016/j.im.2014.01.001.
  • Steen, Marc. 2013. “Virtues in Participatory Design: Cooperation, Curiosity, Creativity, Empowerment and Reflexivity.” Science and Engineering Ethics 19 (3): 945–62. https://doi.org/10.1007/s11948-012-9380-9.
  • Wong, Pak-Hang. 2016. “Responsible Innovation for Decent Nonliberal Peoples: A Dilemma?” https://philpapers.org/rec/WONRIF.
  • Zwart, Hub, Laurens Landeweerd, and Arjan van Rooij. 2014. “Adapt or perish? Assessing the recent shift in the European research funding arena from ‘ELSA’ to ‘RRI’.” Life Sciences, Society and Policy 10 (1): 11. https://doi.org/10.1186/s40504-014-0011-x.

Lotte Asveld and Zoë Robaey (Delft University of Technology, The Netherlands)
Capabilities for Inclusion: building sustainable and inclusive biobased value chains

In the bio-economy, new avenues for using biomass are leading the energy transition. For instance, recent decisions in the Dutch marine sector to increase the use of biofuels in order to achieve sustainability goals (Goodfuels 2019) are mirrored in the jet fuel sector (SkyNRG 2019). Where will the biomass come from in order to achieve these sustainability goals? An increasing demand for biofuels will mean that farmers will participate in new value chains. This could mean using different crops, or it could mean delivering crops and agricultural residues to new markets.

There are several challenges in doing so, which all relate to uncertainty. New crops or new markets may demand new harvesting practices and building new relationships with actors along the supply chain. These also bring about uncertainty about pricing, about how farmers make choices in their communities, and about how they relate to their environment. Doing responsible innovation in the energy transition calls for special attention to farmers, as they become the new providers of feedstock for energy systems. This normative goal can be understood as inclusion. A parallel may be drawn to the miners who provided the coal that fuelled the industrial revolution. That revolution, besides greatly increasing some of the world’s well-being, was also exploitative and resulted in serious environmental consequences.

Farming practices embody a number of values such as identity, freedom, care for the soil and economic progress (Robaey, Sinha and Asveld, forthcoming). While these practices are heterogeneous within a community, farmers observe each other’s choices, and their participation in new practices often hinges upon successful demonstration and the successful participation of other farmers, or early adopters. How we understand inclusion matters for how we assess and implement it. For the bio-economy to continue leading the energy transition, a sustainable input of feedstock is needed, and for this the inclusion of farmers is needed.

For farmers, new value chains mean that uncertainties linked to pricing, to new practices, and to social and environmental impacts will matter in their choices. Understanding what a desirable choice is for a farmer, and for a farming community, can in turn inform other actors in the value chain in terms of providing technological options. Building on field work experience looking at possible pathways for the transition of the sugarcane sector in Jamaica (Francke, Robaey and Asveld, forthcoming; Robaey, Bailey, and Asveld, forthcoming), our framework suggests doing so by understanding how technological choices (new crops, new practices, or new processes) 1) add to the capabilities of an individual (Robaey, Asveld and Osseweijer, 2018) and 2) build and add to community capitals, so that individuals and communities can better deal with uncertainties and make responsible choices in innovation. In turn, we suggest that technology and project developers can learn to adapt by integrating capabilities and community capitals into their goals in order to do responsible innovation. This framework ultimately offers an encompassing and dynamic definition of inclusion.

References

  • https://goodfuels.com/varos-subsidiary-reinplus-fiwado-goodfuels-and-nederlands-loodswezen-develop-partnership-to-supply-more-sustainable-biofuels/
  • https://skynrg.com/press-releases/klm-skynrg-and-shv-energy-announce-project-first-european-plant-for-sustainable-aviation-fuel/
  • Francke, Robaey and Asveld. forthcoming. The Bioeconomy transition: Radical or incremental innovation? Using TIS to do context sensitive bio-refinery design
  • Robaey, Bailey, and Asveld. forthcoming. Towards an inclusive bioeconomy – capitals of farming communities and their normative dimension. Eursafe conference 2019
  • Robaey, Sinha and Asveld. forthcoming. Co-creating futures for an inclusive bio-economy: the case of second generation biofuels in Iowa
  • Robaey. Z., Asveld, L. and Osseweijer, P. 2018. Roles and responsibilities in transition? Farmers’ ethics in the bio-economy. In Svenja Springer and Herwig Grimm (eds.) Professionals in food chains. Wageningen Academic Publishers. pp. 49-54.

David van den Berg and Mirko Schaefer (Utrecht University, The Netherlands)
Data Ethics Decision Aid (DEDA) — A bottom-up approach for Ethics by Design

In the half-decade after the Data Revolution (Kitchin, 2014), the academic world and the world of governance are still coming to terms with the ethical challenges posed by data science (Floridi, 2016; Van Schie, Westra, Schäfer, 2017). These challenges are now coalescing in the form of guidelines, such as those proposed by the EU, which state that seven requirements should be met in order to develop trustworthy AI[1]. These guidelines concern AI, but similar attempts have been made for (big) data projects or technological development in general[2]. Such guidelines, principles or handholds are intended to lead to ethics by design (even when this is not made explicit). Yet these methods do not guarantee ethics by design. Ethics by design is not created by developing frameworks intended to superimpose top-down morality on the design process. Ethics by design can only be generated by incorporating personal and organisational values into the design process, by having multidisciplinary development teams with high degrees of ethical awareness work together.

The Data Ethics Decision Aid (DEDA) is a dialogical tool that allows data teams to perform an ethical impact assessment. It also serves a purpose for researchers, as a tool that provides insight into the data practices of (local) governmental organisations. Drawn from two years of participatory observation in municipalities, DEDA has proven to increase data-ethical awareness in its participants. It has been licensed to the coalition of Dutch municipalities (VNG) and is being implemented by the ministry of general affairs and the policy academy in the Netherlands. When used as a review/assessment tool, DEDA successfully points out the ethical pitfalls of the project at hand and any gaps in Moore’s strategic triangle (1995): public value, operational capacity or legitimacy. DEDA is a tool that gently stimulates values by design while recognizing that a top-down approach to values by design is not enough. Values are dependent on socio-cultural context (Turiel, 2002), and technological development is affected by social systems and the people in them (Friedman, Kahn, & Borning, 2002). A municipality led by a liberal party will want to develop data-driven policies based on different values than a municipality led by a conservative council, requiring different technological practices to fuel data-driven governance. Whatever method is used to implement ethics or values by design, it has to be able to take into account the differences in socio-cultural (and political) context. DEDA is able to take these variances into account by being a bottom-up, dialogical approach to ethics by design that never tries to be prescriptive and reduce ethics to a checklist.

Footnotes

  1. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  2. https://theodi.org/article/data-ethics-canvas/, https://www.ethicscanvas.org/, https://strategyzer.com/canvas/business-model-canvas/, https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework/, https://responsibledata.io/

References

  • Floridi, L., & Taddeo, M. (2016). What is data ethics? Philos. Trans. A Math. Phys. Eng. Sci., 374, 20160360.
  • Friedman, B., Kahn, P., & Borning, A. (2002). Value sensitive design: Theory and methods. University of Washington technical report, (02–12).
  • Kitchin, R. (2014). The data revolution: Big data, open data, data infrastructures and their consequences. Sage.
  • Moore, M. H. (1995). Creating public value: Strategic management in government. Harvard university press.
  • Turiel, E. (2002). The culture of morality: Social development, context, and conflict. Cambridge: Cambridge University Press.
  • Van Schie, G., Westra, I., & Schäfer, M. T. (2017). Get Your Hands Dirty: Emerging Data Practices as Challenge for Research Integrity. 188 – 300

Bas de Boer (University of Twente, The Netherlands)
Turning a lifestyle into a disease: Analysing the scientific promotion of a healthy lifestyle

Currently, several scientific projects are devoted to the promotion of a healthy lifestyle, for example, the 4TU Pride and Prejudice Project. The development and use of health-tracking technologies is thought to be a key factor for the success of such projects. The assumption underlying such projects is that an increase in knowledge—mediated by health-tracking technologies—about several health parameters (e.g., food intake, physical activity) will motivate individuals to more actively pursue a healthy lifestyle. For this reason, health-tracking technologies can be understood as disruptive: they are explicitly designed to transform the everyday life of their users and disrupt their existing habits. By attaining knowledge of the relation between one’s lifestyle and one’s biology, individuals are able to take better responsibility for their health status, so it is thought. In this paper, I problematize this assumption from two different perspectives: Firstly, I show that such projects problematically assume that ‘responsibility’ is a static concept, and fail to take into account that what is taken to be responsible is mediated by the health-tracking technologies used. Secondly, I show—drawing on the work of Georges Canguilhem—that a scientific perspective on health assumes a rather specific distinction between the ‘normal’ and the ‘pathological’, which does not need to coincide with the way this distinction is drawn by the targeted individuals. To conclude, I argue that the seemingly objective promotion of a healthy lifestyle by means of scientific knowledge is in fact a strongly normative promotion of a particular form of subjectivity.

Mieke Boon (University of Twente, The Netherlands)
How scientists are brought back into science – The error of empiricism

This paper aims to contribute to the critical investigation of whether human-made scientific knowledge, and the scientist’s role in developing it, will remain crucial – or whether data models automatically generated by machine-learning technologies can replace scientific knowledge produced by humans. Influential opinion-makers claim that the human role in science will be taken over by machines. Chris Anderson’s (2008) provocative essay, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete, will be taken as an exemplary expression of this opinion.

The claim that machines will replace human scientists can be investigated from several perspectives (e.g., ethical, ethical-epistemological, practical and technical). This chapter focuses on epistemological aspects concerning ideas and beliefs about scientific knowledge. The approach is to point out the epistemological views supporting the idea that machines can replace scientists, and to propose a plausible alternative that explains the role of scientists and human-made science, especially in view of the multitude of epistemic tasks in practical uses of knowledge. Whereas philosophical studies into machine learning often focus on reliability and trustworthiness, the focus of this chapter is on the usefulness of knowledge for epistemic tasks. This requires distinguishing between epistemic tasks for which machine learning is useful and those that require human scientists.

In analyzing Anderson’s claim, a kind of double move is made. First, it will be made plausible that the fundamental presuppositions of empiricist epistemologies give reason to believe that machines will ultimately make scientists superfluous. Next, it is argued that empiricist epistemologies are deficient because they neglect the multitude of epistemic tasks of and by humans, for which humans need knowledge that is comprehensible to them. The character of machine learning technology is such that it does not provide such knowledge.

It will be concluded that machine learning is useful for specific types of epistemic tasks such as prediction, classification, and pattern-recognition, but for many other types of epistemic tasks —such as asking relevant questions, problem-analysis, interpreting problems as of a specific kind, designing interventions, and ‘seeing’ analogies that help to interpret a problem differently— the production and use of comprehensible scientific knowledge remains crucial.
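
To make the contrast concrete, here is a minimal sketch of the kind of epistemic task machine learning handles well — classification from data — using scikit-learn and its bundled iris dataset purely as assumed stand-ins; the surrounding questions (what to measure, how to frame the problem, what the classes mean) remain the human scientist’s work:

```python
# Minimal illustration: a learned model delivers predictions and an accuracy
# figure, but framing the problem, choosing the features and interpreting the
# output remain epistemic tasks for human scientists.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Load a toy dataset (a stand-in for any measured phenomenon).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit a classifier and predict labels for unseen data.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)

# The output is a score, not an explanation of why the classes matter
# or which question was worth asking in the first place.
print("accuracy:", accuracy_score(y_test, predictions))
```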

References

  • Abu-Mostafa, Y.S., Magdon-Ismail, M., and Lin, H-T. (2012). Learning from data. AMLbook.com
  • Alpaydin, E. (2010). Introduction to machine learning. The MIT Press: Cambridge.
  • Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired Magazine June 23, 2008. Retrieved from: https://www.wired.com/2008/06/pb-theory/
  • Bogen, J., & Woodward, J. (1988). Saving the Phenomena. The Philosophical Review, 97(3), 303-352. doi:10.2307/2185445
  • Boon, M., & Knuuttila, T. (2009). Models as Epistemic Tools in Engineering Sciences: a Pragmatic Approach. In A. Meijers (Ed.), Philosophy of technology and engineering sciences. Handbook of the philosophy of science (Vol. 9, pp. 687-720): Elsevier/North-Holland
  • Humphreys, P. (2009). The philosophical novelty of computer simulation methods. Synthese, 169(3), 615-626. doi:10.1007/s11229-008-9435-2
  • Nersessian, N. J. (2009). Creating Scientific Concepts. Cambridge, MA: MIT Press.
  • Suppe, F. (1974). The Structure of Scientific Theories (second printing, 1979). Urbana: University of Illinois Press.
  • Suppe, F. (1989). The Semantic Conception of Theories and Scientific Realism. Urbana and Chicago: University of Illinois Press.
  • Suppes, P. (1960). A Comparison of the Meaning and Uses of Models in Mathematics and the Empirical Sciences. Synthese, 12, 287-301.
  • Van Fraassen, B. C. (1977). The pragmatics of explanation. American Philosophical Quarterly, 14, 143-150.
  • Van Fraassen, B. C. (1980). The Scientific Image. Oxford: Clarendon Press.

Leonie Bossert, Cordula Brand and Thomas Potthast (University of Tübingen, Germany)
Substituting ecosystem services by AI machines. The case of pollinating robotic bees – an ethical reflection

Technologies based on Artificial Intelligence are already widespread in daily life as well as in supporting societal institutions – and they will become ever more important. It is therefore necessary to reflect on these technologies as a societal practice from an ethical perspective. For this ethical engagement we hold it essential to include the concept of Sustainable Development (SD) (WCED 1987), which intertwines the development of just societies with environmental concerns. Without including SD strategies, AI technologies have the clear potential of overstepping planetary boundaries (Rockström et al. 2009) even faster and of distancing societies even further from (global and local) peace (WBGU 2019). A prevalent perspective in SD debates states that ecosystem services (ESS) are to a great extent not substitutable by technology or by the accumulation of monetary capital. If technologies emerge which are able to undertake ESS, what does that mean for SD, for societies, for humanity’s role within “nature”, for the valuation of ecosystems? One example of such a technology is currently being developed: robotic bees. Robotic bees are small robots, developed to fly autonomously and to pick up pollen from one flower and pollinate another flower with it. As such, they are obviously able to carry out ESS (Amador/Hu 2017; Chechetka et al. 2017). With robotic bees, biotechnology provides a possibility to work around the problem of global bee mortality, and might provide the service in an even more targeted way. If robot-based artificial pollination becomes feasible, this technology clearly has disruptive potential. First of all, the (already fuzzy) demarcation line between naturalness and artificiality (Lie 2016) needs to be renegotiated, as regards epistemological as well as ethical orientation. Second, the human-nature relationship can change fundamentally if humanity develops ESS-substituting technology and makes millions-of-years-old ESS obsolete. Third, one needs to rethink the values of agricultural technologies and practices as well as of nature conservation, which play a crucial role in current ethical debates. Likewise, a renegotiation of the ethical foundations of SD needs to take place, since many of them assume a certain non-substitutability of ESS. Beyond these essential theoretical aspects, robotic bees would also bring important changes for the practice of food production and food supply. Nearly all globally consumed fruits depend on pollination for growing the edible fruit, a process which might then be performed by AIs, changing production patterns in a radical way. In that sense, issues of the conditio humana, free & fair society, as well as nature, life & human intervention are at stake.

In this paper we will first work out the disruptive potential of robotic bees as a substitution of ecosystem services. In a second step, we will reflect on the example from the perspectives of environmental, sustainability, and technology ethics, as well as from a global justice point of view. On this basis, the implications of this future technology will be considered. Finally, we try to offer possible answers to the question of how to deal with the disruptive potential of such a technology.

Literature

  • Amador, Guillermo; Hu, David L. (2017): Sticky Solution Provides Grip for the First Robotic Pollinator. In: Chem 2, pp. 162-170.
  • Chechetka, Svetlana A.; Yu, Yue; Tange, Masayoshi; Miyako, Eijiro (2017): Materially Engineered Artificial Pollinators. In: Chem 2, pp. 224-239.
  • Lie, Svein Anders Noer (2016): Philosophy of Nature: Rethinking Naturalness. Routledge 2016.
  • Rockström, Johan et al. (2009): Planetary Boundaries: Exploring the Safe Operating Space for Humanity. In: Ecology and Society 14 (2): 32.
  • WBGU – Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen (2019): Unsere gemeinsame digitale Zukunft. Online available: https://www.wbgu.de/fileadmin/user_upload/wbgu/publikationen/hauptgutachten/hg2019/pdf/WBGU_HGD2019_Z.pdf (06.06.2019)
  • WCED – World Commission on Environment and Development (1987): Our Common Future. Oxford, New York: Oxford University Press.

Britte Bouchaut and Lotte Asveld (Delft University of Technology, The Netherlands)
The thinkers vs. the doers: Product, process and system applications of Safe-by-Design

Advanced gene editing techniques such as CRISPR/Cas have increased the pace of developments in the field of industrial biotechnology. Such techniques open up new possibilities for working with living organisms, possibly leading to more risks, and in particular to uncertain risks. A suggested candidate for anticipating these uncertain risks is the Safe-by-Design (SbD) approach. SbD engages different stakeholders throughout a biotechnology’s development process, enabling collective design with safety in mind (van de Poel & Robaey, 2017). When many stakeholders are involved, it is important that the expectations, notions and perceptions these stakeholders have are known and aligned while designing for safety. Any mismatches might lead to difficulties in choosing ‘the right’ design options, complicating the achievement of a collective design in which an adequate level of safety is met (Robaey, 2018).

SbD implies both engineered and procedural safety, i.e. product-applied and process-applied SbD (Khan & Amyotte, 2003; Stemerding & de Vriend, 2016; van de Poel & Robaey, 2017). Product-applied SbD includes measures specifically aimed at the product itself, and usually takes place at the beginning of a biotechnology’s development process, e.g. the idea and R&D phases. In that sense, stakeholders who adhere to this perspective seem to put more emphasis on product specifications in terms of what would be acceptably safe. Process-applied SbD includes measures that cannot be specifically applied to technical components; it adheres more to societal issues and involves decision-making at other levels, e.g. the policy level. This perspective on SbD is mostly applied later in the development process, i.e. at market implementation. Stakeholders applying SbD process-wise put more emphasis on the process itself and the societal issues that arise during this process.

Although SbD finds its strength in its ability to anticipate early on the issues identified by a variety of stakeholders, this also creates tension between them. Societal issues identified by stakeholders active in later stages of a biotechnology’s development should ideally already be taken into account during the early stages of development. But, as stakeholders active at the beginning tend to have a different focus, there seems to be a lack of understanding of why emphasis should already be put, during these early stages of development, on safety issues that may not be relevant until later. This early anticipation of possible societal issues and accounting for them is often perceived as a burden by researchers, suggesting a mismatch between the different applications of SbD.

This paper explores the two approaches to SbD, product-applied and process-applied, and whether there is actually a mismatch between the perspectives of stakeholders active at the beginning and at the end of a biotechnology’s development process. In addition, it explores to what extent these perspectives affect stakeholders’ decision-making with regard to risks and safety, and what this might mean for applying the SbD approach as a risk governance instrument.

Bibliography

  • Khan, F. I., & Amyotte, P. R. (2003). How to Make Inherent Safety Practice a Reality. The Canadian Journal of Chemical Engineering, 81(1), 2–16.
  • Robaey, Z. (2018). Dealing with risks of biotechnology: understanding the potential of Safe-by-Design.
  • Stemerding, D., & de Vriend, H. (2016). Nieuwe risico’s , nieuwe aanpak – Synthetische Biologie en Safe-by-design.
  • van de Poel, I., & Robaey, Z. (2017). Safe-by-Design: from Safety to Responsibility. NanoEthics, 1–10.

Sage Cammers-Goodwin (University of Twente, The Netherlands)
Negotiating the right to privacy on a smart bridge

“Right to be let alone”

This is the definition of privacy that Louis Brandeis and Samuel Warren conceptualized in the seminal article “The Right to Privacy” published in 1890 by the Harvard Law Review.

While people have experienced a growing constriction of their privacy in order to fully participate in the digital age, there remains a sense that one could abandon the virtual world (and all the rights and privileges that come with it).

However, this dynamic changes when the built environment becomes one with personalized digital infrastructures. This can take place bidirectionally. First, digitalization may be necessary to participate in and navigate the built environment – imagine a bus card being needed to take public transport, or facial recognition being used to enter a building. Second, the built environment may automatically digitalize interactions – CCTV might capture and apply facial recognition to passers-by.

Using the case study of a sensor-embedded bridge to be installed in Amsterdam’s Red Light District, and feedback from a workshop series in Enschede, the Netherlands, the question is posed: what is the right to privacy in public space?

Does it matter what type of information is gathered and for what purpose, or who is doing the gathering and who has access? Can privacy still be invaded if one is anonymous or part of a much larger group? The implications of the answers to these questions are explored first on a micro level in relation to the bridge, after which the benefits of universal best practices are briefly investigated.

Marianna Capasso (Sant’Anna School of Advanced Studies, Italy)
Science and society. Socially Disruptive Technologies in genetic engineering

In this paper, I will discuss the meaning and possible regulations of CRISPR-Cas9, a specific technique of genetic engineering, in its application to the human genome. I will argue that CRISPR can be included in the list of new socially disruptive technologies (SDT), focusing on three of its interrelated characteristics.

First, CRISPR claims to realize almost complete control over the genome. Indeed, I will show how both CRISPR supporters and detractors use the same naïve deterministic account of gene function, creating a complex and unexamined bias in public discourse on genes and genetic engineering. I will disentangle what I call the “playing God” metaphor and its three false assumptions about CRISPR from the actual functioning of this SDT. I will do so with the help of critics of genetic determinism, such as Lewontin (1984; 2003), and with the results of recent scientific studies on genetic engineering techniques.

Second, as an SDT, CRISPR blurs the line between the biological/natural and the artificial. I will analyse its peculiarity amongst the other snip-and-fix methods of genetic engineering, focusing on the fact that it does not leave traces of the intervention after the organism has been modified. Thus, in an a posteriori analysis it is quite impossible to distinguish between a CRISPR-modified organism and one that is not.

Third, CRISPR emerges as a response to a societal challenge: the need to prevent and try to eradicate genetic disease with no cure.
After having characterised CRISPR as an SDT, in the second section of my paper I will set out to lay down the epistemological status of this technology and its relation to science and society. I will argue that both empiricist epistemology and social constructivism fail to give an adequate conceptual framework for new scientific objects, i.e. GMOs and modified embryonic cells. However, I will maintain that the pragmatist transactional conception of knowledge and reality expressed by Dewey could provide a framework able (a) to conceptualize new entities such as CRISPR-modified organisms; (b) to show how these new entities are changing science and society simultaneously; and (c) to demonstrate how science in its practices and premises is laden with moral values and objectives. I will do so with the help of Dewey’s works and Barrotta (2018).

Based on this conceptual refinement, in the last section I will argue that denouncing science as ideology because it is laden with values – as Lewontin did, as we have seen in the first section – could also be counterproductive, because values always accompany scientific research. I will argue that the issue here is not the adoption of values, but the selection of values that can better sustain research. Finally, I will attempt a reflective analysis of values in the case of CRISPR, proposing an operationalization of the concept of Meaningful Human Control (MHC), used in the case of Autonomous Weapon Systems, for the domain of genetic engineering. I will include in the concept three value guidelines: safety-oriented design, responsiveness, and collective aim.

Short Bibliography

  • Barrotta, P. (2018), Scientists, Democracy and Society. A community of inquirers. Cham: Springer.
  • Dewey, J. (1908). Does reality possess practical character? In Dewey (1998). The essential Dewey (L. Hickman & T. Alexander Eds.). Bloomington/Indianapolis: Indiana University Press. (Vol. 1, 124–133).
  • Dewey, J. (1909). The influence of Darwinism on philosophy. In Dewey (1998). The essential Dewey (Vol. 1, 39–45).
  • Dewey, J. (1925). Experience and nature. In Dewey (1969–91). The collected works. (The later works, Vol. 1).
  • Lewontin, R., Rose, S.; Kamin, L. (1984). Not in Our Genes: Biology, Ideology and Human Nature. London: Penguin Books.
  • Lewontin, R. (2003), The DNA era: Cellular Complexity and the Failures of Genetic Engineering, GeneWatch 16, 2003, 3-7.

Donna Champion (Nottingham Business School, United Kingdom)
Ethical design for planetary-scale systems

This paper addresses the question: how do we design planetary-scale systems that support inclusive participation, whilst ensuring equitable access to resources? New technologies such as cryptocurrencies, connected products, Internet of Things (IoT) and artificial intelligence all rely on intelligent, decentralised systems, which operate across planetary-scale distributed networks. Our current theories of technology (and its multifaceted impacts) have not been constructed to take account of the degree of complexity, or the scale and speed of operation that is now commonplace for many intelligent systems and connected products.

Sociotechnical theory regards the social and the technical as interdependent but essentially different realms, and work in this field has focused on finding ways of designing social and technological systems so they work in harmony (Mumford 1995; Ostrom, 1990). Sociotechnical design offers end-users a position as key players, stemming from a belief that workers must control productive assets to develop the most effective and efficient way of operating and producing goods in a local context. Similarly, work on socio-materiality has developed ideas of how the social and technical interconnect. Leonardi (2013) makes the case for an approach where “the social and material are separate but are put into relationship with each other” (p. 69), while Orlikowski and Scott (2008) adopt an ‘agential realist’ philosophy (from Barad, 2007), regard the social and material as inherently inseparable, and argue that non-human actors are participants in the production of knowledge. However, all these theories work on the assumption that the design and operation of ‘the system’ will occur within a reasonably bounded organizational context, with an identifiable set of end-users. This assumption is no longer valid for many emerging technologies.

This paper argues that, to design planetary-scale systems that support human rights and human dignity, we need to reconceptualise them as cyber-social collectives. These collectives are constructed from people, software algorithms, peer-to-peer networks and global communications infrastructure, and they act in the world in ways that have political intentionality. Through time, for any cyber-social collective, power and intentionality will be continuously constructed and expressed in different ways, and because these collectives are networked on a planetary scale and have dynamic complexity, beginning their design from a description of their make-up (as required in the sociotechnical design approaches discussed above, and in network analysis and Actor Network Theory, Latour, 1990) is inadequate. We need to think about our designs in terms of the action we wish to see implemented and the political intentionality of that action; this approach also offers a way of holding the collective to account.

Increasingly, we are allowing our planet’s fragile social, cultural and natural resources to be controlled and managed through new disruptive technologies, intelligent systems and networked organizations. By reconceptualising these systems as cyber-social collectives that have political intentionality, we can begin to design global, critical systems in a way that keeps them accountable to the citizenry.

References

  • Barad, K. (2007). Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Duke University Press, Durham and London.
  • Latour, B. (1990). “Technology is society made durable”, The Sociological Review, Vol. 38, S1, pp. 103-131.
  • Leonardi, P.M. (2013). Theoretical foundations for the study of materiality. Information and Organization. 23(2), 59-76.
  • Mumford. E. (1995). Effective Systems Design and Requirements Analysis: the ETHICS Approach. Palgrave, Basingstoke.
  • Orlikowski, W. & Scott, S. (2008). Sociomateriality: Challenging the separation of technology, work and organization. The Academy of Management Annals. 2(1), 433-474.
  • Ostrom, E. (1990). Governing the commons: the evolution of institutions for collective action. Cambridge, New York, Cambridge University Press

Anjan Chamuah and Rajbeer Singh (Jawaharlal Nehru University, India)
Responsibility in embedding values in Indian agriculture insurance: A civilian UAV innovations

An emerging technology like the civilian unmanned aerial vehicle (UAV) is an innovation in Indian agriculture with the potential to disrupt mainstream technologies (Adner, 2002) such as the satellite imagery used in crop insurance. In a democratic country like India, with its diverse socio-cultural aspects, technology affects the culture-specific values of the agricultural community. For technology not only shapes the physical world but also the ethical, legal and social environment in which we live (Jasanoff, 2016), and has social, political and cultural implications (Winner, 2001). Promoting these values and taking care of a new technology can only make its assimilation easier, since it is loaded with many uncertainties and risks (J. Gonzalez, 2015). The Responsible Innovation (RI) approach (Hoven, 2013; Owen et al., 2013; Setiawan & Singh, 2015; Setiawan, Singh, & Romijn, 2017; Stilgoe, Owen, & Macnaghten, 2013; Von Schomberg, 2013), which is about taking care of emerging technology while adhering to ethical, social, economic and environmental viability, is adopted as the theoretical framework for the study. The study addresses the research question of how the dimension of responsibility helps in embedding values into civilian UAV innovations in Indian agriculture insurance.

Prospective interviewees for in-depth interviews were identified through the snowball technique to get an overview of the civil UAV and the current values. Interviews were conducted in the state of Rajasthan and in Delhi from January 2017 to May 2019, aided by an interview schedule. The interviewees are agriculturalists, scientists, researchers, engineers, government employees, technology developers, policy analysts, farmers and consultants associated with organisations such as the ministry of agriculture, the directorate general of civil aviation, crop insurance companies, agricultural universities, research institutes and non-governmental organisations; they are the main actors in the governance and deployment of civil UAVs in Indian agriculture insurance. Verbatim notes were gathered during the interviews, supported by voice recording, and analysed according to the research objectives of the study.

From the analysis of the literature and the in-depth interviews it is evident that the civil UAV is new to Indian agriculture and not yet fully implemented in all states. Though RI is deployed in European countries, it is a new phenomenon in Indian agriculture. In a country like India, where 50% of the population depends on agriculture for its livelihood (Sunder, 2018), innovative technologies like UAVs can make a significant contribution to the development and improvement of agriculture insurance, provided the basic requirements for flying the technology are fulfilled. The findings reflect that, for strong governance and deployment of the technology, promoting the responsibility and accountability of all actors and institutions, while adhering to the culture-specific values of the society, is essential. Values like privacy, security, trust, transparency and affordability assume high significance in a new and emerging technology like the UAV in agricultural applications. Embedding these values can only help in fulfilling the sustainability dimensions (social, economic and environmental).

Keywords: Values; civilian UAV; responsibility; innovation; technology; agriculture insurance

References

  • Adner, R. (2002). When are technologies disruptive? a demand-based view of the emergence of competition. Strategic Management Journal, 23(8), 667–688. https://doi.org/10.1002/smj.246
  • Hoven, J. van den. (2013). Value Sensitive Design and Responsible Innovation. In Responsible Innovation (pp. 75–83). https://doi.org/10.1002/9781118551424.ch4
  • J. Gonzalez, W. (2015). New perspectives on technology, values, and ethics: theoretical and practical. New York, NY: Springer Berlin Heidelberg.
  • Jasanoff, S. (2016). The Ethics of Invention: Technology and the Human Future. W. W. Norton & Company.
  • Owen, R., Stilgoe, J., Macnaghten, P., Gorman, M., Fisher, E., & Guston, D. (2013). A framework for responsible innovation. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, 31, 27–50.
  • Setiawan, A. D., & Singh, R. (2015). Responsible Innovation in Practice: The Adoption of Solar PV in Telecom Towers in Indonesia. In B.-J. Koops, I. Oosterlaken, H. Romijn, T. Swierstra, & J. van den Hoven (Eds.), Responsible Innovation 2 (pp. 225–243). https://doi.org/10.1007/978-3-319-17308-5_12
  • Setiawan, A. D., Singh, R., & Romijn, H. (2017). Embedding Accountability Throughout the Innovation Process in the Green Economy: The Need for an Innovative Approach. In T. Taufik, I. Prabasari, I. A. Rineksane, R. Yaya, R. Widowati, S. A. Putra Rosyidi, … P. Harsanto (Eds.), ICoSI 2014 (pp. 147–158). https://doi.org/10.1007/978-981-287-661-4_17
  • Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580.
  • Sunder, S. (2018, January 29). India economic survey 2018: Farmers gain as agriculture mechanisation speeds up, but more R&D needed – The Financial Express. Retrieved 15 October 2018, from https://www.financialexpress.com/budget/india-economic-survey-2018-for-farmers-agriculture-gdp-msp/1034266/
  • Von Schomberg, R. (2013). A vision of responsible research and innovation. Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, 51–74.
  • Winner, L. (2001). Autonomous technology: technics-out-of-control as a theme in political thought (9. printing). Cambridge, Mass.: MIT Press

Bartek Chomanski (Northeastern University, United States)
Should there be a right to build AI servants?

It is possible that artificial general intelligence (AGI) will be built. If it is going to be built, it is reasonable to expect private companies to be among the pioneers of the technology. What principles should govern the specifics of the ethical design of machines equipped with AGI? Here I will focus on just one aspect of this multifaceted question: Assuming that a machine with AGI will have human-level capacities (to reason, to have phenomenal experience, to communicate and act on its own) – at least some of which are plausibly jointly sufficient for humanlike moral status – should there be a right to build such machines (henceforth, “artificial persons”) and, especially, to build them for commercial purposes?

There is a particular class of artificial persons that has exercised the minds of some philosophers and the general public alike in this context: the “willing slave,” programmed to desire to serve human needs above all else (for simplicity, I assume the AGIs will only desire to serve ethically unobjectionable needs in ethically unobjectionable ways). While a plausible case can be (and has been) made that building servile AGI is unethical, there is, I claim, no good reason for banning the practice. There should be a legal right to design AGI servants. Whatever wrongs one can plausibly be said to inflict on the artificial persons in programming them with servile desires, the wrongs do not deserve legal sanction.

The most popular reasons for thinking that building a servile AGI is unethical have to do with two kinds of considerations: sometimes it is claimed that designing AGIs to be servile violates their autonomy in that it dictates a life plan to them; other times it is claimed that in building servile AGIs one deprives them of the capacity to realize their full potential: e.g. the servile AGI will likely use its abilities to perform necessary but boring or dangerous tasks that humans will not want to perform, rather than spend its life pursuing more fulfilling or worthy projects. (One could also argue that designing servile AGIs indicates a deficient moral character on the part of the programmers.)

Whatever view one takes on this issue, however, one should not take the immorality of such design to be a decisive reason for legally prohibiting companies and individuals from building servile AGIs. This is because either the particular ethical wrongdoing provides no reason at all for criminalizing the action (for example, on Millian liberalism – and even on some versions of legal moralism – the fact that building servile AGI merely evinces an objectionable character trait is no reason to ban it), or the reason for criminalization is weak. I argue for this further claim by considering analogous acts of comparable severity (dictating another’s life-plan or consigning them to a life of unfulfilled potential) that, intuitively, it would be wrong to criminalize. I conclude on the basis of these analogies that there is a right to build servile AGIs, even though companies and individuals shouldn’t do it.

Tom Coggins (Delft University of Technology, The Netherlands)
Roomba-900 and the hidden curricula

This paper raises the question: why do Roomba-900 users allow these robots to gather sensitive data about their homes? Whereas Roomba-900’s manufacturer iRobot has asserted that users agree to these practices consensually, the paper develops an alternative explanation that acknowledges and examines the normalisation of data-collection. By expanding on the work of Joseph Turow, the paper argues that widespread data collection has a disciplinary effect on targeted individuals and populations, persuading them to perceive practices that compromise their ability to control the flow of their personal information as normal. The paper primarily focuses on Roomba-900 but asserts that these autonomous vacuum cleaners are representative of a broader category of smart consumer products that possess similar surveillance capabilities. Overall, the paper reflects on the changes that may occur to individuals’ understanding of informational privacy due to the normalisation of data-collection.

I begin by discussing Roomba-900’s onboard mapping system, then establish that this feature puts users’ informational privacy at risk (Nissenbaum, 2010), as it collects sufficient data about them for external actors to determine their gender, relationship status, income and spending habits. Although iRobot has assured its client base that it will only share this data with third parties with users’ consent, the paper argues that this guarantee does not safeguard informational privacy.
After demonstrating that consent in online contexts has become a weak mechanism for protecting informational privacy (Solove, 2012), I move on to discuss the social factors that influence individuals to agree to intrusive data-collection. Many scholars have argued that pervasive data-collection impacts targeted individuals’ and populations’ understanding of appropriate online behaviour, causing them to treat these practices as though they are normal or unavoidable (Wissinger, 2018; Yeung, 2018; Zuboff, 2015). Joseph Turow explains that this normalisation process occurs due to a variety of disciplinary measures he calls hidden curricula, which reward and penalise certain behaviour (Turow, 2017). This term, hidden curricula, represents the norms and values individuals must implicitly accept if they wish to access online services, including those offered by tech companies such as iRobot.

The paper then applies this line of argument to the Roomba-900 case, in order to show that the robots’ users may agree to intrusive data-collection even though they disagree with these practices on personal or ethical grounds. I conclude by highlighting that technical artefacts similar to Roomba-900 have the potential to alter their users’ perception of what counts as intrusive (Kudina & Verbeek, 2019), which may cause them to reassess their understanding of informational privacy (van de Poel, 2018).

Bibliography

  • Kudina, O., & Verbeek, P.-P. (2019). Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy. Science, Technology & Human Values, 44(2), 291–314. https://doi.org/10.1177/0162243918793711
  • Nissenbaum, H. F. (2010). Privacy in context: technology, policy, and the integrity of social life. Stanford, Calif: Stanford Law Books.
  • Solove, D. J. (2012). Privacy Self-Management and the Consent Dilemma (SSRN Scholarly Paper No. ID 2171018). Retrieved from Social Science Research Network website: https://papers.ssrn.com/abstract=2171018
  • Turow, J. (2017). The Aisles Have Eyes: How Retailers Track Your Shopping, Strip Your Privacy and Define Your Power. New Haven: Yale University Press.
  • van de Poel, I. (2018). Design for value change. Ethics and Information Technology. https://doi.org/10.1007/s10676-018-9461-9
  • Wissinger, E. (2018). Blood, Sweat, and Tears: Navigating Creepy versus Cool in Wearable Biotech. Information, Communication & Society, 21(5), 779–785. https://doi.org/10.1080/1369118X.2018.1428657
  • Yeung, K. (2018). Algorithmic regulation: A critical interrogation: Algorithmic Regulation. Regulation & Governance, 12(4), 505–523. https://doi.org/10.1111/rego.12158
  • Zuboff, S. (2015). Big other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, 30(1), 75–89. https://doi.org/10.1057/jit.2015.5

Samantha Copeland (Delft University of Technology, The Netherlands)
Epistemically disruptive technologies

This paper takes it for granted that epistemic communities are sites where knowledge is produced, shared and used, in an interdependent manner, wherein issues of justice and trust are key. Important questions in epistemology concern where the boundaries of such communities might be drawn, how such communities are formed, and the relationship between the epistemology and the ethics of such communities. I focus on a situation that is underrepresented in the field, the problem of epistemic communities in flux. Such situations often occur as the result of new technologies of knowledge production, sharing and use coming into play. Consequences can include the shifting of epistemic roles (new knowledge producers, new expertise) and the re-distribution of expertise (different expertise becomes valuable). I look at two examples, one of narrow and one of wide scope.

The first example is the introduction of deep brain stimulation (DBS) as a medical technology. Given the importance of the knowledge that such technology can produce, not only have the roles of medical researcher and neuroscientist intersected, but the role of the human participant shifts too, epistemically speaking. That is, human participants in experimental DBS trials are knowledge producers in a way that differs from the epistemic role of other human subjects in medical experimental situations. Being awake during the procedure and being the primary observers of the effects of the DBS, they are active observers, and therefore interpreters of the knowledge that can be gained from the trial. As a result, how they should be treated in these contexts should shift as well—their role has shifted from contributor of data to producer of knowledge, from passive to active, and so the interdependency relations shift, and in turn the nature and location of the trust involved in the knowledge exchange shifts, too. I note that this interdependency is not stable, however, given the context (both narrow and wide) of the use of DBS as both a therapeutic and experimental technology.

The second example I will give of an epistemic community in flux is the urban community represented in modern resilience planning and smart city applications. The interconnected, digital environment depicted in these futures embraces the connectivity that new technologies offer. But there are particular, epistemic consequences to the adoption of such connective technologies, as we have already begun to see. In a contemporary urban context, for example, with the influx of immigration, city planning in preparation for climate change (including, for instance, the expansion of transportation infrastructure into new neighbourhoods), and the increasing use of novel technologies (like IoT devices), the epistemic roles and profiles of the epistemic community members are almost constantly in flux. New knowledge producers, the need for new kinds of knowledge, and how we regard each other’s testimony and capacities for contributing to the community, epistemically speaking, are all changing in relation to one another.

From these two cases, the beginnings of a framework for understanding epistemic communities in technology-driven flux will be drawn.

Lesley-Ann Daly (Central Saint Martins, UAL, United Kingdom)
Democratising the design and ethical development of human enhancement technology

Democratising the development of emerging technologies begins with allowing more diverse voices and perspectives to be considered. By creating visualisations about the speculative impacts of Human Enhancement technologies, I aim to bring awareness and power to non-specialist audiences.

With the rising interest in Human Enhancement (HE), academics and governmental bodies have commissioned reports about the potential impacts of the technology (STOA, 2009; Bioethics, 2003; The Royal Society, 2012). These reports look to define what constitutes HE technologies (as opposed to therapeutic technologies), to group them and give insight into them (e.g. pharmaceuticals, gene editing, cognitive enhancement), and to extrapolate potential ethical issues that could arise when they are in widespread use. This investigation raises pertinent questions about what this will mean for humankind in the future and what kinds of enhancements we will actually want (Haraway, 1991; Savulescu & Persson, 2011).

In examining these research documents, it became clear that they hold a plethora of valuable information which should be accessible to non-specialists. Each document also advocated involving the public in the discussion on policy development and the future progression of the technology. The authors acknowledge the wisdom of specialists in writing these texts (scientists, ethicists, psychologists, philosophers) but also highlight the importance of the public’s opinion, as they are the future users and could have critical, differing views on these disruptive technologies – what works for one does not necessarily work for all. However, these documents are long, dense and use language that is not necessarily understandable for a non-academic audience. As a designer, I decided to use data visualisation techniques to create a piece that conveys the potential impacts of HE technology in a more accessible medium. This would let the public gain more knowledge about the technology and its ethical considerations, thus allowing them the chance to contribute an informed opinion to the discussion on HE policy development at this crucial early stage (Tsekleves et al., 2017).

For this talk I will present the six visuals that I have created in collaboration with graphic designer Gill Brown (Daly, 2019). These highlight some of the key issues raised in the reports, under the headings Therapy vs Enhancement; Human Rights; Regulatory Considerations; Agency & Accountability; and Sociocultural Impacts, alongside a visual description of Human Enhancement itself. I will talk about how important it is to speculate about the consequences of emerging technologies, in order to make changes in the present that can impact their development. I will also articulate how important it is to visualise these concepts, through Speculative Design, in order to engage and provoke wider audiences, spreading awareness to groups outside of the science, tech and design bubble (Auger, 2013; Disalvo, 2015). I will show that communicating ideas through a visual format allows for wide dissemination, digitally or physically. And I will present some of the insights gained from my own non-specialist workshops, which use visual materials to incite discussion with diverse audiences (Blaikie, 2010). Lastly, I will emphasise how designers, and others, have the power to facilitate the democratisation of decision making for ethics and design, by providing a platform for diverse voices to be heard.

Bibliography

  • Auger, J. (2013) ‘Speculative design: crafting the speculation’, Digital Creativity, 24, pp. 11-35.
  • The President’s Council on Bioethics, Kass, L.R. (chair) (2003) Beyond Therapy: Biotechnology and the Pursuit of Happiness. New York: Harper Perennial.
  • Blaikie, N.W.H. (2010) Designing social research: the logic of anticipation. 2nd edn. Cambridge: Polity.
  • Daly, L. (2019) Impacts of Human Enhancement Technology. Available at: https://www.lesleyanndaly.com/impacts-of-he
  • Disalvo, C. (2015) Adversarial Design. Rev. edn. Cambridge: MIT Press.
  • Hayles, N.K. (1999) How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
  • Haraway, D. (1991) ‘A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century,’ in Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, pp.149-181.
  • Kensing, F. and Blomberg, J. (1998) ‘Participatory Design: Issues and Concerns’, Computer Supported Cooperative Work, 7, pp. 167-185.
  • Savulescu, J. and Persson, I. (2011) ‘Unfit for the Future?’, in Savulescu, J., Meulen, R. and Kahane, G. (Eds.) Enhancing Human Capacities. U.K: Wiley-Blackwell, pp. 486-500.
  • STOA Science and Technology Options Assessment (2009) Human Enhancement Study. Brussels: European Parliament.
  • The Royal Society, (2012) Human Enhancement and the Future of Work. UK: The Royal Society.
  • Tsekleves, E., Darby, A., Whicher, A., Swiatek, P. (2017) ‘Co-designing Design Fictions: A New Approach for Debating and Priming Future Healthcare Technologies and Services’. Archives of Design Research 30, pp. 5-21.

Matthew Dennis (Delft University of Technology, The Netherlands)
Cultivating ourselves online: Reinventing self-care technology for the 21st century

Recently an increasing number of apps have been designed to cultivate human flourishing in ways that aim to disrupt and displace traditional methods. Once downloaded, such apps guide and monitor how we cultivate our well-being, notifying us with a buzz or a beep when we act in ways that harm our life-goals, offering us tips on how to lift our emotional state, or advising us on how to fine-tune our exercise regimes. The developers of these self-care apps claim that they can radically improve the practice of self-care, and that these products offer powerful new ways to cultivate human flourishing.

Today such self-care apps are created and developed by teams of programmers, entrepreneurs, and industry executives using psychological models of well-being derived from self-help literature. Not only are many of these teams strikingly demographically homogenous, they also understand human flourishing almost exclusively as the subjective experience of well-being, ignoring other factors that philosophers believe are essential for a flourishing human life. Bringing the research of ethicists and practical philosophers into discussion with those who claim that app-based technologies can cultivate human flourishing can show how technologies of self-care can be radically enriched to better serve our practical lives.

My presentation will show that self-care apps can be improved by drawing on the philosophical understanding of human flourishing. By thinking more deeply about what flourishing involves, we can design self-care apps in ways that enable their end-users to engage in more protracted and meaningful processes of self-development. To do this, this presentation will explore the various conceptions of human flourishing currently operating in the app-based self-care industry, and will propose a series of ethically-informed directives detailing precisely how these conceptions can be enriched. I will also comment on the methods app-based self-care companies currently use to increase flourishing, showing how app-based tools can be improved with the resources of the philosophical tradition of self-cultivation.

Key research questions:

  1. How is human flourishing currently understood by app-based self-care companies? Do these various conceptions marginalise or exclude what philosophers identify as vital aspects of the flourishing life?
  2. Which aspects of the flourishing life does app-based self-care technology currently neglect? How can philosophical reflection enrich, complicate, and improve existing understandings of flourishing to make them more robust and dynamic?
  3. How do current products of app-based self-care companies cultivate flourishing? Does app-based technology have any disruptive advantages over the kinds of methods of self-cultivation explored in the philosophical tradition?
  4. Does recent philosophical research on self-cultivation offer new conceptual resources to re-think how we can cultivate ourselves? How can these resources help us meet the demands of contemporary life?

References

  • Brey, P., Briggle, A., Spence, E. (2012). The Good Life in the Technological Age. London: Routledge.
  • Bynum, T. (2006). ‘Flourishing Ethics.’ Ethics and Information Technology 8 (4): 157–173.
  • Couldry, N. (2013). ‘Living Well and Through Media.’ In Ethics of Media, N. Couldry, M. Madianou, A. Pinchevski (eds.), 39–56. New York: Palgrave Macmillan.
  • Rodogno, R. (2015). ‘Well-being, Science, and Philosophy.’ Well-being in Contemporary Society. J. H. Soraker, J. van der Rijt, J. Boer, P. Wong, P. Brey (eds.), London: Springer. 39–57.
  • Vallor, S. (2010). ‘Social Networking Technology and the Virtues’. Ethics and Information Technology. 12 (2): 157–70.
  • Vallor, S. (2012). ‘Flourishing on Facebook: Virtue Friendship and New Social Media’. Ethics and Information Technology, 14 (3): 185–99.
  • Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press.
  • Verbeek, P. (2005). What Things Do: Philosophical Reflections on Technology, Agency and Design. University Park: Pennsylvania State University Press.
  • Verbeek, P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press.

Anna Deplazes Zemp (University of Zurich, Switzerland)
Biotechnology in conservation biology: is this protection of naturalness?

Avian malaria is a bird disease that endangers certain bird species in Hawaii, and it is expected that global warming will increase its negative impacts. Some conservation biologists suggest a strategy involving a synthetic biology tool to protect the endangered birds. This tool would genetically modify mosquitos, because they are the transmitters of the lethal disease.

Is it a contradiction to apply such technological interventions in order to protect nature? The answer to this question depends very much on what we mean by ‘nature’ or ‘naturalness’. I will present a broad understanding of ‘nature’ and ‘naturalness’ which should help us to understand in what way the use of biotechnology in nature conservation is contradictory and in what respect it is not.

To begin with, I will introduce my perspective-based definition of ‘nature’, which acknowledges that humans are products of nature. However, as humans we look at the rest of nature as something that is different from us, and something that can act and react independently from any human plan. In that sense, nature is not only understood as a passive object but also as an active force. This distinction between active and passive nature takes up certain elements of Spinoza’s natura naturans and natura naturata (Spinoza 2017/1677). These two aspects of nature lead to different types of naturalness.

For the discussion of naturalness I start with Dieter Birnbacher’s distinction between genetic and qualitative naturalness (Birnbacher 2014). According to Birnbacher, something is natural in the genetic sense if it has a natural origin and underwent a natural ‘genesis’. In contrast, something is natural in the qualitative sense if it appears natural, in other words, if no human agency or plan is observable. While genetic naturalness is a past-oriented understanding of naturalness, qualitative naturalness is present-oriented. Therefore, it seems to be the understanding of nature as a passive object that is predominant when we think of qualitative naturalness, whereas in the case of genetic naturalness the active forces of nature in the past are as relevant as the present nature as an object resulting from these forces.

However, drawing on the understanding of nature as an active force, I suggest that there is a third type of naturalness, which is future-oriented. I use the term ‘prospective’ to refer to this type of naturalness. The prospective naturalness of an object concerns its future: the more impact the active forces of nature have on such an object, the more natural it is in the prospective respect.

Returning to the avian malaria case, the different understandings of ‘nature’ and ‘naturalness’ can help to explain why the idea of using biotechnology for nature conservation has been so controversially discussed.

If successful, the biotechnological intervention could protect qualitative naturalness, or nature as a passive object. However, human intervention, perceived as antagonistic to nature’s activity, would destroy genetic naturalness. Yet if human intervention is understood as a collaborative rather than an antagonistic force to nature, it could be argued that this conservation strategy could strengthen prospective naturalness. This could be the case under the condition that the endangered bird species would be ‘left to nature’s active forces’ after the intervention.

References

  • Piaggio, Antoinette J. (2017): Is it time for Synthetic Biodiversity Conservation? Trends in Ecology & Evolution 32(2) pp. 97-107
  • Spinoza, Baruch (2017/1677): The Ethics (Ethica Ordine Geometrico Demonstrata). https://www.gutenberg.org/files/3800/3800-h/3800-h.htm
  • Birnbacher, Dieter (2014): Naturalness: Is the “Natural” Preferable to the “Artificial”? University Press of America.

Lily Frank (Eindhoven University of Technology, The Netherlands)
Health by compulsion? When artificial intelligence meets behaviour change technology

Behaviour change technologies (BCTs) are devices or technological information systems intentionally designed to induce behaviour change in users using a wide range of persuasive methods, for example, by providing feedback on one’s current physical activity level or information on the activity levels of the user’s peers (Fogg 2009; Oinas-Kukkonen 2010). Health-related BCTs have the potential to address some of the most significant contributors to unhealthy states that have to do with social and lifestyle factors such as diet, exercise, and medication adherence, which do not directly involve access to medical care, thus improving individual health and reducing the social burdens and costs of chronic disease (Rosser et al. 2009). Artificial intelligence (AI) has the potential to amplify the effectiveness of existing health-related behaviour change techniques and to expand the repertoire of techniques that exist (Michie et al. 2017).

Simultaneously, the use of AI in BCTs raises significant new ethical concerns and complicates ethical tensions that have already been identified in the field of behaviour change. In the health care context, patients expect that the ethical principles of respect for patient autonomy, the safeguarding of privacy and confidentiality of health data, and robust informed consent will be honoured. However, BCTs for health that use AI may challenge these expectations. Most centrally, AI-enhanced BCTs may be designed in ways that fail to respect patients’ or users’ autonomy; that is, they may interfere with users’ ability to make their own well-informed decisions and live their lives according to their own values and preferences. Secondly, BCTs that use AI may function by analysing very large data sets gathered from sources such as electronic health records, public health statistics, or even online consumption and social media patterns. For this reason relevant privacy concerns must be addressed. Third, to the extent that BCTs use AI in ways that are not transparent to users, how is it possible for users to consent to their use? In this talk, I will map several of the ways in which AI-enhanced BCTs for health challenge the traditional ethics of health care.

Benedetta Giovanola and Simona Tiribelli (University of Macerata, Italy)
ICTs, algorithms and social justice: Framing new ethical challenges

That algorithms-based ICTs are socially disruptive technologies is today no longer in question. They are not only increasingly blending into our informational environment – being applied in more and more contexts, from recruitment to healthcare, from delivering goods to ensuring national defense, as personal facilitators or tools for justice in courtrooms, etc. – but they are powerful transformative forces that deeply reshape the very structure of our society: from how we perform daily tasks to institutional practices and decisions. They mediate our interactions and social relations; they influence politics as well as civic and social engagement. Finally, they play a key role in the way in which we conceive of and promote social justice.

This paper aims to examine the relationship between social justice and algorithms-based ICTs by discussing the ethical challenges raised by algorithms’ impact on the key dimensions of social justice.
In order to achieve this goal, the first part of the paper elaborates a complex conception of social justice, able to combine distributional and socio-relational aspects (Giovanola 2018). This conception is developed on the basis of the notion of respect and draws insights from John Rawls’s theory of justice (Rawls, 1971), going beyond most of the philosophical reflection on social justice, which is mainly focused either on the distributive dimension and the role played by institutions (Dworkin, Cohen, Arneson, etc.) or on the socio-relational dimension (Anderson, Scheffler, Honneth, etc.). The second part of the paper sheds light on how the huge predictive power of ubiquitous profiling and filtering algorithms can deeply affect the constitutive dimensions of social justice, by reshaping i) the institutional practices concerning the distributive dimension and ii) the recognition practices at the root of the socio-relational dimension.
Specifically, the paper shows how the distributive dimension is today increasingly delegated to autonomous algorithmic decision-making processes which, inasmuch as they are unceasingly fueled by huge amounts of personal and historical data, instead of eliminating or mitigating human biases might silently (hence more dangerously, hidden behind the supposed neutrality of data) continue to perpetuate and exacerbate social discrimination and stigmatization, with the consequence of standardizing social injustices in juridical, financial, recruiting, education, and healthcare domains, among others. Moreover, our analysis argues for the crucial role played by algorithms in transforming our social and political dimension by acting specifically at the epistemological level (beliefs) of human autonomy: we show in particular how algorithms, by mediating, reshaping, and filtering – quantitatively and qualitatively – our informational environment as the availability of information (Pariser 2011, Sunstein 2017), affect the way in which we develop our moral and social identity: the way in which we understand ourselves and, above all, understand and consider each other – a mutual recognition that lies at the core of social justice.

Thomas Grote (University of Tübingen, Germany)
Machine learning in healthcare and the perils of trustworthiness

Driven by advances made in deep learning techniques and by medical data increasingly being collected digitally, machine learning is expected to transform professional healthcare. Indeed, a plethora of high-profile scientific publications report machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations (reviewed by Esteva et al. 2019). This has sparked interest in deploying the relevant algorithms with the aim of achieving more reliable and efficient decision-making. However, as current machine learning algorithms are ‘black boxes’ (at least those based on DNNs), their involvement in healthcare comes at the expense of uncertainty. For instance, an algorithm might diagnose certain forms of malignant skin cancer with high accuracy, yet clinicians are unable to explain the relevant diagnosis or even to properly assess its level of confidence. At first glance, this seems to be a predominantly epistemic issue. However, it is closely intertwined with further normative problems, such as the attribution of accountability in light of diagnostic errors.

How then shall we resolve the trade-off between accuracy and uncertainty? In this paper, I am going to scrutinize a proposal by Alex London (2019), who suggested that the appropriate (epistemic and normative) standard for machine learning algorithms should be their trustworthiness, as opposed to their being explainable. Possible criteria for trustworthiness might be our having sufficient evidence to believe in the reliability and robustness of said algorithms. There are broadly two possible strands of justification for conceiving of trustworthiness as an appropriate standard. On consequentialist terms, it might be argued that if trustworthiness is ensured, the benefits of involving machine learning algorithms in healthcare significantly outweigh possible non-beneficial effects. On a different note, uncertainty seems to be a common feature of medical practice – even in the absence of machine learning. For instance, drugs are prescribed on the basis of evidence that they are likely to cure a disease. Oftentimes, this happens in ignorance of the underlying biological mechanisms (Topol 2019). Moreover, even though the gold standard for medical diagnosis is an iterative process of information gathering and hypothesis testing, in practice it still hinges on the intuitions of clinicians (Braude 2009). In short, trustworthiness might be deemed the ‘modus operandi’ of healthcare.

Both justifications strike me as flawed. For this reason, I will discuss different objections to trustworthiness as an appropriate standard for machine learning in healthcare. Drawing on Katherine Hawley’s (2014) account of trust, I will start with a high-level critique of the notion of ‘trustworthiness’ with respect to machine learning algorithms. Here, my conjecture is that, within the given context, trustworthiness is a highly ambivalent concept. At worst, placing trust in algorithms might even be a category mistake. At best, it amounts to a lopsided distribution of risks, at the expense of patients. Building on this, I will discuss different cases of medical diagnosis where involving trustworthy machine learning algorithms potentially backfires. In particular, it might reinforce paternalistic patterns of medical decision-making, in the guise of a ‘computer knows best’ attitude (McDougall 2019), and threatens to erode the epistemic authority of clinicians.

Janna van Grunsven (Delft University of Technology, The Netherlands)
Whose voice is it anyway? Autism, AAC technology, and the problem of epistemic injustice

To varying degrees of severity, communicative challenges are ubiquitous among persons on the autism spectrum. Augmentative and Assistive Communication technologies [or AAC technologies] have the potential to mediate and scaffold the communicative skills and capacities of autistic persons such that otherwise unavailable forms of self-expression and self-other relations are made possible. Arguably, the extent to which a given AAC technology can play an emancipatory, transformative role in the everyday lives and participatory practices of persons on the spectrum depends on how it is designed. Though the research that assesses the use and design of AAC technology in the context of autism is still relatively underdeveloped, convincing arguments have been made for a bottom-up interdisciplinary approach to the design of AAC technology that draws on “developmental psychology, visual arts, human–computer interaction, artificial intelligence, education, and several other cognate disciplines” (cf. Porayska-Pomsta et al 2011). In this paper I present the fast-growing philosophical research area of epistemic injustice (cf. Fricker 2007) as an important resource for those who develop AAC technologies for autistic persons. Epistemic injustice in its broadest sense refers to “those forms of unfair treatment that relate to issues of knowledge, understanding, and participation in communicative practices” (Kidd et al 2017, my italics). The main aim of my paper is to show how AAC technology can be both a tool for ameliorating epistemic injustice and a technology capable of sustaining forms of epistemic injustice. Broadly put, AAC technologies have the potential to ameliorate forms of epistemic injustice among autistic persons insofar as they promote participation in communicative practices by lending a voice to those who might otherwise not be heard. At the same time, AAC technologies have the potential to further entrench forms of epistemic injustice among autistic persons to the degree that they impose the linguistic and expressive norms characteristic of typically developed persons onto those who are autistic. How exactly an awareness of the notion of epistemic injustice can and should inform engineers building AAC technologies is a complicated matter that is not the main aim of the paper. I do, however, offer some concrete recommendations with an eye to this question towards the end of my paper.

References

  • Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
  • Kidd, I. J., Medina, J., & Pohlhaus, G. (eds.) (2017). The Routledge Handbook of Epistemic Injustice. Routledge.
  • Porayska-Pomsta, K., Frauenberger, C., Pain, H., Rajendran, G., Smith, T., Menzies, R., … Lemon, O. (2011). Developing technology for autism: an interdisciplinary approach. Personal and Ubiquitous Computing, 16(2), 117-127. DOI: 10.1007/s00779-011-0384-2

Ariel Guersenzvaig (ELISAVA Barcelona School of Design and Engineering, Spain)
The goods of design: a regulative ideal for the design profession

That ethical reflection must be a part of the design process is an uncontroversial conclusion that is widely shared by many scholars (1). The question remains, however, how designers can integrate ethics into their professional practice.

This paper explores the urgent need for a design professional ethics and offers a ‘regulative ideal’ (2) for the design profession; that is, a teleology of design that can purposively guide the designer and contribute to advancing the profession. The paper does not aim to definitively settle the discussion around the goals that ought to be pursued, which, just like most things in ethics, will not be definitively resolved. Its goal is more modest: it aims to advance the public discussion around the socio-ethical implications of design professional practice.

Designers occupy a prominent role in how products, services or environments get from abstract idea to concretion, although they are never the sole actors. Despite its importance, and unlike medicine or law, professional design lacks widespread principles and normative frameworks for addressing ethical issues. Codes of design ethics, however useful for prompting discussions, rarely go beyond generalities such as preventing harms and respecting human rights. Moreover, ethics is rarely incorporated into the design of artefacts on a wide-scale basis (3).

The paper argues that the cultivation of ethics has to come from within the practice, as ethics ‘cannot come from on high, as it were, to articulate guidelines for action’ (4). Practitioners are best suited to recognize the internal goods that are inherently associated with their practice, and it is they who are best suited to determine what excellent designing is. To obtain these internal goods, the practitioner has to aim to excel at the practice that produces them. These internal goods form the real purpose of that practice – the telos – in which it finds justification for its existence.

The paper poses the question: what goods ought professional design to pursue? To answer it, it draws on Alasdair MacIntyre’s philosophical anthropology (5) and carries out an enquiry into the distinct contribution that the design profession makes to society. Asking what goods are achieved when professional design is performed at its best is another way of asking what goods professional design ought to pursue.

The general arguments the paper makes are:
1) Design is a practice whose main internal good lies in intentionally resolving problems through the creation of plans for the human-made world; and
2) The design profession is intrinsically related to promoting some key aspect of people’s well-being.
Based on these two premises, a plausible regulative ideal for the design profession is proposed:
3) Professional designers ought to pursue the promotion of people’s well-being by designing a world in which people can flourish.
This regulative ideal is a normative disposition. It is not a representation of what professional design currently is, but what professional design can be at its best, i.e. it is a standard of excellence that guides the designers’ journeys towards the good.

References

  1.  Glenn Parsons, The Philosophy of Design (Cambridge: Polity, 2016); Ibo Van de Poel and Peter Kroes, “Can Technology Embody Values?,” in The Moral Status of Technical Artefacts, ed. Peter Kroes and Peter-Paul Verbeek (Dordrecht: Springer, 2014); Peter-Paul Verbeek, Moralizing Technology: Understanding and Designing the Morality of Things (Chicago: The University of Chicago Press, 2011); Sheila Jasanoff, The Ethics of Invention: Technology and the Human Future (New York: W.W. Norton & Company, 2016).
  2. Justin Oakley and Dean Cocking, Virtue Ethics and Professional Roles (Cambridge: Cambridge University Press, 2001), 74.
  3. Aimee Van Wynsberghe and Scott Robbins, “Ethicist as Designer: A Pragmatic Approach to Ethics in the Lab,” Science and Engineering Ethics 20, no. 4 (2014), 948-49.
  4. Carl Mitcham, “Ethics into Design,” in Discovering Design: Explorations in Design Studies, ed. Richard Buchanan and Victor Margolin (Chicago: The University of Chicago Press, 1995), 183.
  5. Alasdair MacIntyre, After Virtue: A Study in Moral Theory (Notre Dame: University of Notre Dame Press, 2007), 191.

Agata Gurzawska, Anais Resseguier, Rowena Rodrigues and David Wright (Trilateral Research, United Kingdom and Ireland)
Ethics by design and data-driven policing technologies

This paper proposes a methodology to inform the ethical design and development of data-driven policing technologies. Data-driven policing technologies include those that help (a) characterise crime patterns across time and space, (b) leverage this knowledge for the prevention of crime and disorder [Fitzpatrick, 2019] and (c) identify criminals and terrorists. For example, law enforcement agencies (LEAs) use artificial intelligence (AI) in video and image analysis, facial recognition, biometric identification, autonomous drones and other robots, and predictive analyses to forecast crime “hot spots” [OSCE 2019]. AI algorithms can help uncover blockchain transactions on the dark web involving the sale of illegal weapons, drugs or crime as a service, and can be used for multilingual automatic speech recognition, natural language processing and real-time network analysis. AI can provide advanced near-real-time analysis of multiple big data streams to capture the structure, interrelations and trends of terrorists, cybercriminals and organised crime groups for enhanced situational awareness. AI can power knowledge discovery techniques, big data analytics, and cognitive machine learning to improve digital and forensics capabilities for the prediction, detection and management of crime.

Though they have various advantages, such as helping to identify malefactors efficiently and quickly in the face of budgetary constraints, data-driven policing technologies can also have deleterious societal and ethical impacts. The media, academic community and civil society organisations (CSOs) have criticised such technologies due to the lack of transparency about which data sources were used, how the analyses work and how they are used [Norwegian Board of Technology, 2015]. Moreover, they have criticised algorithmic inaccuracy, discriminatory data and results, automation bias [Ferguson 2012], potential stigmatisation of people (e.g., vulnerable or minority groups and individuals) and places (hot spots) [Norwegian Board of Technology, 2015], underemphasis of assessment and evaluation, and the overlooking of fundamental and privacy rights [Perry et al., 2013]. These arguments are not ill-founded. Existing data-driven policing tools, such as COMPAS and PredPol, are widely discussed and raise ethical and human rights controversies. It is, thus, critical that designers and developers proactively adopt an ethics by design approach during the research and development stages to ensure that data-driven policing technologies are both ethically and scientifically aligned. Our proposed methodology draws on research in the EU-funded COPKIT, TITANIUM, ROXANNE, INSPECTr and PREVISION H2020 projects, all of which involve the development of AI technologies for LEAs, and on parallel work on ethically-aligned design (e.g., the IEEE principles of ethically aligned design – human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse, competence for safe operation [IEEE 2019] – and trustworthy AI [HLEG AI 2019]).

Our methodology includes a self-assessment tool to help developers design data-driven policing technologies so that they are better equipped to identify and address potential ethical impacts and consequences. This tool will complement human rights impact assessments and/or privacy impact assessments or could inform such activities. By taking ethics into account in the design stages of the technology, our self-assessment tool will help build public trust in LEA use of such technologies and even minimise negative implications [Policy Connect and the All-Party Parliamentary Group on Data Analytics, May 2019].

References

  • Europol, 2017 Internet Organised Crime Threat Assessment (IOCTA), The Hague, 2017, p. 40. https://www.europol.europa.eu/activities-services/main-reports/internet-organised-crime-threat-assessment-iocta-2017
  • Ferguson, A. G., Predictive policing and reasonable suspicion. Emory LJ, 62, 259, 2012.
  • Fitzpatrick, D. J., Gorr, W. L., & Neill, D. B. (2019). Keeping Score: Predictive Analytics in Policing. Annual Review of Criminology.
  • High-Level Expert Group on Artificial Intelligence (HLEG AI), Ethics Guidelines for Trustworthy AI, April 2019. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
  • Norwegian Board of Technology, “Predictive policing can data analysis help the police to be in the right place at the right time?”, September 2015. https://teknologiradet.no/en/publication/predictive-policing-can-data-analysis-help-the-police-to-be-in-the-right-place-at-the-right-time/
  • OSCE, “Artificial Intelligence and Law Enforcement: An Ally or an Adversary?”, Concept Paper, presented at the 2019 OSCE Annual Police Experts Meeting, Vienna, 23-24 September 2019. https://polis.osce.org/2019APEM
  • Perry, Walter L., Brian McInnis, Carter C. Price, Susan Smith, and John S. Hollywood, Predictive Policing: Forecasting Crime for Law Enforcement. Santa Monica, CA: RAND Corporation, 2013. https://www.rand.org/pubs/research_briefs/RB9735.html
  • Policy Connect and the All-Party Parliamentary Group on Data Analytics, Building ethical data policies for the public good: Trust, transparency and tech, May 2019. https://www.policyconnect.org.uk/sites/site_pc/files/report/1214/fieldreportdownload/raa35577ipcldatatechethicsreportlsinglepagesl0519.pdf
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. IEEE, 2019. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html

Julia Hermann (Eindhoven University of Technology, The Netherlands)
The disruptive potential of the artificial womb

In 2017, US scientists succeeded in transferring lamb foetuses to what comes very close to an artificial womb: a “biobag”. All of the lambs emerged from the biobag healthy. The scientists believe that about two years from now it will be possible to transfer preterm human babies to an artificial womb, in which they have greater chances of surviving and developing without a handicap than in current neonatal intensive care (Couzin-Frankel 2017). At this point in time, developers of the technology, such as Guid Oei, gynaecologist and professor at Eindhoven University of Technology, see the technology as a possible solution to the problem of neonatal mortality and disability due to preterm birth. They do not envisage uses of it that go far beyond that. Ethicists, however, have started thinking about the use of artificial womb technology for very different purposes, such as being able to terminate a risky pregnancy without having to kill the foetus, or strengthening the freedom of women. If we consider such further-going uses, the socially disruptive potential of this technology becomes apparent.

In my talk, I will ask what might happen if it were to become possible to use the artificial womb for the whole gestation process, so that women, or parents, had the choice between “pregnancy as usual” and delegating gestation to the artificial womb. I will argue that ethicists need to reflect upon this possibility and the disruptive consequences it would have. I will then address the technology’s possible impact on gender roles and gender justice. If it were no longer “naturally” the mother who had to be pregnant and give birth, this would remove support for the widespread view that it is also “naturally” the mother who is most important for a child in its first years, and therefore the main caregiver. The choice between being pregnant and using an artificial womb could bring with it a new division of roles between mothers and fathers. Once the baby was “born” from the artificial womb, it would have to be taken care of just like babies born in a conventional way. But why should this task fall mainly on the mother, rather than on the father? It is imaginable that men would come to be as involved in parenting as women, with the effect that employers would have no reason to favour a male applicant over a female applicant on the basis of the fear that the woman might want to become a mother.

The practice of delegating the gestation process to an artificial womb would have far-reaching effects. It would, for instance, affect the emotional attachment between parents and child, the parent-child relationship more generally, the experience of parenthood, the self-understanding of mothers and fathers, and the self-understanding of women and men more generally.

Bibliography

  • Cohen, I.G. (2017), “Artificial Wombs and Abortion Rights”, Hastings Center Report 47/4.
  • Couzin-Frankel, Jennifer (2017, April 25), “Fluid-filled ‘biobag’ allows premature lambs to develop outside the womb”, Sciencemag.org, doi:10.1126/science.aal1101, retrieved from: https://www.sciencemag.org/news/2017/04/fluid-filled-biobag-allows-premature-lambs-develop-outside-womb
  • Kudina, Olya and Verbeek, Peter-Paul (2019), “Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy”, Science, Technology, & Human Values 44/2: 291-314.
  • Landau, Ruth (2007), “Artificial Womb versus Natural Birth: An Exploratory Study of Women’s Views”, Journal of Reproductive and Infant Psychology 25/1: 4-17.
  • Romanis, Elizabeth Chloe (2018), “Artificial Womb Technology and the Frontier of Human Reproduction: Conceptual Differences and Potential Implications”, Journal of Medical Ethics 44: 751-755.
  • Simonstein, Frida (2006), “Artificial Reproduction Technologies (RTs) – All the Way to the Artificial Womb?”, Medicine, Health Care and Philosophy: A European Journal 9/3: 359-365.

Christian Illies (University Bamberg, Germany)
Silent might: How manipulative information and communication technologies disrupt the human condition

Power is the ability to make others do things even against their (original) will. Its basic forms are force and coercion, but it can also be based upon good arguments, authority, or technology (Popitz 1986). And on manipulation, sometimes wheedlingly referred to as ‘public relations’ (Ivy Lee), ‘propaganda’ (Edward Bernays), ‘nudging’/‘choice architecture’ (Thaler and Sunstein 2008), or ‘mediation’ (Verbeek). Manipulation influences people to do things by modulating the emotional attraction of certain ends. As a consequence, some options (actions, beliefs, etc.) become more appealing (or unappealing) to the manipulated, who are then more likely to choose them. The manipulated, however, remain free to choose or not to choose these options (cf. Fischer/Illies 2018). Manipulation works mostly subliminally or remains unnoticed; I will, therefore, call manipulation the Silent Might. All forms of power overlap, and technology in particular serves the other forms. Weapons, for example, are means of coercion, influencers exercise their authority via social media, and artifacts can make certain options more attractive (Illies/Meijers 2009). Technology has also become a prime means of manipulation.

Homo sapiens has always had social expertise allowing subtle manipulation of others within the social group, an ability that might even have driven human cognitive evolution (Byrne/Whiten 1988; critical: Lyons et al. 2010). History provides examples of highly talented manipulators like Caesar, Rasputin, and Hitler. Since the beginning of the 20th century, however, manipulation has become a systematically developed and refined skill that is applied by politicians and lobby groups, governments, policy-makers, and economists. This development results from a specific understanding of politics in modern mass democracies (Lippmann 1922), the possibilities of industrial mass production, the insights of psychology and psychoanalysis, the promotion of consumerism (Horkheimer/Adorno 1944), and novel technologies/SDTs. In particular, modern information and communication technologies, the “Infosphere” (Floridi 2016), have strengthened the possibilities of manipulation.

Silent Might can serve radical reforms and behavioral transformations that are urgently needed in the face of today’s ecological and climate crisis. It is arguably better to use Silent Might to build up a sustainable society than more aggressive forms of power like prohibitions and coercion. However, the long-term effects on the human condition seem problematic and are possibly not psychologically sustainable. Based upon the “Strength Model of Self-Control” (Baumeister et al. 1994, Baumeister/Vohs/Tice 2007) and on insights from contemporary action theory (Vogler 2002), I will argue that a systematic use of Silent Might, especially in the Infosphere, will undermine and disrupt the individual ability to act in a self-controlled and rational manner. First, because manipulation uses external means to motivate internally and thereby weakens the strength of the will. Secondly, the resulting actions or choices often create cognitive dissonance; they are not in accordance with convictions and aims held by the manipulated. That will undermine the internal consistency of the manipulated. Thirdly, information and communication technologies are particularly powerful because of the central role of understanding and communication for humans. Our social expertise made humans evolve – and the Infosphere with its Silent Might disrupts this expertise fundamentally.

Naomi Jacobs (Eindhoven University of Technology, The Netherlands)
Capability sensitive design for health & wellbeing technologies

The increasing awareness of the impact that technology design can have on the support or undermining of values has led to the development of various ‘ethics by design’ approaches. One of the most prominent and influential ‘ethics by design’ approaches is Value Sensitive Design (VSD). VSD is an approach “that aims to address and account for values in a structured and comprehensive manner throughout the design process” (Friedman et al. 2013, p.55).

But despite being a highly promising ‘ethics by design’ approach, VSD faces various challenges (for a detailed discussion of these challenges see: Manders-Huits 2010; Borning and Muller 2012; Davis and Nathan 2015; Jacobs and Huldtgren 2018). The three most prominent challenges that VSD faces are: (1) obscuring the voice of its practitioners and thereby claiming moral authority that is unfounded, (2) taking stakeholder values as leading values in the design process without questioning whether what is valued by stakeholders also ought to be valued, and (3) not being able to provide normative justification for making value trade-offs in the design process (Manders-Huits 2010; Jacobs and Huldtgren 2018). To overcome these challenges, Manders-Huits (2010) and Jacobs and Huldtgren (2018) have argued that VSD practitioners need to complement the VSD method with an ethical theory.

The aim of this presentation is to strengthen the approach of VSD by complementing it with the ethical theory of the Capability Approach (CA) so that VSD overcomes the various challenges it currently faces. This presentation thereby contributes to the further development of the approach of VSD in particular, and the domain of ‘design for values’ in general.

The presentation introduces ‘Capability Sensitive Design’ (CSD): a combination of VSD with the normative foundation of the CA. Subsequently, the approach of CSD is applied to the design case of an Artificial Intelligence (AI)-driven therapy chatbot that aims to improve people’s mental health. Through the CSD analysis of the design of the AI-driven therapy chatbot, the exact workings of the CSD approach are explicated, as well as the merits that paying attention to valuable capabilities right from the start of the design process can have for the design of such an AI-driven therapy chatbot.

Bibliography

  • Borning, A. & Muller, M. (2012) Next Steps for Value Sensitive Design. CHI 2012 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Accessed December 9, 2016, from http://dl.acm.org/citation.cfm?doid=2207676.2208560.
  • Davis, J., & Nathan, L. P. (2015). Value sensitive design: Applications, adaptations, and critiques. Handbook of ethics, values, and technological design: Sources, theory, values and application domains. Dordrecht: Springer. pp. 11–40.
  • Friedman, B., Kahn Jr. P. H., Borning, A., & Huldtgren, A. (2013) Value Sensitive Design and Information Systems. In: Early Engagement and New Technologies: Opening Up the Laboratory. Ed: Doorn, N., Schuurbiers, D., van de Poel, I., Gorman, M.E. Dordrecht: Springer. pp.55-95.
  • Jacobs, N. & Huldtgren, A. (2018) Why Value Sensitive Design Needs Ethical Commitments. Journal of Ethics and Information Technology.
  • Manders-Huits, N. (2010) What Values in Design? The Challenge of Incorporating Moral Values Into Design. Science and Engineering Ethics. 17(2), pp.271-287.

Sean Jensen and Philip Jansen (University of Twente, The Netherlands)
Robotic & AI-assisted devices that enhance athletic performance: Anticipating issues that could impact values in sports

Consider two hypothetical Olympic athletes: Aubrey and Bárbara. Aubrey lives in Canada and trains with a world-class professional who has coached numerous athletes to win gold medals. Bárbara lives in Brazil and relies heavily on a free application on her smartwatch to boost her motivation using an AI-based reward system. Aubrey and Bárbara compete in the same competition and achieve similar placings. Also consider two more hypothetical Paralympic athletes: Asbel and Mark. Asbel, who is Kenyan, was born without toes on both feet and uses relatively simple prostheses to simulate some of the functionality of his missing toes. Mark, who is German, had both his lower legs amputated after a childhood accident and now uses personalized prosthetic legs that incorporate advanced AI and robotics technology. Mark’s artificial legs were designed to perform at the level of an elite able-bodied athlete, and happen to work slightly better than his biological lower legs would have. However, given Asbel’s elite-level cardiorespiratory capacity for endurance (as evidenced by his VO2max), Asbel and Mark perform similarly in competitions. What are the important differences between the Olympic and Paralympic cases above?

As the AI and robotics landscape becomes increasingly complex, we anticipate new options for athletes to safely, and oftentimes cheaply, enhance their abilities. These options may include AI-based training and motivation-boosting wearables (e.g., smart watches or earpieces that can persuade the user not to give up) [1], various kinds of robotics and AI-infused prosthetics (e.g., artificial lower limbs that use AI and robotics technology for enhanced strength and optimal articulation of the artificial joints) [2], as well as other technologies.

Intuitively, it may seem as though Bárbara’s use of a smartwatch application to increase her motivation could entail taking a short-cut that diminishes what we value in traditional training like Aubrey’s, even though both roads lead to the same outcome. Similarly, it may seem that Mark’s use of advanced AI and robotics-based leg prostheses makes for less praiseworthy athletic achievements than Asbel’s more limited use of simpler prosthetics. What may ground such intuitions? Do such intuitions constitute a fair assessment?

In this paper, we explore the impact on sports of emerging AI and robotics technologies that could enhance human physical and mental performance, as well as reasons for accepting or rejecting the use of such technologies in sports. First, we will discuss the potential performance-enhancing effects of AI-based motivation-boosting wearables and various kinds of robotics and AI-infused prosthetics. Subsequently, we will apply a case-based approach to analyze how these technologies may conflict with (or strengthen) important values (e.g., fairness, respect, integrity, responsibility) and virtues (e.g., sportspersonship, commitment, work ethic, resilience, perseverance) in sports. Finally, we will outline possible reasons for allowing or prohibiting, to some extent, the use of these technologies in sports.

References

  1.  Joshi, Naveen (2019). “Here’s How AI Will Change The World Of Sports!” Forbes. https://www.forbes.com/sites/cognitiveworld/2019/03/15/heres-how-ai-will-change-the-world-of-sports/#10e5ff10556b
  2. Kwon, Diana (2017). “A Prosthetic Advantage?” The Scientist. https://www.the-scientist.com/notebook/a-prosthetic-advantage-30236

Fleur Jongepier (Radboud University, Nijmegen, The Netherlands)
Do algorithms know best?

According to Yuval Noah Harari, nothing much is left of the fundamental liberal assumption that individuals know themselves best. In the age of data-driven algorithms, “governments and corporations will soon know you better than you know yourself” (Harari 2018). Harari isn’t the only one to have made claims about the epistemic authority of algorithms. According to an (in)famous article which ended up playing a significant role in the Cambridge Analytica scandal, an algorithm needs only 10 ‘likes’ to know you better than a colleague does, and 70 likes to match your friends (Youyou, Kosinski, and Stillwell 2015). The knowledge algorithms have about us is increasingly being used by governments, for instance to determine the chance that someone will commit a crime, drop out of school, or illegitimately receive social benefits (NOS 2019).

When it comes to the growing influence of algorithms, much attention is given to the threats with respect to privacy violations and discrimination, and rightly so. But what gets less attention is the more subtle and complex issue of how growing deference to algorithmic authority changes the nature of social interactions. If algorithms and not individuals know best, then what we should be relying on is what algorithms say, not what people themselves say. This development is particularly acute in light of the increasing digitalization of public administration or ‘e-government’ (Smith, Noorman, and Martin 2010). Public officials increasingly make use of algorithms, which unavoidably changes the nature of the second-person relation between citizens and officials. In this paper I focus on the question of whether the growing trust in algorithmic authority threatens first-person authority (Bar-On 2004), the latter of which, I argue, is a fundamental part of what it means to respect someone as a person.

The aim of this paper is first of all to clarify the respective notions of first-person authority and algorithmic authority. I start out by arguing that epistemic authority or “knowing best” is an ambiguous notion. The epistemic authority of algorithms is a type of authority that is ‘detectivist’ or evidence-based. Given that algorithms not only have a much larger quantity of data available to them than individuals but also have a wholly new type of knowledge (Hildebrandt 2008), algorithms (and the institutions making use of them) indeed know individuals “best”, in a sense. However, drawing on recent literature in the debate on self-knowledge, I argue that self-knowledge isn’t just a matter of ‘detecting’ or finding out things about oneself on the basis of the available evidence. When it comes to judgment-sensitive attitudes like beliefs, intentions and conceptions of the good, individuals are authoritative because they have a performative type of authority. Individuals have a type of knowledge which algorithms cannot – in principle – acquire.

In the second part of the paper I turn to discuss the relevant moral ramifications of potential conflicts between first-person and algorithmic authority. I do so by applying Miranda Fricker’s work on epistemic injustice, i.e. the distinct type of injustice that occurs when someone is wrongly not treated as a knower. I shall suggest that if algorithms are unjustifiably deferred to in determining what people’s beliefs, intentions and conceptions of the good are, this may constitute testimonial injustice, since the individual is wrongly not treated as a self-knower.
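
To picture the ‘detectivist’, evidence-based authority at issue, one can think of the model behind the cited likes study as a regularized linear predictor from behavioural traces to trait scores. The sketch below is purely illustrative and hypothetical – synthetic data and a generic model, not the method of Youyou, Kosinski and Stillwell – but it shows the sense in which such a system ‘knows’ a person only by detecting patterns in evidence:

```python
# Illustrative sketch only: a toy likes-to-trait predictor.
# All data are randomly generated placeholders, not real users.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 200

likes = rng.integers(0, 2, size=(n_users, n_pages))        # binary user-by-page 'like' matrix
true_weights = rng.normal(size=n_pages)                     # hypothetical trait signal
trait = likes @ true_weights + rng.normal(size=n_users)     # synthetic trait scores

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, y_train)               # a generic regularized linear model
print("out-of-sample R^2:", round(model.score(X_test, y_test), 2))
```

Whatever its accuracy, a model of this kind only ever ‘detects’; it has no access to the performative, first-personal authority discussed above.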

References

  • Bar-On, Dorit. 2004. Speaking My Mind: Expression and Self-Knowledge. Oxford: Oxford University Press.
  • Harari, Yuval Noah. 2018. “Yuval Noah Harari: The Myth of Freedom.” The Guardian, September 14, 2018, sec. Books. https://www.theguardian.com/books/2018/sep/14/yuval-noah-harari-the-new-threat-to-liberal-democracy.
  • Hildebrandt, Mireille. 2008. “Defining Profiling: A New Type of Knowledge?” In Profiling the European Citizen: Cross-Disciplinary Perspectives, edited by Mireille Hildebrandt and Serge Gutwirth, 17–45. Dordrecht: Springer Netherlands. https://doi.org/10.1007/978-1-4020-6914-7_2.
  • NOS. 2019. “Overheid gebruikt op grote schaal voorspellende algoritmes, ‘risico op discriminatie’” [Government uses predictive algorithms on a large scale, ‘risk of discrimination’]. NOS, 2019. https://nos.nl/artikel/2286848-overheid-gebruikt-op-grote-schaal-voorspellende-algoritmes-risico-op-discriminatie.html.
  • Smith, Matthew, Merel Noorman, and Aaron Martin. 2010. “Automating the Public Sector and Organizing Accountabilities.” Communications of the Association for Information Systems 26 (1). https://doi.org/10.17705/1CAIS.02601.
  • Youyou, Wu, Michal Kosinski, and David Stillwell. 2015. “Computer-Based Personality Judgments Are More Accurate than Those Made by Humans.” Proceedings of the National Academy of Sciences 112 (4): 1036–40. https://doi.org/10.1073/pnas.1418680112.

Michal Klincewicz (Tilburg University, The Netherlands)
Directly affecting social change with computing technologies: Methods and pitfalls

Computing technologies will indirectly affect the way that moral norms govern human behavior by disrupting specific normative domains, such as the workplace, healthcare, war, and intimate relationships, among many others. What is not often discussed is how they can directly affect the way that humans navigate these normative domains by delivering interventions that stimulate, nudge, and improve moral decision making. In this paper I critically review three categories of proposals that aim to do this and evaluate (a) their potential to positively impact society in general and (b) the affordances each leaves for morally problematic repurposing.

The first category of proposal is the artificial moral adviser (Giubilini and Savulescu 2018; Lara and Deckers 2019; Klincewicz 2016), which can leverage automated language processing and mathematized first-order moral theories to deliver persuasive arguments and/or moral advice. This proposal appears to have potential for significant positive impact on society, especially since argumentation can be amplified by social media. These features also generate a worrying affordance for repurposing. A badly designed artificial moral adviser can deliver arguments that may on the surface appear to be grounded in moral theory and fact, but unbeknownst to its designers or users be actually grounded in prejudices or misinformation. At worst, the artificial moral adviser can be used to intentionally bamboozle or manipulate people.

The second category of proposal is the moral nudger (Borenstein and Arkin 2016; Savulescu and Maslen 2015), which can leverage affective computing and insights from human-technology interaction studies to deliver relevant information about any specific normative domain in real time, including information about the environment, one’s own mental state, or the mental states of others. This category of proposal appears to have limited potential for significant positive impact on society, since what counts as morally relevant is highly situational and specific to the individual. It is not easy to see how any specific nudging intervention could scale up to affect society at large. Repurposing of moral nudging interventions would likely be widespread, especially in contexts where situational awareness is particularly important, such as dangerous workplaces and war. Nonetheless, there are ways in which moral nudgers can be designed to improve moral cognition, and so become more akin to a moral teacher (Klincewicz forthcoming).

The third category of proposal is the moral teacher, which can leverage VR/AR and video games, among other technologies, to deliver moral knowledge, including moral know-how, to its human users (Brey 2008; Wildman and Woodward 2018). The potential impact of this category of intervention appears to be limited only by its availability: delivering moral facts through serious video games, or moral know-how via VR/AR simulations, can become a standard part of education or remain in the laboratory. The potential risks of repurposing associated with this sort of technology are large and arguably already pervasive and ongoing.

In sum, not all direct interventions in moral cognition are created equal, but some promise significant positive impact on society.

References

  • Brey, P. (2008). ‘Virtual Reality and Computer Simulation,’ Ed. Himma, K. and Tavani, H., Handbook of Information and Computer Ethics, John Wiley & Sons.
  • Giubilini, A. & Savulescu, J. (2018) ‘Artificial Moral Adviser: The “Ideal Observer” Meets Artificial Intelligence’ Philos. Technol. 31: 169.
  • Klincewicz, M. (forthcoming). ‘Robotic Nudges for Moral Improvement through Stoic Practice’ Techne: Research in Philosophy and Technology.
  • Klincewicz, M. (2016). ‘Artificial Intelligence as a Means of Moral Enhancement’ Studies in Grammar, Logic, and Rhetoric 48 (1):171-187.
  • Lara, F. & Deckers, J. (2019). Neuroethics. 1-17
  • Savulescu, J. and H. Maslen. (2015). ‘Moral Enhancement and Artificial Intelligence: Moral AI?’ Beyond Artificial Intelligence, Springer: 79–95.
  • Wildman, N. and R. Woodward. (2018). ‘Interactivity, Fictionality, and Incompleteness’ in The Aesthetics of Video Games, eds. G. Tavinor & J. Robson.

Olya Kudina and Ibo van de Poel (Delft University of Technology, The Netherlands)
Pragmatist account of value change

Much attention has recently been devoted to the phenomenon of technologically induced value change, whereby technologies can challenge existing value understandings and enable new ones. In this paper, we offer a pragmatist account of value change, building on the work of John Dewey. A turn to pragmatism helps to resolve a tension between value persistence on the one hand and value change on the other. According to Dewey, values are best understood from the perspective of practices that encompass people and their sociomaterial environment. We will show how this allows us to explain both the continuity of values, as an accumulation of previous experiences that helps guide action, and context-specific value judgement, which requires interpretation of previous experiences in the current practice. As a result, values appear not as finalities but as dynamic ends-in-view, as weighted judgements or evaluative devices that both result from reflecting on a present situation and give it guidance. Finally, we will explore different types of value dynamism, such as value specification and value adaptation, to elaborate on the role of technologies in this process. We will argue that technologies are not just disrupting existing ways of living, but also bring structural changes to human life and present people with new moral opportunities.

Marjolein Lanzing (Radboud University Nijmegen, The Netherlands)
Tapping the heart: The commodification of selves and relationships on Tinder

Intimate relationships are important for living an autonomous and flourishing life (Christman 2009; Vangelisti & Perlman 2006). ‘Quantified Relationship Technologies’ that collect one’s data in order to offer personalized feedback are becoming increasingly popular tools to improve one’s relationship management (Danaher, Nyholm & Earp 2018; Frank & Klincewicz 2018). However, they are often commercial. Commercial technologies that mediate and transform relationships by pushing market norms raise (new) ethical concerns about the commodification of relationships in the digital age. While users look for (and often find!) love on dating apps, large corporations capitalize on their desires. While some may argue that this is a fair trade, one may also wonder whether market norms can coexist with our old social norms in these intimate relationships or whether they distort and change our relationships for the worse.

In this paper I elaborate on the impact of dating apps on our social relationships by investigating them from the perspective of commodification. My aim is to examine to what extent dating apps that manage our intimate social relationships transform these relationships, and to contribute conceptual clarifications for evaluating these technologies from the perspective of commodification. As my case, I present the dating app Tinder.

The argument of this paper consists of six steps. I first describe the dating app Tinder and its business model. Secondly, I define the concept of commodification, taking a non-compartmentalist approach (Anderson 1990; Radin 1996; Roessler 2015; Sandel 2012): commodification is a continuum, which means that there are instances in which market norms can coexist with social norms. I conclude this section by identifying two interrelated phenomena of commodification on Tinder: 1) ‘being on the dating market’ and 2) ‘being on the data market’.

In the third section, I present three harms of commodification. The most important harms are reification and alienation. I use Eva Illouz’ research on objectification and commodification on dating sites (Illouz 2007) and Axel Honneth’s critique of reification (Honneth 2008). The other two harms involve devaluation and taking advantage of vulnerability, for which I draw on various normative criteria of inappropriate commodification (Anderson 1990a; Panitch 2016; Radin 1996; Roessler 2015; Sandel 2012; Satz 2010).

Fourth, I evaluate Tinder’s practices of commodification by interpreting the phenomena of ‘being on the dating market’ and ‘being on the data market’ based on the normative criteria indicated in the previous section.

In sections five and six, I reflect on the evaluation of Tinder’s practices of commodification. I conclude that dating apps mediate our self- and social relationships by structuring interaction according to market norms and by commodifying our intimate disclosures. This changes the character of these relationships. This is not necessarily problematic. Yet, if we want technologies that support developing relationships that empower us, we should exercise caution with regard to disruptive market influences in intimate social contexts. Inappropriately commodified relationships may inhibit rather than support an autonomous and flourishing life.

References

  • Anderson, E. 1990a. ‘The ethical limitations of the market’, Economics and Philosophy (6:2) pp. 179–205.
  • Christman, J. 2009. The Politics of Persons. Individual Autonomy and Social-Historical Selves. Cambridge: Cambridge University Press.
  • Danaher, J., S. Nyholm, and B. D. Earp. 2018. The quantified relationship. American Journal of Bioethics 18 (2):3–19.
  • Frank, L., and M. Klincewicz. 2018. Swiping Left on the Quantified Relationship: Exploring the Potential Soft Impacts, The American Journal of Bioethics, 18:2, pp. 27-28.
  • Honneth, A. 2008. Reification: New Look at an Old Idea. (Jay, M. ed.) Oxford: Oxford University Press.
  • Illouz, E. 2007. Cold Intimacies: The making of emotional capitalism. Cambridge: Polity Press.
  • Panitch, V. 2016. Commodification and Exploitation in Reproductive Markets:
    Introduction to the Symposium on Reproductive Markets. Journal of Applied Philosophy (33:2) pp. 117-124
  • Pham, A. & Castro, C. 2019. The moral limits of the market: the case of consumer scoring data. Ethics and Information Technology. https://doi.org/10.1007/s10676-019-09500-7
  • Radin, M. 1996. Contested Commodities. Cambridge MA: Harvard University Press
  • Roessler, B. 2015. Should personal data be a tradable good? On the moral limits of markets in privacy. In: Social Dimensions of Privacy (Roessler, B. & Mokrosinska, D. eds) Cambridge: Cambridge University Press. pp. 141-161.
  • Sandel, M. 2012. What money can’t buy. Penguin.
  • Satz, D. 2010. Why some things should not be for sale. Oxford University Press: Oxford
  • Vangelisti, A. & Perlman, D. 2006. The Cambridge Handbook of Personal
    Relationships. Cambridge University Press: Cambridge.

Sanna Lehtinen (University of Helsinki, Finland)
Intergenerational urban aesthetics: Sustainable directions for envisioning urban futures

Experienced quality of urban environments has not traditionally been at the forefront of understanding how cities evolve through time. Within the humanistic tradition, the temporal dimension of cities has been dealt with through tracing urban or architectural histories or science-fiction scenarios. However, attempts at understanding the relation between currently existing components of the city and planning based on them, towards the future, have not captured the experience of the temporal layers of cities to a satisfying degree. Contemporary urban environments comprise both lasting and fairly stable elements as well as those that change continuously: change is an inevitable part of urban life. Different aspects of city life evolve with a different tempo: urban nature has its cycles, inhabitants their rhythms, and building materials and styles different lifespans, for example. This becomes an especially important issue when future imaginaries are projected onto existing urban structures and when decisions about the details of urban futures are made.

This paper aims at bringing environmental and urban aesthetics into the discussion about the possible directions of urban futures. The focus is on introducing the notion of aesthetic sustainability as a tool to better understand how urban futures unfold experientially and how the aesthetic values of urban environments develop with time. The concept has some background in the field of design, more specifically in sustainable usage and product design, but it has so far not been used to study large-scale living environments. The concept can prove to be a valuable supporting tool in urban sustainability transformations because of how it captures the experiential side of the physical and temporal dimensions of cities.

Yotam Lurie (1) and Shlomo Mark (2) ((1) Ben-Gurion University, (2) Sami Shamoon College of Engineering, Israel)
Ethical framework as a quality driver in agile based development of new technology

Agile development methods, which originated in the field of software projects, are becoming increasingly pertinent in engineering, business management and the design of new technologies. Agile processes are interactive, iterative and highly rapid, adjustable and flexible, and they focus on the “human factor”. This paper explores the sense in which a set of ethical tools, adopted and adjusted for this particular context, can serve as guiding and regulating tools in agile managerial and development processes, in order to provide higher quality.

Quality in engineering is a complex and highly context-dependent concept and has numerous definitions and approaches. In fact, there is no single agreed-upon definition of quality. Essentially, there are two main conceptions of how to approach quality improvement: the “product-based” approach and the “process-based” approach. We take the process-based approach.

Turning to ethics, ethics is not just a matter of individual morality. Rather, there is a sense in which ethics has to do with systemic matters. Ethical tools are a way of dealing with these systemic issues: with teamwork, stakeholder involvement, transparency, identifying failures and mistakes, and so forth. Ethical tools serve to create shared norms of behavior within an organizational setting that safeguard against ethically problematic behavior and lead the development team to a higher ethical standard.

Putting all this together, an ethical managerial approach to the design of new technology is conceptualized as the practices, roles, ceremonies, and artifacts that provide a skeletal abstraction of a solution to a number of similar problems. It has to be a reusable, extendable and abstract set of basic objects that characterize it as a quality driver within the agile approach. A specific aim of this paper is to expand the agile manifesto’s values and principles by examining how ethical tools, ethics by design, can be integrated into the agile process. In so doing, the ethical managerial approach serves as a quality driver.

One goal of this paper is to explore whether the outcome of this synergy between professional agile skills and ethical tools provides higher quality: the provision of more efficient, higher-quality solutions that are also better from an ethical standpoint. We hypothesize that the implementation of an ethical managerial approach within the framework of an agile approach will promote quality.

To conclude, the overall aim of this paper is to introduce the notion of an ethical managerial approach, as ethics by design in agile management processes, in order to deliver quality value. The significance of this interdisciplinary paper is both theoretical and practical. Theoretically, it examines and seeks to assess the sense in which ethical tools can serve as a quality driver in agile managerial processes, as well as the conditions for a fruitful adoption of agile in managerial processes. The practical significance of this research is a deeper, more concrete understanding of how and when it is appropriate to adopt agile methods.

Bibliography

  • Lurie, Yotam, and Shlomo Mark. “Professional Ethics of Software Engineers: An Ethical Framework.” Science and Engineering Ethics 22.2 (2016): 417-434.
  • Abdulhalim, H., Lurie, Y. and Mark, S. “Ethics as a Quality Driver in Agile Software Projects.” Journal of Service Science and Management 2:1 (2018): 13-25.

Lavinia Marin (Delft University of Technology, The Netherlands)
Looking for the truth online. Communities of inquiry on social media

Online social media are platforms designed to serve different functions for their users, such as relating to others and providing entertainment, while also bringing in advertising revenue. In spite of this initial intent, people increasingly use social media (SM) for epistemic purposes: information gathering and sharing, and belief formation. Since SM were not designed to function as journalistic platforms, with no fact-checking and no editorial oversight, their epistemic functions are far from virtuous, often leading to undesirable phenomena such as echo-chambers and trapping their users in filter-bubbles. Recent disinformation campaigns have led to the increasing polarisation of users into isolated communities of believers, instead of open communities of inquiry. If, following Jason Baehr, inquiry is taken to be an intellectual virtue, it is worth researching whether SM are actively promoting epistemic vices in their users. Hence the main research question of this presentation: to what extent do SM lead to virtuous or vicious epistemic environments through their design features? The focus of this presentation is restricted to online debates: how do users reason and argue for their point of view on SM? What features of SM foster critical engagement in debates? Are there specific epistemic virtues or vices which are fostered by SM? And, furthermore, are there specific epistemic virtues that need to be promoted online? Starting from the concept of cognitive affordances as developed by Andy Clark, we will argue that in the case of SM we should look into the concept of negative scaffoldings, which are features of the environment inhibiting the normal functioning of human cognitive capacities. This concept of negative scaffoldings will then be related to epistemic virtue theory. While classical virtue ethics places the responsibility for virtue on the agent herself, in this presentation we want to shift from this agent-centred view to a complementary perspective, by emphasising also the role that the community plays in fostering an agent’s acquisition of virtues; furthermore, for the case of online social media, we will argue that the medium through which a community exercises its influence on the epistemically virtuous agent also matters to a large extent. Our angle of research is not meant to exonerate individual users from their responsibilities, but to disclose the previously neglected role that media play in debates.

Anna Melnyk (Delft University of Technology, The Netherlands)
Why Value Sensitive Design fails its task in design for changing values? The case of community energy systems

Values have a fundamental role in the energy sector since they shape our socio-technical systems. Value Sensitive Design (VSD) is a commonly used approach for the embodiment of values in technical and institutional design. Numerous scholars have analyzed and applied VSD to a broad range of topics in the energy sector, including shale gas, nuclear energy, biofuels, offshore energy parks, and smart grids. Specific references have been made to changes in the prioritization and conceptualization of values and to the emergence of values in public debates about energy technologies. However, each of these applications has lacked explicit sensitivity to the dynamic nature of values, which may change after a system has already been designed. As a consequence, a mismatch may occur between the values that were embodied in the past, when the socio-technical system was designed, and the new and emerging values we find relevant today. Such a static vision of values in VSD and its applications results in short-term solutions to design for values. By “locking” certain values into socio-technical systems for decades, designers determine the societal path without room for new, potentially relevant alterations. In the long run, such a static perception of values may undermine the adaptability of design to changing values. Hence, the role of anticipation should be reconsidered as a necessary element of VSD if it is applied in the energy sector and aimed at long-term design adaptable to future value change. As a key focus, this paper considers the current low-carbon transition to decentralized community energy projects and the corresponding design choices. It offers a deliberate analysis of ethical acceptability and questions the moral foundations of the low-carbon transition from the perspective of long-term design. The aim of this paper is to present some insights about the role of changing values in the low-carbon energy transition in order to confront VSD with these findings and achieve two outcomes: i) highlight the design challenges that are raised by changing values in the current energy transition, and ii) use these insights to indicate the anticipatory gap that VSD has as a method to design for values.

Parisa Moosavi (University of Toronto, Canada)
The good of non-sentient organisms and the moral status of robots

The fact that artificially intelligent robots are becoming increasingly incorporated in our social and personal lives has led some authors to argue that robots should be granted moral rights (Whitby 2008; Coeckelbergh 2010; Robertson 2014). Some such arguments appeal to an analogy between robots and animals, while others rely on claims about robots’ potentially developing mental capabilities like consciousness. In response to these arguments, opponents argue that robots cannot have the capacity for suffering in the same way as sentient animals, and therefore should not be granted moral rights (Johnson 2018).

However, the idea that the capacity for sentience is a necessary condition for moral considerability has been contested. Some environmental ethicists argue that non-sentient biological organisms, species, and ecosystems can be candidates for moral patienthood, because they have a good of their own (Goodpaster 1978; Attfield 1981; Basl and Sandler 2013). This raises the question whether non-sentient robots can similarly enjoy moral patienthood.

In this paper, I first give an account of what makes certain non-sentient biological entities potentially morally considerable, and then explain why this moral considerability does not extend to non-sentient robots. My account, which is based on Foot’s (2001) conception of natural goodness and McLaughlin’s (2001) welfare-based account of functional explanation, shows that non-sentient goodness requires intrinsic teleology – a form of functional explanation that is applicable independently of the interests of humans or other external parties. In the case of ecological entities, I argue that while non-sentient individual organisms can have a good of their own, the relevant notion of welfare does not extend to other biological categories that interest environmental ethicists. In the case of non-sentient robots, I argue that to the extent that they are not plausibly characterized as having intrinsic functions, they should not be treated as moral patients or given moral rights.

References

  • Attfield, R. (1981). The good of trees. Journal of Value Inquiry 15 (1), 35–54.
  • Basl, J. & Sandler, R. (2013). The good of non-sentient entities: organisms, artifacts, and synthetic biology. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 44 (4), 697–705.
  • Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12(3), 209-221.
  • Johnson D. G. (2018). Why robots should not be treated like animals. Ethics and Information Technology 20, 291–301
  • Foot, P. (2001). Natural Goodness. Oxford University Press.
  • Goodpaster, K. E. (1978). On being morally considerable. The Journal of Philosophy 75 (6), 308–25.
  • McLaughlin, P. (2001). What Functions Explain: Functional Explanation and Self-Reproducing Systems. Cambridge University Press.
  • Robertson, J. (2014). Human rights vs. robot rights: Forecasts from Japan. Critical Asian Studies, 46(4), 571–598.
  • Whitby, B. (2008). Sometimes it’s hard to be a robot: A call for action on the ethics of abusing artificial agents. Interacting with Computers, 20(3), 326-333.

Christopher Nathan (The University of Warwick, United Kingdom)
Disagreement and catastrophe

I argue that value disagreement is a key obstacle to the governance of anthropogenic global catastrophic risks (AGCRs), such as those risks to humanity arising from climate change, nuclear war, artificial intelligence, and biotechnology. My central claim is that within AGCR governance, a heterogeneity of views about norms forms an under-appreciated obstacle.

In establishing this, I set out a map of the forms of disagreement that are likely to arise in AGCR governance, focusing on two key slogans: (1) those who benefit from a technological intervention should pay its costs, and those who face its cost should gain its benefits (the ‘cost iff benefit’ principle); and (2) technological interventions should be for the benefit of all (the ‘benefit of all’ principle). Those two slogans are apparently congruent. Indeed the first would seem to entail the latter in the context of AGCR, where a cost in the form of a risk to life is imposed upon all globally, and so the benefit of such technological interventions ought also to benefit all globally. However, neither slogan can be taken literally, and each is open to multiple interpretations regarding the issue of how to deal with uncertainty and risk. The problem of what principles would guide a fair distribution of the costs and benefits of AGCR-threatening technology is open to genuine disagreement. Further, I suggest, one of the obstacles to governing such technologies is precisely these disagreements. I present a case study in climate change policy as a way of illustrating this, showing how the commitment to agreement on process rather than principle in the Paris Agreement has been more conducive to action in spite of disagreement about values.

Finally, I examine the kinds of institution that would overcome these disagreements about value, noting the importance both of ‘top-down’ and ‘bottom-up’ forms of governance, and in particular the dangers of overcentralised governance. In a recent working paper, Nick Bostrom intimates that draconian centralisation of global power is necessary to avoid catastrophic risk. While Bostrom presents his argument as an element of a trade-off between security and other values, I argue that such centralised governance structures can themselves be open to exploitation by a few individuals. There are security risks that go with a centralised world order. Furthermore, I argue that we are more likely to reach a centralised world state that has legitimate preventive policing and surveillance through mechanisms that promote agreement, and that such policing functions can be exercised in less centralised ways that are less liable to dangerous capture. There is thereby strong reason to tilt towards what I describe as decentralised structures.

  • Bostrom, Nick. “The Vulnerable World Hypothesis.” (2018): http://www.nickbostrom.com/papers/vulnerable.pdf
  • Chan, Nicholas. ‘Climate Contributions and the Paris Agreement: Fairness and Equity in a Bottom-Up Architecture’. Ethics & International Affairs 30, no. 03 (2016): 291–301. https://doi.org/10.1017/S0892679416000228.
  • Held, David. ‘Reframing Global Governance: Apocalypse Soon or Reform!’ New Political Economy 11, no. 2 (2006): 157–176.
  • Kofler, Natalie, James P Collins, Jennifer Kuzma, Emma Marris, Kevin Esvelt, Michael Paul Nelson, Andrew Newhouse, Lynn J Rothschild, Vivian S Vigliotti, and Misha Semenov. “Editing Nature: Local Roots of Global Governance.” Science 362, no. 6414 (2018): 527-529.
  • Pickering, Jonathan, Steve Vanderheiden, and Seumas Miller. ‘“If Equity’s In, We’re Out”: Scope for Fairness in the Next Global Climate Agreement’. Ethics & International Affairs 26, no. 4 (ed 2012): 423–43. https://doi.org/10.1017/S0892679412000603.

Philip Nickel (Eindhoven University of Technology, The Netherlands)
Technological disruption and moral uncertainty

This paper analyzes the way in which technology can bring about moral disruption, defined by Robert Baker (2013) as a process in which technological innovations undermine established moral norms without clearly leading to a new set of norms. I analyze this process in terms of technological disruption creating moral uncertainty.

Baker’s historical case is the introduction of mechanical ventilation and organ transplantation technologies. The introduction of mechanical ventilation technologies in the mid-Twentieth Century led to situations in which individuals who would never regain minimal function were kept alive for long periods of time. This created moral uncertainty about the appropriate regard for these people’s bodies: whether to treat a living person who lacks brain function as being equally worthy of the dictum “Do no harm” and the norm against withdrawal of aid (Beneficence). In addition, by increasing opportunities for effective organ donation, it opened up the possibility to use “brain dead” bodies as a life-saving resource, creating further uncertainty about the moral status of these bodies and their parts.

In this paper, I explore the possibility that such moral uncertainty is a harm. The setback to interest in the case of mechanical ventilation consisted of an inability to know one’s own and others’ obligations regarding withdrawal of ventilation and questions of organ transplantation and use. Historically, a physician who believed in the principle of “do not harm” might have been uncertain whether it applied to a living person with no higher brain function, kept alive by mechanical ventilation technology. In a condition of moral uncertainty, the physician might do things that will turn out to have been morally wrong in historical perspective and for which they may be morally judged. Even if s/he is not blameworthy in the final analysis, his or her own conscience may raise persistent questions about whether s/he should have acted differently. On this analysis, then, people do not simply dislike moral uncertainty. They have good reason to feel that it undermines their moral agency.

Moral uncertainty also seems to harm trust and commitment between the physician and the family members of the artificially ventilated person. The family members would not have known to what standards and expectations they should hold the physician when relying on him or her, nor would the physician know what the family could reasonably trust him or her to do in this situation.

After setting out this account of moral disruption, I relate it to philosophical theories of technomoral change, change of norms, and moral progress. I argue that accounts of moral change and progress have been blind to the possibility of harmful moral uncertainty.

Peter Novitzky and Vincent Blok (Wageningen University and Research, The Netherlands)
Steps towards defining a philosophy of innovation

Is it possible to develop disruptive technologies that are at the same time responsible innovations for societies? We approach this question from the viewpoint of Clayton M. Christensen (1997, 2003), the author of the term ‘disruptive innovation’. We highlight that the business-originated definition of disruptive innovation differs substantially from current philosophical interpretations of disruptive technologies (e.g. Floridi 2004). This conceptual divergence has a key influence on the requirement of defining responsibility for disruptive innovations. Our further investigation will employ the capability approach as defined by Amartya Sen in order to provide a normative framing for determining the criteria of responsibility for disruptive innovations.

Ezio Di Nucci (University of Copenhagen, Denmark)
Medical AI: A dilemma of concordance?

We are witnessing growing delegation of increasingly complex and delicate human tasks to IT systems, including ‘machine learning’ and ‘artificial intelligence’ systems. This is happening across core domains such as the military, law and healthcare. When a system is designed and introduced to perform and replace a task previously fulfilled by a human agent (or an older technological system), we are confronted with a dilemma of concordance; or so I argue in this paper.

The dilemma of concordance goes like this: the performance of the intelligent IT system to which we delegate the previously human task can either have high levels of concordance with the humans (or older technological systems) already fulfilling that task, or it can have low levels of concordance. If the new system has high levels of concordance, then it can be argued to be redundant on the humans or older systems already performing that task. And if the new system has low levels of concordance, then it can be argued to be unsafe and we may not deploy it.

One case study of this dilemma of concordance is IBM’s attempts over the last decade to develop and market its Watson system to oncologists as a decision-support platform delivering ranked therapeutic options for cancer patients, so-called IBM Watson for Oncology. IBM used supposedly high concordance levels to market its system to oncology departments around the world.

Can we offer a solution to the concordance dilemma? We can if we distinguish between so-called ‘strategic delegation’ and ‘economic delegation’: according to the former, we delegate in order to improve performance, like when we delegate our tax affairs to an accountant. According to the latter, we delegate in order to spare resources, like when a cleaner takes care of our home while we go earn more money than we pay them.

If we delegate strategically, then high concordance is indeed bad news but low concordance is not: indeed, we would reasonably expect low concordance if the point of delegation is to improve performance, given that we are aiming for different – better – outcomes. And if we delegate economically, then high concordance is not necessarily bad news, because we would be satisfied with comparable outcomes given fewer invested resources – indeed, we might be satisfied even with marginally worse outcomes if the savings are big enough. But in the case of economic delegation it is low concordance levels that are possibly bad news, given that we are aiming for comparable outcomes at a lower price.

Summing up, a solution to the dilemma of concordance has been put forward by distinguishing between strategic delegation and economic delegation. But this is clearly not the end of the story for concordance as a performance metric: what about the fallibility of the control group? And what about evaluating performance through objective standards which do not depend on the performances of the humans or older technology that used to be in charge of the relevant task?
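
To make the metric at issue concrete: concordance is typically reported as the share of cases in which the system’s top-ranked recommendation matches the decision of the human expert or tumour board. The following is a minimal illustrative sketch with hypothetical cases (not IBM’s evaluation protocol):

```python
# Illustrative sketch: a simple concordance rate between a decision-support
# system's top recommendations and clinicians' choices (hypothetical data).
def concordance_rate(system_choices, human_choices):
    """Fraction of cases where the system's top recommendation matches the human decision."""
    assert len(system_choices) == len(human_choices)
    agreements = sum(s == h for s, h in zip(system_choices, human_choices))
    return agreements / len(system_choices)

system       = ["chemo A", "chemo B", "surgery", "chemo A", "radiation"]
tumour_board = ["chemo A", "chemo A", "surgery", "chemo A", "radiation"]

print(f"concordance: {concordance_rate(system, tumour_board):.0%}")  # -> concordance: 80%
```

On the argument above, whether such a figure counts as good or bad news depends on whether the delegation is strategic or economic.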

Sven Nyholm (Eindhoven University of Technology, The Netherlands)
Can robots act for reasons?

Robots – like self-driving cars or automated weapons systems – are often said to need to be able to make life and death decisions. A self-driving car might need to decide to go left (and possibly kill five people) or go right (and possibly kill one person). Similarly, a military robot might need to decide whether or not to fire at a human target, where this decision might depend on whether the human being is a civilian or a combatant. Proponents of “machine ethics” claim that it is possible to create machines that act on the basis of moral principles (Anderson & Anderson 2007). In response to such suggestions, Brian Talbot, Ryan Jenkins, and Duncan Purves have argued, firstly, that robots cannot act for reasons and, secondly, that they can therefore act neither rightly nor wrongly (Talbot et al. 2017). The presumption behind this is that whether an agent acts rightly or wrongly (for example in making a life-or-death decision) depends on the reasons for which the agent acts. In my presentation, I plan to discuss the question of whether robots can act for reasons, first by considering the robot’s agency in isolation and then by considering the robot’s agency in relation to the human beings around it.

In discussing whether a robot can act for reasons, I will rely to a certain extent on Susanne Mantel’s account of what it is to act for reasons, as presented in her book Determined by Reasons. (Mantel 2018) I will also draw on my own previous work on how to fill responsibility gaps, in which I argue that robot agency is best understood as a sort of collaborative agency, where certain human beings always play key roles. (Nyholm 2018) Since I am interested in robots supposedly needing to make life and death decisions, I will focus on moral reasons, by which I mean the sorts of considerations the main ethical theories claim to be important from an ethical point of view.

I will argue that, considered in isolation from the human beings around them, robots can be seen as capable – or potentially capable – of acting for reasons roughly in the sense described by Mantel. However, since robots cannot (yet) be properly praised or blamed or otherwise held responsible for their actions, robots themselves cannot have obligations. So even if robots considered in isolation can be said to be responsive to moral reasons, this might still not mean that they can act rightly or wrongly in the sense of being able to have obligations. However, I will argue that the agency of robots is best understood in terms of human-robot collaborations, where humans play managerial or supervisory roles. I argue that this means two important things: first, the human beings can be held responsible for what the robots do (since what the robots do can be understood in terms of how they collaborate with the human beings in question). And second, the corporate agents formed in human-robot collaborations can act rightly or wrongly.

References

  • Anderson, M. & Anderson, S. (2007): Machine Ethics: Creating an Ethical Intelligent Agent, AI Magazine
  • Mantel, S. (2018): Determined by Reasons. Routledge
  • Nyholm, S (2018): Attributing Agency to Automated Systems: Reflections on Human-Robot Collaborations and Responsibility-Loci, Science and Engineering Ethics
  • Talbot, B., Jenkins, R., & Purves, D. (2017). When Robots Should do the Wrong Thing, in Robot Ethics 2.0. Oxford University Press

Isaac Oluoch, Michael Nagenborg, Monika Kuffer and Karin Pfeffer (University of Twente, The Netherlands)
(Machine) learning about populations in need of support: The ethics and politics of ‘slum’ mapping

Following the United Nations’ Sustainable Development Goal 11, aiming at ‘adequate housing and infrastructure to support growing [urban] populations, [and] confronting the environmental impact of urban sprawl’, there has been a growing international concern with identifying ‘slums’ in cities (predominantly in the Global South) in support of the SDG indicator 11.1.1 (“Proportion of urban population living in slums, informal settlements or inadequate housing”). The identification of slums through geo-spatial, earth observation (EO) based, mapping strategies allows localizing slums and can fill existing data gaps by providing more consistent information on this SDG indicator (which can assist in devising policies for upgrading these slums) (Kuffer et al., 2016).

But geo-spatial mapping strategies to identify slums must first answer the question: what is a ‘slum’? The term ‘slum’ is used by UN-Habitat at the household level, while ‘informal settlement’ is used at the area level, and ‘inadequate housing’ at the house level. Moreover, the term ‘slum’ is geographically and culturally context-dependent, with different countries having different terms to describe what a ‘slum’ is. What is consistent, however, is that in each of the areas where ‘slums’ are said to exist, they are rapidly increasing, both in numbers and in spatial extent. For both government and non-government organisations (such as Slum Dwellers International), it is difficult to a) accurately know how big slums actually are within a city, and b) develop adequate strategies for upgrading slums when there is a lack of up-to-date and consistent data about them. Mapping slums can therefore help put slum dwellers and their communities on the political agenda, a principal aim of Slum Dwellers International. But at the same time, by making these areas and the communities living in them visible on maps (in particular with top-down mapping approaches) without their permission or sufficient information about the socio-political issues, the making of these maps can lead to conflicts such as forced evictions and land rights disputes.

One of the core concerns here is that (automated) EO technologies are “top-down technologies” (in a very literal sense). And in principle, at least at first glance, there is also no inherent technical need to collaborate with the inhabitants of slum areas in the production of such maps. However, the employment of such technologies only seems reasonable if automated slum detection becomes part of pro-poor development. The ultimate goal should be to support the inhabitants in finding ways to improve their living conditions.

Given the ‘top-down’ nature of these technologies, we also need to be aware of the risk of framing ‘slum dwellers’ as an object of governance from a distance. Research into the politics of the transition towards sustainable and resilient cities emphasizes the need to recognize the rights and privacy of all inhabitants and to understand how slum dwellers build their resilience in an otherwise hostile environment, which can only be observed bottom-up (with community-based mapping strategies).

In our paper, we will explore how we can make responsible use of top-down technologies in collaboration with bottom-up processes and how we can support the people on the ground with the help of the ‘eyes in the sky’.
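
To indicate what the automated, ‘top-down’ part of such mapping typically involves, the sketch below shows a generic tile-based classifier of the kind used in EO-based detection. It is purely illustrative – synthetic features and labels, not the authors’ pipeline or real imagery:

```python
# Illustrative sketch only: a generic classifier over satellite-image tiles.
# Features and labels are synthetic placeholders, not real EO data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n_tiles, n_features = 2000, 16               # e.g. texture, density and spectral statistics per tile

X = rng.normal(size=(n_tiles, n_features))
y = rng.integers(0, 2, size=n_tiles)         # 1 = labelled 'informal settlement', 0 = other urban fabric

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Nothing in this pipeline requires input from the people being mapped, which is exactly why coupling it to bottom-up, community-based processes matters.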

Acknowledgement

The paper is based on the master’s thesis of Isaac Oluoch.

Stef van Ool and Katleen Gabriels (Maastricht University, The Netherlands)
All that is solid melts into code. An analysis of moral argumentation in debates on big data and algorithmic decision-making

This paper analyzes ethical debates on big data and algorithmic decision-making (ADM). The body of data consisted of ten expert interviews. The main arguments were analyzed and categorized in order to study recurring patterns of moral argumentation. Findings reveal that consequentialist considerations were most dominant.

The promises and pitfalls of algorithms and big data are widely discussed in academic and societal debates (see e.g. Mittelstadt et al., 2016; O’Neil, 2016). Algorithms are extremely helpful, especially in analyzing complex datasets. In healthcare, ADM has already shown great promise (see e.g. Esteva et al., 2017). Yet, several concerns have been raised, for instance concerning fairness, transparency, and non-neutrality. Critics warn that blind application of ADM might amplify biases already present in data.

This study is particularly interested in the arguments of experts. Our sample consisted of ten experts working in research, the European Commission, NGOs, the private sector, politics, and the Dutch police. Nine of them were Dutch, which allowed us to focus on the Dutch public sector. Swierstra and Rip (2007) offer a framework to analyze underlying patterns of moral argumentation in debates on new technologies. Drawing upon their framework, we categorized our dataset (46000 words) according to four overarching ethical theories: consequentialism, deontology, distributive justice, and ‘good life’ ethics.

Our empirical findings point to the underlying dynamics that drive ethical deliberation concerning ADM and big data. They show, firstly, that consequentialist considerations are dominant when promoting technologies based on algorithmic models. This so-called ‘efficiency paradigm’ receives substantial criticism based on equally consequentialist reasoning: general and vague promises are called into question, and experts point to a mysticism surrounding big data and algorithms that informs a rather naïve belief in what these technologies are actually capable of.

Secondly, deontological principles such as transparency and fairness are invoked to counter the presumed negative effects of consequentialist reasoning. Critics and proponents alike stress the importance of ADM procedures being transparent and explainable in such a way that the overall model can be scrutinized, while also emphasizing the need for a human in the loop in cases where highly impactful decisions need to be made. Yet, operationalizing these principles into concrete practices remains problematic.

Furthermore, arguments focus on a just distribution of costs and benefits. How, if at all, can we guarantee that certain parts of the population are not disproportionately faced with the negative effects of ADM? This question is related to a fourth category of arguments, namely those that refer to the ways in which general societal value structures guide our use of technology. The way in which we, as a society, define what is to be considered a problem in the first place determines how we use an algorithmic model to solve it. These definitions are mutually shaped by technological capabilities. What can be processed by an algorithm becomes important, while the rest is at risk of losing relevance.

Overall, this descriptive ethical approach seeks to inform future inquiries into specific domains where ADM is used.

References

  • Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks. Nature, 542, 115-118.
  • Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society, 3(2), 1-21.
  • O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin Books.
  • Swierstra, T., & Rip, A. (2007). Nano-Ethics as NEST-Ethics: Patterns of Moral Argumentation About New and Emerging Science and Technology. NanoEthics, 1, 3-20.

Zoë Robaey (Delft University of Technology, The Netherlands)
Virtues for innovation in practice: A virtue ethics account of moral responsibility for biotechnology

Recent techniques, such as gene editing, complicate the biotechnology debate because they challenge the legal boundaries governing biotechnology, and their use begets empirical, social and ethical uncertainties. Gene editing in agricultural, industrial, and medical biotechnology is a socially disruptive technology. How can we develop and use new techniques in biotechnology responsibly?

The field of Responsible Research and Innovation (RRI) has strived to provide an activity-based account that addresses the problem of societal responsiveness along the dimensions of inclusion, anticipation, reflexivity, and responsiveness on the part of the innovator (Von Schomberg, 2013; Stilgoe et al. 2013; Sonck et al., 2017). Engineering ethics has also developed innovative approaches, such as Value Sensitive Design (Friedman et al., 2006; Van den Hoven, 2013), and formulated conceptions of moral responsibility under uncertainty (Van de Poel, 2016; Robaey, 2017). Yet these frameworks have limitations when it comes to the uncertainties brought about by biotechnology, because such innovations involve living organisms, and few explicitly examine the practice of innovation in connection to the uses of innovations, where uncertainties actually arise. Considering responsibility in practice invites formulating a virtue ethics account of moral responsibility for uncertainties in biotechnology. The advantage of pursuing a virtue ethics account of moral responsibility, connecting the practices of innovators and users, is that it can stay stable while being flexible in its application to a specific context, and at different moments in the life of an innovation.

I have previously argued that forward-looking moral responsibility for uncertain risks could be defined as the cultivation of epistemic virtues (Robaey, 2016). Van de Poel and Sand (2018) describe responsibility-as-virtue as the capacity to accept other types of responsibilities and recognize new normative demands. Pellé (2015) identifies virtue ethics as a main rationale in RRI. However, neither account sufficiently explains how agents can act responsibly in accordance with their virtues. The relationship between “doing the right thing at the right time” (MacIntyre, 1981; Vallor, 2016) and moral responsibility thus needs teasing apart. To understand what uncertainties an innovator or a user is responsible for, I argue for investigating three areas which are, not coincidentally, the most contentious issues in the biotechnology discourse regarding uncertainties: safety and security, sharing benefits, and naturalness. If these are the objects of responsibility, what virtues help one to be responsible for them?

This paper considers innovators and users in order to develop an account of moral responsibility for dealing with uncertainties specific to biotechnology. This paper constitutes a first step towards a theoretical framework identifying the virtues for innovation in practice (VIPs) in relation to three areas where moral responsibility is necessary. To do this, I build on recent ethics scholarship taking renewed interest in neo-Aristotelian accounts of the virtues. I will draw from the literature on techno-moral virtues (Vallor, 2016), virtues of innovators (Sand, 2018), epistemic virtues (Robaey, 2016), green virtues (Jamieson, 2007) and virtues for risk decision-making (Athanassoulis and Ross, 2010).

Per Sandin (1), Helena Röcklinsberg (1) and Mickey Gjerris (2) ((1) Swedish University of Agricultural Sciences, Sweden; (2) Copenhagen University, Denmark)
More and less natural biotechnologies and disruption of social orders

Many technologies that have been socially disruptive have been agricultural technologies. One example is how more efficient cultivation equipment prompted social innovations like land-use reforms, which led to the breaking up of medieval villages into smaller farm units. In recent times, the agricultural technologies of the 1960s and 1970s ‘Green Revolution’ have had similar results, in addition to their other obvious negative consequences (such as chemical and nutrient leakage) and benefits (increased food production and prosperity).

As noted already by Hume and Mill, naturalness is a concept with many interpretations. Among them we find ‘opposite of the supernatural’, ‘independence from human beings’ and ‘environment friendliness’ (Siipi 2015) as well as ‘healthiness’ and nutritive suitability (Siipi 2013). While notoriously elusive, naturalness is nevertheless thought to carry considerable normative weight. Food items, for instance, are marketed as containing ‘all natural ingredients’, and there are plenty of examples in legislation: Laws on animal welfare require that animals be allowed to exercise their natural behavior, and in the EU legislation on genetically modified organisms (GMOs), the definition of GMO states that a GMO is ‘an organism in which the genetic material has been altered in a way that does NOT OCCUR NATURALLY by mating and/or natural recombination’ (Directive 2001/18/EC, emphasis added).

One aspect of naturalness is the idea of a social order. The notion is not unfamiliar in discussions of naturalness, which go back at least to the Stoics (Soper 1995), but it is less often discussed in the context of agricultural biotechnology. Here naturalness has primarily been seen as an objective property of plants and other organisms. However, many people’s concerns about biotechnological applications in agriculture and food have more to do with social aspects, including concerns about the potential disruption of farming communities (at the producer end of the food chain) and radically altered practices around eating (at the consumer end). This is particularly salient in the Global South, where regulatory and commercial practices connected to new technologies (for instance patenting) have resulted in radical social change.

Thus, the seemingly straightforward question ‘is this natural…?’ might be misdirected. Instead, we propose that the distinction between natural and unnatural be unpacked as a multi-dimensional, relational concept in terms of factors related to social orders rather than (mere) technological ontology.

We explore the ramifications of this in relation to some biotechnological applications: gene-edited cows (Eriksson et al. 2017) and insects (Gjerris et al. 2018). Such projects are likely to be controversial, at least within the EU: 70% of EU consumers agreed that ‘GM food is fundamentally unnatural’ (European Commission 2010). By unpacking the social concerns often hidden under the term ‘unnatural’, we aim to show that at least some of the skepticism towards GM technology is based on concerns about the social disruptiveness of the technology rather than on an ontological understanding of a sharp divide between natural and unnatural. This, in turn, can help shape the social context of the technology, thus allowing certain uses of it in the transition towards more sustainable food production.

References

  • Eriksson, S, Jonas, E, Rydhmer, L, Röcklinsberg, H. 2017. Invited review: Breeding and ethical perspectives on genetically modified and genome edited cattle. Journal of Dairy Science 101: 1-17.
  • European Commission (2010): Eurobarometer 73.1 Biotechnology. European Commission, Brussels
  • Gjerris, M., Gamborg C, Röcklinsberg H. 2018. Could crispy crickets be CRISPR-Cas9 crickets – ethical aspects of using new breeding technologies in intensive insect-production. In: Springer, S. and Grimm, H. Professionals in food chains. Wageningen: Wageningen Academic Publishers, 424-429.
  • Siipi, H. 2013. Is natural food healthy? Journal of Agricultural and Environmental Ethics 26:797-812.
  • Siipi, H. 2015. Is genetically modified food unnatural? Journal of Agricultural and Environmental Ethics 28:807–816.
  • Soper, Kate. 1995. What is nature? Oxford: Blackwell.

Mehmet Sinan Senel (1) and Halil Turan (2) ((1) Katholieke Universiteit Leuven, Belgium; (2) Middle East Technical University, Turkey)
A critical assessment of mediation theories in philosophy of technology

In classical philosophy of technology, typically represented by Heidegger and Jaspers, there is a clear distinction between the human being as the subject and the technological artifact as the object. This view has been challenged by contemporary approaches such as postphenomenology, a turn characterized by the employment of empirical methods and by the claim that the human being and technology transform and therefore constitute each other. Here the key term for conceptualizing this transformation is ‘mediation’, which displaces the more negatively sounding ‘alienation’. Thus, for example, it is argued that technologies function as mediators that shape both human subjectivity and the objectivity of the world. The postphenomenological view is certainly helpful for understanding the contemporary human being and the world as constituted by her: the tools shaped by us are shaping our lifeworld. But this philosophical perspective, which tries to overcome the subject-object dichotomy and to avoid the related alienation argument, may imply an overoptimism regarding technology and its impact on our world. The stress laid on the mediation of technology leads to a view of the ordinary human being as a user (or consumer) of technological artifacts and obscures her role in actively shaping reality.

We believe that Zygmunt Bauman’s conception of the ‘consumerist revolution’ can shed light on the problems involved in the use of technology in today’s world. Postphenomenology, concentrating mainly on the concept of mediation, does not seem to pay due attention to the contemporary consumer’s subjectivity. The mediation view is occupied with the problem of how innovations shape our world, but not with the transformation of technology use into a consumer’s activity. In other words, the postphenomenological frame, concentrating on use and avoiding the notion of alienation, can hardly enable us to understand the change of the human being from a tool-maker into a tool-user, or simply a consumer with ungratified desires, in a ‘liquid modern’ society. The globalization of markets gives a new shape to human subjectivity, confining it to a consumer’s experience that finds its expression in Bauman’s formula of ‘one-upmanship’: the will to own goods not exclusively as tools, but as signs of one’s status. What Adam Smith in the 18th century called ‘baubles and trinkets’ may indeed be useful for improving the quality of life, but consumption assumes a different colour in 21st-century consumerism as it becomes a matter of ‘one-upmanship’ that shapes our subjective world in a morally problematic way through the mediation of commodities.

In this presentation we will emphasize the need for a critique of mediation theories by underlining the significance of a normative evaluation of the relations between consumerism and technology. We will argue that the predominance of the descriptive view may stand as an obstacle on the way to a critical assessment of a technologically mediated existence.

Jilles Smids (Eindhoven University of Technology, The Netherlands)
Ought we accept ‘blame’ from a robot?

The question of whether robots can blame us humans is largely unexplored. Moreover, if robots can indeed blame us, how ought we to respond to their blame? In my presentation, I will argue that if a robot accurately identifies our wrongdoing and lets us know in a ‘blaming manner’, we often have a moral duty to accept this blame by acknowledging our fault and committing to doing better next time. Thus, I will argue for a qualified affirmative answer.

I start by explaining why we have good reason to investigate this cluster of questions. Blame is a central feature of our practices of holding each other morally responsible. Blame helps us to get along with each other and to maintain a healthy moral community (Coates & Tognazzini, 2012; McGeer, 2013). As a consequence, if robots enter our social domain and take the place of some fellow human but would never blame us, we most likely lose some of blame’s positive and indeed essential potential.

I then argue that robots can be designed such that, at the behavioral level, they can blame us in ways very similar to how humans blame (cf. Danaher, 2019; Malle, 2015). However, they probably cannot in fact blame us, for example because robots cannot experience the emotional states that, according to the reactive attitudes approach, are constitutive of blame.

However, if the robot informs you about a wrongdoing by way of ‘blaming’ you, then you know that you did wrong, or rather, your wrong is brought to your attention. If you have no valid excuse or justification to offer, then knowing your wrong ought to be sufficient for you to accept the ‘blame’. Subsequently, I discuss three objections to this claim. First, it could be objected that we are not answerable to robots, because they do not belong to our moral community. One of my responses employs the idea of robots ‘blaming’ as a third party, on behalf of the victim of some wrong. A second objection turns on the idea that robots, like hypocrites, lack the standing to blame, because they do not care about the values they invoke. My response is twofold: first, in the case of robots this is no fault, and second, I extend an argument by Bell for the view that hypocrites, too, can have the standing to blame (Bell, 2013; cf. also Fritz & Miller, 2018). A third objection claims that robots cannot engage in the sort of dialogue between blamer and blamee that often follows the expression of blame. In response, I appeal to robotic capacities for dialogue based on AI and machine learning techniques. Though I admit that, at least currently, this is an important limiting factor, which sometimes justifies neglecting robotic blame.

I conclude by bringing my normative position on robot ‘blame’ to bear on some first empirical findings with regard to human responses to blaming robots. It appears that we have difficulties in giving uptake to robotic blame (Kaniarasu & Steinfeld, 2014; You, Nie, Suh, & Sundar, 2011), and some robotics researchers advise limiting robot blame as much as possible (Groom, Chen, Johnson, Kara, & Nass, 2010). Consequently, it appears that social robots will be disruptive technologies whether or not they blame us. If they do not ‘blame’ us, we lose some of blame’s positive potential, but if they do ‘blame’ us, we probably fail to respond in the way we ought to. This is a weighty reason to reconsider drawing many social robots into our social domain (Danaher, 2019; cf. Gunkel, 2018).

  • Bell, M. (2013). The Standing to Blame: A Critique. In D. J. Coates & N. A. Tognazzini (Eds.), Blame: Its Nature and Norms.
  • Coates, D. J., & Tognazzini, N. A. (Eds.). (2012). Blame: Its Nature and Norms (1st edition). Oxford; New York: Oxford University Press.
  • Danaher, J. (2019). Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. Science and Engineering Ethics.
  • Fritz, K. G., & Miller, D. (2018). Hypocrisy and the Standing to Blame. Pacific Philosophical Quarterly, 99(1), 118–139.
  • Groom, V., Chen, J., Johnson, T., Kara, F. A., & Nass, C. (2010). Critic, Compatriot, or Chump?: Responses to Robot Blame Attribution. Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction, 211–218.
  • Gunkel, D. J. (2018). Robot Rights. Cambridge, MA: The MIT Press.
  • Kaniarasu, P., & Steinfeld, A. M. (2014). Effects of blame on trust in human robot interaction. The 23rd IEEE International Symposium on Robot and Human Interactive Communication, 850–855.
  • Malle, B. F. (2015). Integrating robot ethics and machine morality: The study and design of moral competence in robots. Ethics and Information Technology, 1–14.
  • McGeer, V. (2013). Civilizing Blame. In D. J. Coates & N. A. Tognazzini (Eds.), Blame: Its Nature and Norms.
  • You, S., Nie, J., Suh, K., & Sundar, S. S. (2011). When the Robot Criticizes You…: Self-serving Bias in Human-robot Interaction. Proceedings of the 6th International Conference on Human-Robot Interaction, 295–296.

Patrick Smith (University of Twente, The Netherlands)
Who may geoengineer: Global domination, revolution, and solar radiation management

This paper uses a novel account of non-ideal political action that can justify radical responses to severe climate change injustice, including and especially deliberate attempts to engineer the climate system in order to reflect sunlight into space and cool the planet. In particular, it discusses the question of what those suffering from climate injustice may do in order to secure their lives, property, homelands, and patrimony in the face of severe climate change impacts. Using the example of risky geoengineering strategies such as sulfate aerosol injections, I argue that nations that are innocently subject to severely negative climate change impacts may have a special permission to engage in large-scale yet risky climate interventions to prevent them. Furthermore, this can be true even if those interventions wrongly harm innocent people. The core idea is that risky geoengineering can be justified as a species of ‘dirty hands’ supreme emergency, where the political dynamics of a particular threat make it permissible to engage in actions that would normally be impermissible. These actions would normally be impermissible because the specific features of risky climate interventions make them ill-suited as standard proportional and discriminate acts of self-defense. However, the nature of climate injustice lends itself to the idea that, in the name of resisting oppression, political agents may—if they are willing to accept the consequences—engage in resistance in order to change the underlying structural and institutional features that make climate injustice possible. One might call this type of action ‘uncivil resistance’ or, as this paper suggests, ‘revolution.’ I suggest that these revolutionary or uncivilly disobedient actions—in virtue of the dirty hands created by doing something that is normally impermissible—have a complicated justificatory structure. First, the actor must be subject to severe injustice and lack the means to escape that injustice short of violating the legitimate entitlements of others. Second, if the revolutionary vanguard violates those entitlements, then they must aim at and successfully achieve institutional reforms that make some progress towards resolving structural injustice. Finally, the revolutionary coalition must be inclusively organized by distributing the benefits, costs, and decision-making authority equitably, with special attention paid to the classes of individuals who will be victims of revolutionary violence. I then demonstrate how a country—innocent of climate change but subject to its impacts—might satisfy these requirements.

Andreas Spahn (1) and Marcus Düwell (2) ((1) Eindhoven University of Technology, (2) Utrecht University, The Netherlands)
Behaviour change technologies, nudging and human dignity

Recent developments in ICT and AI have made it possible to change human behavior with the help of technologies. These Behaviour Change Technologies (BCTs) emerge in a variety of fields, ranging from healthcare to sustainability and public safety (to name just a few examples). These technologies aim at changing behavior by influencing how we act. They have the potential to disrupt our view of what it means to be a human agent, since agency will be distributed between technology and humans.

Some authors claim that employing BCTs, such as persuasive technologies or technological nudging, is necessary to meet the challenges of the future, particularly the ecological challenges (e.g. Sunstein, Lehner et al 2016, Kasperbauer 2017). However, critics have doubted that recent nudging-discussions are compatible with the current normative order based on respect for the equal dignity of human beings (C. McCrudden/J. King 2015). The aim of the paper is to investigate to what extent the concept of ‘nudging’ is appropriate in the light of different conceptualisations of human dignity. Of particular relevance is the question to what extent nudging is compatible with human autonomy and other elements that are related to the notion of human dignity.

Methodologically, the paper uses strictly conceptual, philosophical tools. It may turn out that the critiques of the nudging concept are correct in claiming that nudging fails to meet central moral requirements that relate to respect for human dignity. Nevertheless, it may still be morally required to think about appropriate ways to design the human action space so that human beings are enabled to meet their moral demands. If that conclusion were correct, it would have the important implication that the discourse on nudging would have to be reframed in order to be appropriate for the normative requirements of our current moral-political order. More specifically, we will argue that the concept of human dignity allows for a morally informed classification/typology of BCTs according to whether they undermine, support or are neutral with regard to human dignity and autonomy. Finally, we suggest a principle of complementarity and proportionality for the political debate on nudging: the increasing use of nudges needs to be complemented by a reflection on their long-term effects and their moral acceptability that is proportional to the threat they pose to human autonomy and dignity.

The paper contributes to the conference themes ‘The Human Condition’ and ‘The Future of a Free and Fair Society’. In particular, we will argue that human dignity is a central concept for our moral self-understanding: persuasive technologies and nudges raise questions concerning (moral) agency, autonomy and responsibility. With regard to the theme ‘Future of a Free and Fair Society’, we claim that human dignity is also a key concept for understanding the moral order of our current legal and moral-political self-understanding.

Edward Spence (University of Sydney, Australia)
Media corruption in the age of informational technology

This paper will examine and explore digital information and media corruption (media corruption in short) by reference to the Myth of Gyges in Plato’s Republic. Using the Myth of Gyges I will demonstrate that the characterising features of other types of corruption (e.g. police, political, financial, and sport corruption, among others) are on closer examination also present in what I refer to in this paper as media corruption.

The overall objective of this paper is to identify and critically analyse and evaluate some of the different types of media corruption, the ways in which they are caused, and the contexts in which they are manifested in current media environments and practices, including the internet platforms by which such media corruption is enabled not by accident but by design. Whereas a lot has been written on other forms of corruption, including corporate, political, police, sports and financial corruption, media corruption has been largely overlooked. Although identified as unethical within the general corpus of media ethics, practices such as cash-for-comment, media release journalism (including video news releases (VNRs)), staged news, advertorials, infomercials and infotainment, and most recently fake news and information corruption more generally through the web-enabled, invasive and unauthorised practices of Google and Facebook, among others, have not been identified and defined as corrupt practices. Insofar as they have been labelled as instances of corruption, there has been no systematic theoretical study of why and how such practices constitute corruption. The reason for this oversight is partly that the concept of corruption itself is not well understood or clearly defined, or, when defined, it is defined too narrowly in terms of corporate financial misfeasance or the abuse of political and public office for private gain, as in the case of Watergate.

Starting with a conceptual and philosophical analysis of corruption in general, this paper will provide an applied philosophical model of corruption that will be utilized to first identify the major types of corruption that arise in the media, including types of digital media corruption that involve the corruption of information. Some key case studies will provide a practical illustration and contextualisation of some of the major types of media and information corruption.

Due to time constraints, this paper will focus primarily on exploring the widely reported phenomenon of information corruption through fake news, as well as the ethical and legal breaches by Google and Facebook reported by scholars such as Shoshana Zuboff in her book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019) and Frank Pasquale in The Black Box Society (2015), as well as in other relevant journal and media publications. My aim is to show that on closer examination such ethical and legal breaches might also be construed as a form of information and media corruption. And to the extent that these practices undermine our democratic processes, such as electoral processes, and the institutions on which they are founded, they also constitute institutional corruption.

Marc Steen, Josephine Sassen and Kees Van Dongen (TNO, The Netherlands)
Wise policy making: An interactive presentation

We propose an interactive presentation, in which the audience can experience several elements of a prototype we are developing in our project on ‘Wise Policy Making’.

Our project aims to promote wisdom in policy making and, ultimately, to promote people’s wellbeing. We have organized trans-disciplinary research and innovation (Cooke and Hilton, 2015) and worked on a prototype to facilitate collaboration between policy makers and citizens. Content-wise, we touch on all three sub-themes of the conference: the Human Condition; a Free and Fair Society; and Human Intervention. We aim to promote wisdom in people, in society, and in policy making.

This conference focuses on the disruptive effects of emerging technologies on society, culture and politics. We propose to turn the arrow of causality (Pearl and Mackenzie, 2018) 180 degrees and ask: What if we had a *socio-cultural-political* intervention to disrupt the ways in which we develop and deploy these technologies, so that we can use them to create societies in which people can flourish?

We are working on a prototype: a socio-cultural-political intervention, inspired by ‘Theory U’ (Scharmer, 2018), to enable policy makers and citizens to collaborate, hands-on, on creating and experiencing different policy options. The outcome is a well-thought-through, elaborated, cultivated version of the ‘vox pop’. We envision a three-day program, and various tools to facilitate collaboration, both on the level of process and on the level of content.

– The first day is for building a team and for cultivating communication and collaboration, with methods like ‘Deep Listening’ (to cross disciplinary boundaries), ‘Team Diamond’ (for communication and collaboration) and Stakeholder Interviews (to understand the context and promote empathy).

– The second day is for creating scenarios of different policy options, and for experiencing these scenarios. We have methods to help participants *create* scenarios, e.g., by quantifying their values, concerns and interests, and methods to help participants *experience* these scenarios, e.g., by qualifying their values, concerns and interests. These methods aim to promote perception and empathy, and are part of Interactive Prototyping (to enable participants to move between problem-setting and solution-finding, iteratively).

– The third day is for wrapping-up and planning follow-up activities. Maybe the participants develop consensus: ‘Taking everything into account, listening to different people and their interests, we jointly came to believe that option A is better than option B’. Or they deliver insights in how different people experience different future scenarios: ‘We have people P with values X; they value option A. We also have people Q with values Y; they value option B’.

Taylor Stone (Delft University of Technology, The Netherlands)
The streetlights are watching you: A historical perspective on values in smart lighting

Emerging ‘smart city’ trends are spurring a new generation of streetlights, with lampposts being fitted with sensors, cameras, and a host of other novel technologies aimed at monitoring and data collection. While these innovations may offer improvements in efficiency and safety, they raise concerns about privacy, surveillance, and power dynamics. More fundamentally, such smart systems seemingly extend the technical functions and ontological boundaries of streetlights. No longer simply providing illumination, they actively monitor their environment and those who inhabit it, creating a vast network of nodes encompassing urban spaces. Combined, the novel functions and capabilities of smart streetlights arguably create a new terrain of moral concerns. From such a vantage point, this technology acts as a socially disruptive force, profoundly altering the public spaces of cities and those who inhabit them.

However, the history of night-time lighting offers a different perspective. Without denigrating contemporary concerns, I will present a contrary argument: that these seemingly novel issues represent a continuity with the values fundamental to the very notion of public lighting. I will show that debates over social order at night – and the resultant tension between safety, privacy, and surveillance – have been a recurring theme for centuries. Streetlights have long been utilized as a form of policing and perceived as a symbol of authority, creating ongoing tensions between control and liberation in urban nightscapes. While offering a significant improvement in accuracy, smart streetlights embody a continuity of values – and value tensions – that can be traced back to the origins of public lighting in the 17th-18th centuries. But I will not argue that these values are static. Rather, they are a recursive facet of night-time illumination, and inextricably intertwined with the foundations of this socio-technical system. Contemporary innovations represent new means of realizing these long-held goals, just as resistance to them offers fresh versions of protest and critique.

A historical perspective sheds light on the origins and evolution of values in (smart) streetlights. More broadly, this case study puts the apparent moral and ontological novelty of smart cities into perspective. To understand how emerging smart technologies can truly serve as a disruptive force for social good, we must first confront the complex role of inherited values in shaping discourse. Ultimately, this analysis is meant to serve as a useful step towards problematizing, understanding, and taking action on the ethics of disruptive (urban) technologies.

Emily Sullivan (Delft University of Technology, The Netherlands)
Explanatory functions of algorithms in healthcare

Doctors are not simply in the business of treating patients; they are also in the business of explaining. Doctors explain healthcare prognosis, treatment plans, and procedures to patients. Patients then freely ask questions and gain further explanations and elaborations. New machine learning technologies have the potential to increase diagnosis accuracy and efficiency; however, the role of explanation remains unclear. Machine learning models are increasingly complex and largely opaque in their inner workings, which calls even the possibility of explanation into question. To what extent does the introduction of opaque machine learning models fundamentally change or corrupt the explanatory relationship between doctor and patient?

In order to answer this question, it is necessary to first consider the function that machine learning models could have in the medical diagnosis pipeline. Is the function to supplement doctors’ knowledge or is the function of the model to off-load expertise? I will argue that it is in part the function of these models in the medical diagnosis pipeline that determines whether the use of such models changes or corrupts the explanatory relationship between doctor and patient.

More than this, it is not simply the function of the models themselves, but also the function of medical explanation, that matters. Is the function of medical explanation primarily to build trust, convey information, or enable understanding? Again, it depends on what the function of medical explanation is (or what the function of explanation should be) that determines whether machine learning models change or corrupt the explanatory relationship between doctor and patient.

To make this argument, I will draw on work in epistemology on the function-first approach to explanatory concepts (Craig 1990, Hannon 2019). I will consider two possible functions that machine learning models can play in medical diagnosis practices: 1) that models replace expertise, and 2) that models supplement a doctor’s existing expertise. I will then consider three explanatory functions of medical explanations: 1) establish trust, 2) enable understanding, or 3) aid practical decisions. I will draw on work in philosophy of science on explanation (e.g. Strevens 2008, Sullivan and Khalifa 2019), work in computer science on explanations of expert systems (e.g. Nunes and Jannach 2017, Tintarev and Masthoff 2007), and work on medical ethics (e.g. Epstein 2010, Hall et al. 2012) to articulate what it would mean for explanation to fulfill these functions, and whether each or any of these explanatory functions get corrupted with the use of machine learning models in the medical diagnosis pipeline.

In the end, when machine learning models are a supplement to already existing expertise, these technologies can preserve the desirable explanatory relationship between doctors and patients. However, when machine learning models replace or off-load expertise, the explanatory relationship between doctor and patient becomes corrupted because understanding and trust are no longer achievable.

Shelly Tsui (Eindhoven University of Technology, The Netherlands)
Empowerment of the new citizen-subject: The ethics of living labs on citizenship transformation

Living labs are an increasingly utilised tool through which citizens become involved in experimental technological innovations to solve societally relevant problems (Bergvall-Kåreborn & Ståhlbröst, 2009; Voytenko et al., 2015). Including end-users in the innovation process of a product or service has become politicised: according to the European Commission (n.d.), it is seen as a potential policy tool that can improve engagement with diverse stakeholders, especially the public. This would then lead to a better alignment of innovation’s aims and societal needs, and to ‘empowered’ and innovation-literate stakeholders who can effectively participate in issues of science and technology innovation (Stilgoe, Owen & Macnaghten, 2013; European Commission, n.d.).

Thus, the living lab approach has become a social tool through which citizens are expected to become part of a collective way of dealing with and solving societal problems. With this comes the clear transformation of the citizen into an active citizen-subject: a citizen who participates in the living lab while, at the same time, being a subject of the lab. This raises several ethical issues, such as the new roles and expectations for citizens with regard to how to participate. For example, the living lab runs the risk of offering mere participation, i.e., an “empty ritual of participation and having [no] real power needed to affect the outcome of the process” (Arnstein, 1969). This could lead to disenchantment with such initiatives and discourage future participation, but it also treats citizens as marginal yet necessary parts of a broader innovation process.

To illustrate these worries, one can look to “Jouw Licht op 040”, a public procurement of innovation initiative established by the municipality of Eindhoven. Together with Heijmans, Signify, researchers, and citizens, the municipality aims to co-create smart technological urban solutions (e.g. lighting) in the city center to make Eindhoven “prettier, safer, and more interesting” (Jouw Licht Op 040, n.d.) by creating living labs in neighbourhoods. However, citizens are primarily relegated to the role of information and feedback providers. Activities (boot-camps and information sessions) are organized from the perspective of experts on how they envision participation in the living lab. These practices reflect implicit and one-sided ideas of participation. Citizens are at risk of becoming boxed into roles and functions that limit their ability to participate, which can have consequences not only for the project, but also for future initiatives (Felt & Fochler, 2010; Wynne, 2007).

If the hope is for living labs to realise the Commission’s ideal vision of public engagement, and more generally the potential of the approach to better align innovation and societal needs, then there is a need to conceptualise how living labs can empower their citizen stakeholders and to be clear on how citizens can be empowered. I therefore propose four dimensions of empowerment that should be present in order to ensure that the ethical issues for citizens are well accounted for: knowledge development, allowing for ownership, taking and delegating responsibility, and allowing for collective decision-making.

Pieter Vermaas (Delft University of Technology, The Netherlands)
Normative design as an ethically disruptive response

One of the responses to the fast pace of change created by new disruptive and regular technologies is normative design approaches that protect or realise moral and societal values. Such approaches will not block the fast pace of change, yet – so the argument goes – they ensure that designers preserve our values when creating new technology applications. In this contribution I argue that these normative design approaches, such as value-sensitive design, design for values, social design and nudging, are themselves disruptive by challenging the framework of engineering ethics we have already put in place to guide new technology applications. Engineering ethics incorporates the obligation for designers to be transparent to stakeholders about the values designed for, while normative design approaches tend not to offer this transparency.

To make my argument I take three elements from engineering ethics: engineering codes of conduct, the obligation to acquire informed consent from stakeholders, and the obligation to take responsibility. These three elements support the moral obligation for normative designers to be transparent to stakeholders about the values designed for. I then focus on two normative design approaches: social design (Marzano 2007) and nudging by the design of choice architecture (Thaler and Sunstein 2008). By discussing cases I establish the tendency in social design and nudging not to offer value transparency to stakeholders.

I discuss two counterarguments. The first is that informing stakeholders about the values designed for may hamper the effectiveness of normative design. The second counterargument is that engineering ethics does not apply to normative design. I agree that adding transparency as an extra value requirement may make normative design more challenging, yet not impossible. Design research and medicine provide ways to offer this transparency even in cases where it may damage the effectiveness of research or treatment. I also agree that engineering ethics has been developed for engineering, and only to a lesser degree applied to product design and design thinking. Hence, when social design is taken as an offshoot of product design, it is not immediately clear that engineering ethics applies to social design. For other normative design approaches this reasoning does not hold. Value-sensitive design (Friedman et al. 2006) and design for values (Van den Hoven et al. 2015) emerged out of engineering design, which places them within the range of engineering ethics.

But even if one accepts these counterarguments, there is a good reason that normative design should comply with engineering ethics or minimally be transparent about the values it designs for. If value-sensitive design, design for values, social design and nudging are to safeguard our values in the fast pace of change, stakeholders should trust normative design. And a lack of value transparency can undermine that trust.

References

  • Friedman, B., Kahn, P. H. Jr., and Borning, A. (2006). Value sensitive design and information systems. In P. Zhang & D. Galletta (Eds.), Human-Computer Interaction in Management Information Systems: Foundations (pp. 348–372). Armonk, NY: M.E. Sharpe.
  • Marzano, S. (2007) Flying over Las Vegas. Koninklijke Philips Electronics NV.
  • Thaler, R. H., and Sunstein, C. R. (2008). Nudge: Improving Decisions About Health, Wealth, and Happiness. New Haven, CT: Yale University Press.
  • Van den Hoven, J., Vermaas, P. E., and Van de Poel, I. (Eds.). (2015). Handbook of Ethics, Values and Technological Design. Dordrecht: Springer.

Pieter Vermaas and Udo Pesch (Delft University of Technology, The Netherlands)
Dilemmas in design thinking: Revisiting wicked problems

In the 1970s, Rittel and Webber (1973) introduced the notion of ‘wicked problems’ to describe the class of societal problems that were hard if not impossible to solve within the then dominant paradigm of systems-based thinking. The failure of this paradigm to address the profound problems of that time gave rise to a situation in which society developed a general distrust of professionals. This distrust has been countered by the development of methods and approaches that fully acknowledge the status of wicked problems, especially in the form of ‘design thinking’ (Buchanan, 1992). However, we will argue in this paper that wicked problems appear to be resurfacing in society, creating new patterns of distrust in professionals that challenge this confidence.

Rittel and Webber presented ten characteristics of wicked problems, emphasizing their normative, open-ended and complex nature. The challenge posed by these problems has been taken on by design thinking, reviving the confidence of professionals to take on societal problems.

Design thinking explicitly takes on the challenge posed by wicked problems, and sees this challenge not as a hampering factor but as an opportunity to develop creative solutions by providing a reinterpretation of a plurality of societal problem definitions. The chosen solution is seen as a temporary one, as the problem will co-evolve with this solution.

The emergence of design thinking dovetailed with other trends in managing societal problems, such as the privatization of public services, the praise for ‘star designers’, the politicization of policy-making, the emphasis on participatory forms of decision-making and the idea of having products and services that are ‘permanently beta’.

The promises of design thinking can be qualified by revisiting Rittel and Webber’s original ten characteristics of wicked problems. It then turns out that not all of the characteristics are satisfactorily taken on. The dilemmas for professionals have not been completely eradicated, which may explain the emergence of the new forms of distrust that we observe.

References

  • Buchanan, R. (1992). Wicked problems in design thinking. Design issues, 8(2), 5-21.
  • Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155-169. doi:10.1007/bf01405730