Emotions & Technological Risks: Emotions as a Normative Guide in Judging the Moral Acceptability of Technological Risks

This project will argue that we need emotions in order to make rational decisions about the moral acceptability of technological risks.

Technological risks directly give rise to ethical questions. Risky technological projects can affect the wellbeing of others. When is it justified to impose dangers on others? And how should we judge whether a risk is morally acceptable?

Empirical research has shown that people rely on emotions in making judgments concerning risks (Slovic 1999; Finucane et al. 2000). Examples of technological risks that spark heated and emotional debates are cloning, GM foods, and nuclear energy. Many people are afraid of the possible unwanted consequences of such technologies. However, this empirical finding does not yet answer the normative question that is the main question of this project: do we need emotions in order to judge whether a risk is morally acceptable? This question has direct practical implications: should engineers, scientists, and policy makers involved in developing risk regulation take the emotions of the public seriously or not?

In answering these questions, rationalist philosophers would argue that the emotions of the public should be ignored because they are irrational. Humeans, by contrast, would argue that even though emotions are irrational, they should nevertheless be part of the decision-making process. Departing from both of these approaches, this project will start from a cognitive theory of emotions, according to which emotions are necessary for rational practical decision-making (cf. Damasio 1994). The project will argue that emotions are an indispensable normative guide in judging the moral acceptability of technological risks.