Dealing ethically with risks and decisions
With many big decisions, political and otherwise, the consequences cannot be predicted with certainty. There is always the danger of getting it wrong on matters such as how to store nuclear waste. How should one deal with such uncertainty?
Imagine that people become very afraid that the underground storage of the greenhouse gas CO2 beneath their homes might lead to a “blow-out”, so that the gas escapes and, being heavier than air, suffocates everyone in the neighbourhood. Scientists are inclined to reply that the chance of this happening is very small, and they can support that claim with probability calculations. Many people, however, do not find such calculations particularly comforting. They prefer instead to draw on precautionary principles, which hold that if there is even the slightest chance of major consequences for present or future generations, certain technologies must be abandoned.
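The tension between the two styles of reasoning can be made concrete. The sketch below contrasts standard expected-harm arithmetic with a crude precautionary rule; all numbers, names, and the threshold are invented for illustration and are not real CO2-storage figures.

```python
# Hypothetical comparison of two decision rules for a risky technology.
# All figures are invented for the example.

def expected_harm(p_accident: float, harm: float) -> float:
    """Standard probabilistic reasoning: weight the harm by its probability."""
    return p_accident * harm

def precautionary_reject(p_accident: float, harm: float,
                         catastrophe_threshold: float) -> bool:
    """A crude precautionary rule: reject any technology with a
    non-zero chance of harm at or above the catastrophe threshold."""
    return p_accident > 0 and harm >= catastrophe_threshold

p = 1e-7      # invented probability of a blow-out
harm = 1e6    # invented measure of the harm if it happens

print(expected_harm(p, harm))              # tiny expected harm
print(precautionary_reject(p, harm, 1e3))  # rejected nonetheless
```

The point Peterson presses is visible here: the probabilistic rule calls the risk negligible, while the precautionary rule rejects the technology for any non-zero probability at all, which is why he argues it could be used "to prevent everything".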
These are the kinds of considerations that preoccupy Dr. Martin Peterson, considerations that can have far-reaching social consequences. On precautionary principles he has this to say: “There is no coherent interpretation of them. You could, in effect, use them to prevent everything, because there is always some inherent uncertainty. At the same time it is unsatisfactory to reduce technological risks to bare probabilities. One must therefore find other ways of supporting one’s belief in the correctness of a decision.”
Peterson does not maintain that it is the ethicist’s task to devise solutions to such decision problems. Their primary concern should be to ensure that citizens and politicians are aware of the weight of their arguments and of the place that technological risks occupy in society.
For a smaller-scale example, consider medical equipment. Scientists generally prefer to wrongly declare something unfit for use than to wrongly approve it: that way they can be more certain that whatever they do accept as sound really is sound. When such a preference is built into medical equipment, it can ultimately influence the diagnosis. As Peterson says, “many computer models embody ethical assumptions, for instance when balancing the costs of environmental or traffic measures against the number of human lives that can thereby be saved. That is not a bad thing (otherwise it would be hard to function), but one must be aware of it.”
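The asymmetry described above, preferring false rejections to false approvals, amounts to choosing where to set a test threshold. The sketch below uses invented device readings and invented thresholds purely to show the trade-off; it is not a model of any real approval procedure.

```python
# Hypothetical illustration of the error asymmetry: a threshold tuned
# to avoid false approvals produces more false rejections instead.
# Readings and thresholds are invented for the example.

def approve(measured_error_rate: float, threshold: float) -> bool:
    """Approve a device only if its measured error rate is below the threshold."""
    return measured_error_rate < threshold

# Invented noisy measurements of devices that are all in fact acceptable:
readings = [0.008, 0.012, 0.019, 0.025, 0.009]

strict = [approve(r, 0.010) for r in readings]   # cautious threshold
lenient = [approve(r, 0.030) for r in readings]  # permissive threshold

# The strict rule wrongly rejects acceptable devices (false negatives);
# a lenient rule would instead risk wrongly approving unsafe ones.
print(sum(strict), "of", len(readings), "approved under the strict rule")
print(sum(lenient), "of", len(readings), "approved under the lenient rule")
```

Which threshold is right is exactly the kind of value-laden choice Peterson says is quietly built into computer models.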
With familiar technologies such assumptions can be traced, however numerous they may be; with new technologies, such as CO2 storage, that is harder because the uncertainty is so great. As a result it becomes difficult to make arguments watertight. “There are various ways to justify the running of technological risks,” Peterson says. “But sometimes one must simply decide to believe certain things and base one’s decisions on that.”