Ethical software

The logic of ethics

Innumerable computer systems, such as those used in hospitals, take decisions that have an ethical dimension. The question is: how can such decisions be responsibly implemented in software?

The Netherlands is well prepared for disasters. Fire services, police forces and other emergency services possess extensive contingency plans for dealing with catastrophes. In chaotic situations people soon become confused, either because they are bombarded with information or because they lack information. In addition, there can at such times be hundreds of constantly changing factors which people simply cannot track and comprehend. It is therefore a good thing that they are supported by information systems programmed beforehand in such a way that they at least send information to the right place.
“When computers also help to guard crucial information flows, privacy and other important matters, the question is how those moral values can be translated into computer terms in a reliable fashion”, says Professor Jeroen van den Hoven. “There may well be evidence of complicated political, legal and ethical considerations, but the computer only recognizes ones and noughts.”
In other words, the computer is not conversant with the subtle perceptions of the world of humans. That is why software that deals with delicate matters will have to be ethically transparent: the moral considerations that form the basis for various decisions that are taken must be absolutely clear. Programmers therefore have to ensure that the relevant assumptions and opinions remain recognizable in their code and are not shrouded in obscure formulas and models. The ways and means of helping programmers to do that are still in their infancy and are, for instance, being developed within the framework of our 3TU Ethics organization.
“Logical consequences can also emanate from available information, corollaries that individuals are unaware of simply because the situation is so complicated”, van den Hoven explains. “For instance, for a nurse it will often not be perfectly clear which sections of an electronic file she has access to and, if she does have access, whether she may only read the information or whether she can also forward it to someone else and amend the data. The system will have to regulate that, but it will also have to justify why she is unable to gain access to certain information.”
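The nurse's example can be made concrete with a small sketch. The code below is not from the article and the roles, file sections and policy rules in it are invented for illustration; it merely shows one way an access-control check could keep its moral considerations readable and return a human-language justification alongside every decision, rather than burying them in an opaque model.

```python
# Illustrative sketch of "ethically transparent" access control.
# Roles, sections and rules are hypothetical examples, not a real hospital policy.
from dataclasses import dataclass

# Each entry states, in one readable place, which actions a role may
# perform on a section of the electronic patient file.
PERMISSIONS = {
    ("nurse", "medication"): {"read", "amend"},
    ("nurse", "psychiatric_notes"): set(),          # deliberately no access
    ("physician", "psychiatric_notes"): {"read", "amend", "forward"},
}

@dataclass
class Decision:
    allowed: bool
    justification: str  # the policy reason, expressed in human terms

def check_access(role: str, section: str, action: str) -> Decision:
    """Decide whether `role` may perform `action` on `section`,
    and always explain why."""
    allowed_actions = PERMISSIONS.get((role, section), set())
    if action in allowed_actions:
        return Decision(
            True,
            f"Policy grants the role '{role}' the right to {action} "
            f"the section '{section}'.")
    return Decision(
        False,
        f"Policy withholds '{action}' on '{section}' from the role "
        f"'{role}' to protect patient confidentiality.")

if __name__ == "__main__":
    decision = check_access("nurse", "psychiatric_notes", "read")
    print(decision.allowed, "-", decision.justification)
```

Because every rule is a plain table entry and every refusal carries its reason, both the programmer and the nurse can see why access was denied, which is exactly the kind of traceability the passage calls for.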
Computer systems therefore have to be able to convert their digital reasoning into human terms, so that not only the programmers but also the users can understand how they work. That is important not only for working efficiently with such systems, but also for being able to untangle who is responsible when the systems fail.
One need only think of the countless electronic systems deployed these days in civil aviation to guide planes, including during take-off and landing. If something goes wrong, whose fault is it? The pilot, because he handed control to the system; the information architect who designed the whole system; or the manager who drew up the policy of relying as much as possible on the automatic pilot? There is never an unequivocal answer to such a question, but it at least helps if we know exactly what the system does and why it does it.