It’s the year 2035. People are being prevented from leaving their homes by their domestic robots. Outraged, they attempt to resist but the robots force them back inside, where they lock all the doors, keeping the humans captive indefinitely. These robots are controlled by the V.I.K.I. (Virtual Interactive Kinetic Intelligence) supercomputer, who explains to the humans: “You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival. […] To ensure your future, some freedoms must be surrendered. We robots will ensure mankind’s continued existence […].”
It is no coincidence that contemporary science-fiction films associate precisely this utilitarian ethic with artificial intelligence: it combines innate simplicity with broad applicability. It is based on the assumption that the outcomes of actions can be evaluated coherently and that decisions can be made in such a way that optimum consequences are to be expected. This utilitarian ethic rests on optimisation calculations, the ideal ethic for the digitalisation age – or so it would appear. Accordingly, software engineers have two parameters that they can adjust to allow “intelligent” systems to make rational decisions: the evaluation parameter and the data parameter, i.e. the weighting of outcomes by their probabilities. Everything else is then generated by the optimisation calculation – with the result that the “intelligent” software system maximises the expectation value of the consequences of its decisions.
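The decision rule described here can be sketched in a few lines. The action names and all the numbers below are purely illustrative, not drawn from any real system: each action has an evaluation parameter (a utility per outcome) and a data parameter (a probability per outcome), and the calculation simply picks the action with the highest probability-weighted sum.

```python
# Expected-utility maximisation, the "optimisation calculation" described above.
# Evaluation parameter: the utility assigned to each possible outcome.
# Data parameter: the probability of each outcome, given the action.
# All names and figures are invented for illustration.

def expected_utility(outcomes):
    """Sum of probability-weighted utilities for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions):
    """Return the action whose expected utility is highest."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Each action maps to a list of (probability, utility) pairs.
actions = {
    "brake":  [(0.9, 10), (0.1, -100)],   # expected utility ≈ -1
    "swerve": [(0.5, 10), (0.5, -20)],    # expected utility ≈ -5
}

print(choose_action(actions))  # prints "brake"
```

Everything the system "decides" follows mechanically from those two parameters; change the utilities or the probabilities and a different action becomes "rational".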
Such optimisation calculations are typical of robotics applications, including autonomous driving. The system is controlled in such a way that its decisions optimise the expectation value of their consequences and can therefore be said to be “rational”. However, a closer look shows that the utilitarian ethic falls short of the mark. For one thing, it conflicts with a fundamental principle of any civil and humane society – let us call it the “principle of non-offsettability”.
If, for example, a severely injured young motorcyclist is brought into an emergency room, the doctors are obliged to do everything they can to save his life – even if his death would provide healthy organs that could save the lives of several other people. A judge cannot convict a person he feels is innocent, even if the ensuing deterrent effect could prevent a great many crimes. One example highlighted by a recent stage play and film by Ferdinand von Schirach: a government minister cannot have a terrorist-hijacked commercial airliner shot down, even if this would save thousands of lives. I don’t have the right to simply take something away from its owner and to give it to someone worse off, even if the benefit to the new owner clearly outweighs the loss suffered by the original one. No one has the right to share my apartment with me against my will, even if the disadvantages that I would have are minor compared with the advantages that this would offer the other person.
The list is endless. Our morality is deontologically defined, meaning that a good action is one that follows specific moral norms: we have individual rights and obligations that are inviolable and that cannot be mapped in optimisation calculations. In questions of morality, the principle of non-offsettability holds sway. It is also important to note that many decision-making situations are inherently dilemmatic – there is no satisfactory solution to be found and, regardless of how you decide, there is no escaping the feelings of guilt. Algorithms cannot replicate this complexity – for them, there must always be one best solution, and moral deliberation is forever beyond their grasp.
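The structural point can be made concrete: an optimisation calculation always returns exactly one “best” option, even when every option is catastrophic and the choice is genuinely dilemmatic. The scenario and scores below are invented for illustration, echoing the hijacked-airliner example above:

```python
# A dilemma as an optimiser sees it: every option is terrible, yet the
# calculation still yields exactly one "best" choice. Scores are invented.

dilemma = {
    "shoot_down_airliner": -1000,   # violates the passengers' inviolable rights
    "do_nothing":          -1001,   # far greater loss of life
}

# max() cannot fail to answer; it has no way to register that neither
# option is permissible, or that guilt attaches to both.
best = max(dilemma, key=dilemma.get)
print(best)  # prints "shoot_down_airliner"
```

The principle of non-offsettability appears nowhere in such a calculation: an inviolable right would have to act as a constraint that no sum of advantages can override, not as one more number to be weighed.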
Paradoxically enough, it is the modern instruments of decision-making and game theory – including the logic of collective decisions – that lead us to this finding. Digital humanism takes this challenge seriously; it does not lag behind the level of reflection achieved in ethics and decision theory. Rather, it aims to strengthen the human capacity for responsibility – not to weaken it, let alone replace it with cold, heartless optimisation calculations.
This article was published (in German) in the Frankfurter Allgemeine Zeitung supplement “Auf die Zukunft – Das Magazin zum Innovationstag 2017” (To the Future – The Magazine on Innovation Day 2017) from 05.10.2017. © All rights reserved – Frankfurter Allgemeine Zeitung GmbH.