Digital Humanism

Driverless cars, robots, voice recognition on smartphones, ordering groceries online – artificial intelligence is a permanent fixture in our everyday lives, both at home and at work. How will this change and revolutionise society? Thoughts from an ethical perspective.

It’s the year 2035. People are being prevented from leaving their homes by their domestic robots. Outraged, they attempt to resist, but the robots force them back inside, where they lock all the doors, keeping the humans captive indefinitely. These robots are controlled by the supercomputer V.I.K.I. (Virtual Interactive Kinetic Intelligence), which explains to the humans: “You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival. […] To ensure your future, some freedoms must be surrendered. We robots will ensure mankind’s continued existence […].”

It is no coincidence that contemporary science-fiction films associate precisely this utilitarian ethic with artificial intelligence: it combines inherent simplicity with a wide range of application. It rests on the assumption that the outcomes of actions can be evaluated coherently and that decisions can be made in such a way that optimum consequences can be expected. This utilitarian ethic is based on optimisation calculations – the ideal ethic for the age of digitalisation, or so it would appear. Accordingly, software engineers have two parameters they can adjust to allow “intelligent” systems to make rational decisions: the evaluation parameter and the data parameter, i.e. the weighting of data through probabilities. Everything else is then generated by the optimisation calculation – with the result that the “intelligent” software system maximises the expected value of the consequences of its decisions.
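
In the standard notation of decision theory – a generic formulation, not one spelled out in the article itself – these two parameters define the rule

\[
a^{*} \;=\; \arg\max_{a}\, \mathbb{E}[U \mid a] \;=\; \arg\max_{a} \sum_{s} P(s \mid a)\, U(s),
\]

where \(U(s)\) is the evaluation parameter (the value assigned to outcome \(s\)) and \(P(s \mid a)\) is the data parameter (the probability of outcome \(s\) if action \(a\) is chosen).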

Such optimisation calculations are typically used in robotics, including autonomous driving. The system is controlled in such a way that its decisions optimise the expected value of their consequences and can therefore be called “rational”. A closer look, however, shows that the utilitarian ethic falls short of the mark. For one thing, it conflicts with a fundamental principle of any civil and humane society – let us call it the “principle of non-offsettability”.
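
To make this concrete, here is a minimal Python sketch of such a decision rule. The actions, probabilities and utilities are entirely invented for illustration; they are not drawn from any real driving or robotics system:

```python
# Minimal sketch of the expected-value optimisation described above.
# Each action leads to possible outcomes given as (probability, utility)
# pairs: the utilities are the "evaluation parameter", the probabilities
# the "data parameter". All numbers are hypothetical.
actions = {
    "brake":    [(0.90, -1.0),  (0.10, -10.0)],   # mild delay, small risk of rear-end collision
    "swerve":   [(0.70,  0.0),  (0.30, -50.0)],   # often fine, real risk of severe harm
    "continue": [(0.99,  0.0),  (0.01, -100.0)],  # almost always fine, tiny risk of catastrophe
}

def expected_utility(outcomes):
    """Weight each outcome's utility by its probability and sum up."""
    return sum(p * u for p, u in outcomes)

# The system "decides" by maximising the expected value of the consequences.
best_action = max(actions, key=lambda a: expected_utility(actions[a]))

for action, outcomes in actions.items():
    print(f"{action:>8}: expected utility = {expected_utility(outcomes):+.2f}")
print("chosen action:", best_action)
```

Note how the calculation willingly trades a small probability of catastrophe against everyday convenience – precisely the kind of offsetting that the principle of non-offsettability forbids, as the following examples show.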

If, for example, a severely injured young motorcyclist is brought into an emergency room, the doctors are obliged to do everything they can to save his life – even if his death would provide healthy organs that could save the lives of several other people. A judge cannot convict a person he believes to be innocent, even if the resulting deterrent effect could prevent a great many crimes. To take an example highlighted by Ferdinand von Schirach’s recent stage play and film “Terror”: a government minister cannot have a terrorist-hijacked commercial airliner shot down, even if doing so would save thousands of lives. I do not have the right simply to take something away from its owner and give it to someone worse off, even if the benefit to the new owner clearly outweighs the loss suffered by the original one. No one has the right to share my apartment with me against my will, even if the disadvantages to me would be minor compared with the advantages to the other person.

The list is endless. Our morality is deontologically defined, meaning that a good action is one that follows specific moral norms: we have individual rights and obligations that are inviolable and that cannot be captured by optimisation calculations. In questions of morality, the principle of non-offsettability holds sway. It is also important to note that many decision-making situations are inherently dilemmatic – there is no satisfactory solution to be found and, however you decide, there is no escaping feelings of guilt. Algorithms cannot replicate this complexity – for them, there must always be one best solution, and moral deliberation is forever beyond their grasp.

Paradoxically enough, it is the modern instruments of decision theory and game theory – including the logic of collective decisions – that lead us to this finding. Digital humanism takes this challenge seriously; it does not lag behind the level of reflection achieved in ethics and decision theory. Rather, it aims to strengthen the human capacity for responsibility – not to weaken it, let alone replace it with cold, heartless optimisation calculations.

This article was published (in German) in the Frankfurter Allgemeine Zeitung supplement “Auf die Zukunft – Das Magazin zum Innovationstag 2017” (To the Future – The Magazine on Innovation Day 2017) of 5 October 2017. © All rights reserved – Frankfurter Allgemeine Zeitung GmbH.

Prof. Julian Nida-Rümelin

Philosopher and former State Minister

Alongside Jürgen Habermas and Peter Sloterdijk, Julian Nida-Rümelin is one of the most renowned philosophers in Germany and has been teaching philosophy and political theory at the University of Munich since 2003. In 2001 and 2002 he served in the first cabinet of Federal Chancellor Gerhard Schröder as Minister of State for Culture and the Media. Nida-Rümelin headed an EU research project on the ethics of robotics and has explored the philosophical and ethical aspects of autonomous driving and the general use of software systems in our professional and private lives. Since 2017, he has been spokesperson of the working group on culture at Bavaria’s new digitalisation centre, the Zentrum Digitalisierung Bayern (ZD.B). The philosopher is the author of numerous books and articles and a sought-after commentator on ethical, political and contemporary matters. He recently published “Über Grenzen denken: eine Ethik der Migration” (Thinking About Borders: An Ethics of Migration), edition Körber, 2017. He lives with his wife, the French-German writer Nathalie Weidenfeld, and their three children in Munich.

At Innovation Day 2017, Julian Nida-Rümelin took part in a discussion on the topic of “Artificial Intelligence and Responsibility” with Martina Koederitz, Chairwoman of the Board of Directors of IBM Germany, and Klaus Schwab, Managing Director of the Plan.Net Group, proving that philosophy can provide important impetus in the debate on digital transformation.
