Interview

Artificial intelligence: Who is responsible?

When cars drive autonomously, who is liable if an accident happens? When supercomputers help doctors make diagnoses and provide therapy, who is responsible for an incorrect diagnosis? What happens when candidates sue companies because their application was rejected by an algorithm? So-called artificial intelligence is a comparatively new technology that is quickly spreading into many areas of life due to rapid advances in computing power.

Image (from left): Julian Nida-Rümelin, Martina Koederitz, Klaus Schwab

At the Innovation Day of Serviceplan, Prof. Julian Nida-Rümelin, the acclaimed Munich-based philosopher and former State Minister, and Martina Koederitz, Chairwoman of the Management Board of IBM Deutschland, discussed new ethical standards with Klaus Schwab, Managing Director of the Plan.Net Group. How can artificial intelligence be handled responsibly, in a way that contributes to a more humane and fairer future in a world shaped by digitisation?

  • Ms Koederitz, Prof. Nida-Rümelin, artificial intelligence has barely started to reach its potential. However, the sceptics already appear to dominate. Should we actively work to build up people’s trust in artificial intelligence? Perhaps also to prevent AI from falling short of its potential?

    Martina Koederitz: As with every new technology, what matters is that the people who deal with it use it conscientiously and for specific purposes. It must be clear why we are doing certain things with AI, and for whom we are doing them. In the end, we are not simply observers of this revolution; each one of us is also a participant. In the past, we created trust through our products and technologies, through reliability, or through strong corporate values. This now has to be carried over into an interconnected, data-driven economy. In Germany, we have to make clear that we handle digitisation reliably.

  • Do people in Germany understand artificial intelligence yet? Current studies show that people are quite sceptical and mistrusting when it comes to this issue.

    Martina Koederitz: People are always scared of new things. We have to invest far more in training people. The Internet has existed for 30 years, but there is still no uniform curriculum for it in Germany. Education and training should be a top priority in our country. Everyone also has to know what contribution they can and want to make to a digital society – and how to deal with their own data responsibly.

    Julian Nida-Rümelin: We made a mistake in recent years. With the German DIN standards, we established norms very early on during industrialisation. With digitisation, we have neglected to establish the framework within which this technological revolution can develop. As a result, we are now in the peculiar situation where a few Internet giants provide the entire digital infrastructure. This is why people are scared.

    Generally speaking, these companies are benevolent. Nothing truly bad has happened yet, and we may hope that it stays this way. However, the big data economy leads to a concentration of knowledge that is potentially open to abuse, which is very worrying. This is one of the reasons why people have lost so much confidence. People have a vague fear – not everyone, but it even extends to young people. We will have to try to remedy this to ensure that this feeling does not become a barrier to innovation and investment.

  • The further the technological potential develops, the more our fears grow?

    Martina Koederitz: The worry is that we are currently creating monopolies that could be used against us. Let’s be honest, though – if a service is good for us as consumers, then we use it. We take part in this decision-making process day by day through our behaviour. The more convenient the service, the lower the priority we give to data protection; it only comes into consideration when a problem occurs. We have to promote people’s sense of responsibility.

  • How do we make this tangible? At this point, should industry not be required to invest more in technology education, Ms Koederitz?

    Martina Koederitz: At IBM, we are investing a huge amount in training our employees, as well as in research on artificial or cognitive intelligence. However, we have to do more in nurseries, primary schools, secondary schools and all other educational establishments. In the future, every job will involve data, digitalisation or networking. In my view, we have not sufficiently kept abreast of these developments.

  • Artificial intelligence does not always behave the way we want it to. Internet trolls taught a Microsoft chatbot sexist and racist language, meaning that it had to be switched off. The American algorithm Beauty AI chose only light-skinned people as winners. Is there a risk of discrimination through algorithms? Can we even speak of moral or immoral behaviour when dealing with AI, Mr Nida-Rümelin?

    Julian Nida-Rümelin: I strongly warn against designating bearers of responsibility other than people, organisations, companies and politics – such as autonomous robots. It has been seriously suggested that robot responsibility should be established alongside human responsibility. I believe this is highly dangerous for two reasons:

    1. With artificial intelligence, we are not creating an equivalent to human beings. Even the most intelligent software does not have any personal characteristics. Otherwise, human rights could actually apply to it, and we would soon no longer be able to switch off any computers.
    2. Treating computers as persons would lead to a diffusion of responsibility, which is not desirable: we would lose the direct link in assigning responsibility.

  • Ms Koederitz, to what extent do companies that provide AI have an ethical responsibility?

    Martina Koederitz: We are one of the first providers of cognitive computing, and we have stated very clearly that there are rules on purpose limitation – i.e. which data we use for which purpose, and which data is relevant to reach which result. Whenever we collect data together with customers and partners, we are transparent about who the data and any outcome of the data analysis belong to, and about which algorithms we use to analyse the data.

  • In principle, there are three players in artificial intelligence – service providers like IBM, application developers like the agency Serviceplan, and users, such as a brand. When it comes to the crunch, who bears the responsibility when something goes wrong?

    Julian Nida-Rümelin: You would now have to call upon lawyers, because the matter is very complex. We essentially have at least two different legal philosophies – on this side of the Atlantic and on the other. Our principle of weighing the risks in detail beforehand plays quite a small role in the USA; however, if damage actually occurs there, the costs can be immense. In our cultural sphere we can never rule out risks either, but we should always behave in a risk-averse manner. Despite all the eagerness for innovation, the probability of a disaster should carry more weight than the probability of a great success.

    Martina Koederitz: One thing is really important to me – that we use artificial intelligence to support people in their decision-making. This is why we also speak of decision-support and assistance systems. The final decision should remain with people. This could be a doctor who selects a treatment method from three suggestions, or the interaction between supplier and customer in the financial sector, where the customer ultimately chooses a product.

    Because this technology is still in its infancy, IBM joined the Partnership on AI in the USA early on – together with Google, Sony, Zalando and eBay, for example. There we ask ourselves what ethical rules and guidelines we want to set for ourselves – what do we have to regulate together, as an economy and as companies? Who does the data belong to, what do ethics mean in this context and how far do they reach? Since we do not yet have an answer to everything, we have to work out the answers together.

  • Do you believe that people will really retain control in this evolution? Or, to put it another way – how much longer will people be the ones making the decisions?

    Julian Nida-Rümelin: It is difficult to make predictions here. For me, the question is rather: what should be the case? I am against falling back on the attitude of just letting things run. This is why I have decided to engage with the debate around autonomous driving. In our understanding of a fair and humane society we have determined, and often even laid down in our constitutions, that there is a ban on certain trade-offs. You cannot say, for instance, that we will sacrifice two elderly gentlemen so that we can save a child. As an individual you may decide this way, but you cannot write it into an algorithm. We cannot sort or evaluate people according to age, gender or any other characteristic.

    This is a challenge for autonomous driving, particularly when it comes to deciding how to act in a dilemma. An individual driver can decide either way; a public rule, however, has to be treated completely differently.

  • And what would your solution be?

    Julian Nida-Rümelin: In my opinion, we should use all the technology that is available. The keyword here is ‘highly automated driving’. For the time being, however, the ultimate responsibility – the possibility for people to intervene – should be maintained.

  • That sounds good in theory. In practice, however, there is a lot of discussion around ‘autonomous driving’ and experts expect that we will no longer have to be at the steering wheel in five years at the latest. Would we nevertheless have to keep our hands on the wheel?

    Julian Nida-Rümelin: When the traffic situation is very clear, e.g. in stop-and-go traffic on the motorway, the signal would be: “You can read the newspaper now!” In moments where the situation is less clear and conflict situations may arise, the system requires the driver to take over again. This is the recommendation of the responsible commission, which I also support.

  • Do we generally need ethical standards for artificial intelligence? If so, at what level?

    Julian Nida-Rümelin: We have these standards. There is the German Constitution (Basic Law) and there are certain cultural attitudes. The great challenge is to implement them in such a way that they are effective. One thing that should seriously concern us is the risk of losing privacy through the use of perfectly ordinary communication and interaction tools. Here, too, clarification and regulation are urgently required at the international level. And this does not necessarily have to be at the expense of companies; it is also in their interest that there is clarity. Cases with gigantic penalties are currently being brought – against Google, for example. Clear guidelines are therefore also in companies’ economic interest. This is why we need global institutions that take responsibility, not just for climate change, but also for digitisation.

  • Can you give our audience some tips to take away on how each of us can ensure that our companies deal with artificial intelligence responsibly?

    Martina Koederitz: Firstly, each individual in the company should ask themselves how they deal with data. Secondly, we should engage actively with artificial intelligence – not wait until the technology is finished, but gain our own experience with it. Thirdly, companies should position themselves very clearly on the purpose for which they use AI: what should the result be? Who is the beneficiary, or who gains what value from it? I believe you can make a big difference by using AI this way.

    Julian Nida-Rümelin: I am rather ambivalent about the demand for transparency, for example. Not everything has to be transparent; this transparency ideology should have its limits when it comes to individual and collective self-determination. On the other hand, as citizens we should be able to demand more transparency about how companies handle our data.

    Of course, this is a big challenge for companies, because they guard their data troves closely and do not like discussing algorithmic control in public. However, it is precisely this behaviour that causes uncertainty: people do not know exactly what will happen to their own data. I think that large companies such as Facebook now play a cultural role. They effectively teach an entire generation that is growing up with them what correct behaviour is. However, they do not always live up to this responsibility.
