What future for humanity in a society governed by AI?

September 29, 2022

Five hot topics surrounding artificial intelligence in questions and answers.

Irene Kuling and Lambèr Royakkers (Photos: Angeline Swinkels)

Artificial intelligence is an important area of research at TU/e. Hundreds of researchers are searching daily for new meaningful applications of AI in, for example, healthcare and robotics. But what do our scientists think about some of the hot topics surrounding AI, such as privacy, autonomy and social media? We asked Irene Kuling, assistant professor specializing in haptic systems and telerobotics, and Lambèr Royakkers, professor of ethics of the digital society. It resulted in a fascinating conversation about the opportunities and risks of AI, and the role of humans in it.


DO THE STRICT PRIVACY REGULATIONS OF THE EUROPEAN UNION REALLY SERVE THE INTERESTS OF EUROPEAN CITIZENS?

  • Yes

    By putting the interests of individual users first, the rules protect citizens from misuse of their personal data by companies and governments.

  • No

    By imposing strict privacy rules, Europe discourages innovation and risks losing the competitive race with the U.S. and China.

Irene Kuling: "I believe it’s a good thing that the EU has adopted these strict rules. Privacy is something you can't leave to the individual citizen, because you can't expect them to understand everything and to grasp all the consequences. Now these matters are regulated by the government, instead of everyone always having to decide for themselves whether or not they want to share their data.

I am also very much in favor of privacy by design. Developers and engineers should think at a very early stage what the privacy implications are of the robots or apps they create.

That's something I also really encourage in my PhD students: that they ask themselves why they actually want to make something. Because it is possible, or because it adds something?

But it remains a difficult balancing act. Think of a robot that enables you to remotely pour a cup of tea for your grandmother in the nursing home. That may seem handy, but it raises all kinds of privacy issues, for example around touch. Many people experience touch as something very intimate, so that’s something you really have to think through."

Lambèr Royakkers: "The tension between privacy and innovation is not as black and white as it may seem at first sight. True, companies like Google and Facebook became huge by profiting from customer data. But under pressure from European privacy laws, they have since made a 180-degree turn. For Apple, privacy has even become a selling point. Nor should we underestimate the impact of the billions in fines that Brussels has imposed on, for example, Google. Those are really no laughing matter!

I believe European privacy rules actually force companies to innovate. Take Self Sovereign Identity (SSI), which lets you manage your own personal data and makes transactions much faster. This is in the interest of both citizens and businesses. That technology was developed under pressure from European privacy protection, and it is even stricter than those rules require!"
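
To make the idea behind SSI more tangible, here is a minimal, purely illustrative sketch (not part of the interview) of the pattern Royakkers describes: an issuer signs a small claim, the citizen keeps the resulting credential in their own wallet, and a verifier checks the signature instead of requesting the underlying personal data. All names are hypothetical, and a real SSI stack would use decentralized identifiers and public-key signatures rather than the shared demo key used below.

    # Illustrative sketch of the SSI pattern only; real systems use
    # decentralized identifiers (DIDs) and public-key signatures.
    import hashlib
    import hmac
    import json

    ISSUER_KEY = b"issuer-demo-key"  # hypothetical stand-in for the issuer's signing key

    def issue_credential(claim: dict) -> dict:
        """The issuer signs a minimal claim and hands it to the citizen."""
        payload = json.dumps(claim, sort_keys=True).encode()
        signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return {"claim": claim, "signature": signature}

    def verify_credential(credential: dict) -> bool:
        """The verifier checks the issuer's signature; it never sees a birth
        date, an address, or any personal data beyond the claim itself."""
        payload = json.dumps(credential["claim"], sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, credential["signature"])

    # The citizen stores the credential themselves and presents it per
    # transaction; no central database lookup, hence the speed gain.
    wallet = issue_credential({"subject": "citizen-123", "over_18": True})
    print(verify_credential(wallet))  # prints: True

The privacy gain is that the verifier only ever sees the claim and the signature; the speed gain Royakkers mentions comes from cutting out the central lookup.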

Irene Kuling

Irene Kuling is assistant professor of Haptics & Soft Robotics at Eindhoven University of Technology. She studied physics and human-technology interaction, and obtained her PhD with a dissertation on the question “Where is my hand?”. As a researcher, she combines the perceptual and biomechanical properties of humans with the design of technology for remote task performance and perception. Such haptic systems find applications in fields ranging from healthcare and rehabilitation to fruit picking.

WILL SELF-DRIVING CARS MAKE OUR ROADS SAFER?

  • Yes

    The smart algorithms in autonomous cars will eliminate the human factor as much as possible, preventing thousands of accidents and saving lives.

  • No

    Traffic in our crowded cities is so unpredictable that humans should always have the final say, if only because of liability in accidents.

Lambèr Royakkers: "I am really a big fan of the self-driving car, and also of the driver assistance systems that we increasingly see in our cars. As far as I'm concerned, these should become mandatory. Automatic speed limiting alone already reduces traffic fatalities by 21 percent. More than 90 percent of all accidents happen because of human error. Wouldn’t it be great if we could take that factor out?

However, I believe things will not move as rapidly as many people seem to think. We may see self-driving vehicles on our highways within the next five years, but in built-up areas it may take another ten to twenty years, mainly because our infrastructure is not ready for it. I don't think autonomous cars will take to the road until it is genuinely safe.

As for liability, I don’t see how that’s an issue. In the future, we will probably see collective liability insurance for everyone with a self-driving car. And since there will be far fewer accidents caused by self-driving cars, the cost will be much lower than your current car insurance."

Irene Kuling: "As far as autonomy is concerned, I don’t think self-driving cars are the best example. We're dealing here with a completely different approach to transportation: getting from A to B automatically, without having to do anything. Wouldn’t that make everyone happy? Moreover, today's autonomy is often a sham. Who is able to understand their own car nowadays?

I do see a psychological problem, though, especially in built-up areas. People tend to find it very difficult to forgive mistakes they would never have made themselves. In our eyes, the mistakes of autonomous systems are often so stupid that we no longer trust the entire system.

In the end, we have to make a choice. Either we keep humans in the loop, but then humans really have to stay in the loop. Or we say: these smart systems have become so opaque that they can no longer be fully understood. Then we have to take the human factor out completely. That's much safer."

IS IT POSSIBLE TO DESIGN ALGORITHMS THAT DON’T DISCRIMINATE?

  • Yes

    By thinking carefully about what data you use, stripping that data of bias as much as possible, and designing algorithms that are fully transparent and explainable.

  • No

    Algorithms and the decisions based on them necessarily reflect existing power relationships. They should therefore be used only if they do not negatively affect the rights and interests of individual citizens.

Irene Kuling: "People have trouble reflecting on their own biases, if they are aware of them at all. Until we properly understand those biases, I think it's impossible to develop unbiased algorithms. In itself, that need not be a problem, as long as those algorithms do what you want them to do. It also very much depends on what you are using the system for. In job applications, for example, it can work very well for an initial screening of candidates, precisely to prevent human bias in that phase.

However, I would be very careful with it in health care, especially when it comes to questions of life and death. Imagine the computer saying: well, your chances of survival are very small, so we're not going to treat you. I think it will be a very long time before we accept that."

Lambèr Royakkers: "Agreed. Healthcare is really a domain where you have to put the responsibility with the human. You can't leave that to a computer. The same goes for killer robots, for example, which I've written a lot about. There, too, you want the operator to have the last word.

But overall, I'm hopeful. In fact, I think AI systems will soon be less biased than humans. A lot will depend on how much we invest in developing explainable AI, so that, as a user, you know why a system makes certain choices. People need to understand why they have been singled out by the IRS, or why they have been denied a loan.

With certain simple AI models this is already possible, but with deep learning and neural networks it is a lot harder. Of course, these systems will never be truly error-free. The question is how many errors one is willing to accept."

Lambèr Royakkers

Lambèr Royakkers is professor of Ethics of the Digital Society at Eindhoven University of Technology. He studied mathematics, philosophy, technical social sciences and law, and obtained his PhD with a dissertation on the logic of norms. As a researcher he is involved in several national and international projects on digitization, and he acts as an ethical advisor to several European projects. He is (co)author of more than ten books, including Ethics, Technology, and Engineering (Blackwell, 2011) and Just Ordinary Robots (CRC Press, 2016).

DO AUTOMATION AND ROBOTIZATION CONTRIBUTE TO MORE PROSPERITY?

  • Yes

    By automating tedious work, more people can do work that is truly meaningful and contributes to a happier society.

  • No

    The increasing use of robots will put more and more people, including middle-class people, out of work. The government must regulate this transition, including by ensuring that these people are offered a meaningful alternative.

Lambèr Royakkers: "That's really a hard question. On the one hand, there are scientists who say: automation and robotization may well lead to more unemployment in the beginning, but they also create new jobs. On the other hand, there are people who say: those new jobs will also be done by robots. What’s more, it’s no longer just about low-skilled labor, but also about cognitive routine work. Think of your accountant or your notary. That will also affect the middle class. So, yes, I do believe the government has a responsibility here, for example, by retraining people.

At the same time, I’ve always been a supporter of universal basic income. If automation and robotization indeed take over much of the work, then people will finally have the freedom to really do what they feel like doing. Maybe then you will have a much happier society."

Irene Kuling: "I also expect that robots will eventually take over certain repetitive jobs. Or jobs that we just can't find people for now, because nobody wants to do them. And I also really believe that could be useful. Think of dirty work, or seasonal labor.

One problem I do see is scaling. How do we prevent those robots from being in the hands of a few large companies with a monopoly position in the food industry, for example? That can be really disruptive, not only for smaller companies, but also for society as a whole.

Whether robotization really becomes a problem depends on a broader vision of what society will look like in the long run, and of what we would like it to look like. Sketching that vision is not only a task for scientists, but for everyone."

ARE SOCIAL MEDIA LIKE FACEBOOK AND TWITTER RESPONSIBLE FOR THE INCREASING POLARIZATION OF OUR SOCIETY?

  • Yes

    Through their algorithms and revenue model, these companies contribute to the spread of fake news and extreme views.

  • No

    On the contrary. Social media are an important platform for the exchange of new political ideas, contributing to greater pluralism and inclusion.

Irene Kuling: "Yes, I definitely think social media play a role in fueling doubts and unrest in society. Of course, it's natural to question things. As scientists, we do that all the time. But if people are not presented with a broad palette of answers, because they just don't know what to look for, they’ll lose all nuance. Then the algorithm sucks them into a kind of tunnel of their own truth.

So, I do believe that social media carry a responsibility. Whether that also means they can act as their own judge, deciding for themselves who or what to block, is another question.

It would be great if social media did their best to promote dialogue, so that we don't stop at shouting at each other in 140 characters. I see a role for teachers there as well: we should teach students to communicate and to argue why something is or isn't so. It's not always black and white..."

Lambèr Royakkers: "I also think social media have contributed to the increasing polarization in our society, although there are also studies suggesting it's not as bad as some fear. Either way, I think it's crucial that we think about the role of social media in the future.

As Irene said, democracy requires that people come into contact with different points of view. But right now it's mostly short communication in one or two sentences. We see how politicians like Donald Trump and Thierry Baudet benefit from that, using these platforms to become political stars. Yet politics is pre-eminently about complex considerations and difficult compromises, which cannot be captured in a limited number of characters.

The question is whether we can leave it to social media to counter this polarization. Ultimately, it goes against their revenue model. I think we really need to regulate that from the outside. So there is a job here for governments."

Ethics and EAISI

Thinking about ethical dilemmas surrounding artificial intelligence and robotics is an important part of EAISI, TU/e's AI institute. Ethics plays a major role both in the training of our engineers and in our research. In this way, we want to ensure that all AI innovations actually contribute to solving societal problems, and that the people who use these technologies can be confident that their interests are paramount. Do you want to join TU/e as an AI researcher? These are our current vacancies.

Media contact

Henk van Appeven
(Communications Adviser)
