Wim Nuijten argues for vigilance in the development of AI:

'No one knows for sure where we are now and where we are heading at great speed'

May 12, 2023
Wim Nuijten, scientific director of the Eindhoven Artificial Intelligence Systems Institute. Photo: Bart van Overbeeke

How do you make sure that Artificial Intelligence technology continues to serve humanity while protecting the rights of individuals? This is the question MEPs are working on as part of the development of the AI Act, to which TU/e also provides input. But this European act is not yet in place, and in the meantime, systems like GPT-4 (the engine behind ChatGPT) continue to develop at a frantic pace. This is happening far too fast, reads an open letter signed by many prominent technologists, including several TU/e researchers. An interview with Wim Nuijten, scientific director of the Eindhoven Artificial Intelligence Systems Institute.

The open letter, which was published online a month ago, calls for a pause on large AI experiments because, it argues, developments are moving too fast and their consequences are incalculable. More than 25,000 people signed the letter, including many leaders in the world of Artificial Intelligence and several TU/e researchers. Wim Nuijten, scientific director of the Eindhoven Artificial Intelligence Systems Institute (EAISI), also signed the letter. He thinks we should take the risks of the development of an Artificial General Intelligence (AGI) seriously, because such AI systems could potentially endanger the existence of humanity.

AI Act

One thing that might help avert such catastrophic consequences is the AI Act: MEPs are currently working diligently on this European act to protect the rights of individuals with respect to AI. TU/e has taken the initiative to represent academic research in this process and has proposed amendments on that basis. “The act sets a number of requirements for AI systems. For example, developers must manage the risks, keep extensive technical documentation and ensure that human supervision is possible.” This last requirement is currently impossible to meet for systems such as GPT-4, the language model behind ChatGPT and the system the open letter also refers to. The problem is that nobody knows exactly what happens inside that system, not even the developers themselves, says Nuijten.

“What you see is the input and the output. And generally speaking, we do understand the output of GPT-4: it’s sequences of words. But those sequences are the results of a network of millions and millions of real numbers with weights and functions,” says Nuijten. But those functions were built into it by the developers, so surely they understand what happens inside the system, right? “They do understand the architecture of it, but they don’t understand why the system arrives at certain answers; and that’s quite disconcerting regarding the capacities of systems like GPT-4 and even more advanced versions in the future.”
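To make that point concrete: even a toy neural network is nothing but arrays of numbers, and its “answer” is just repeated multiplication and addition over those numbers. A minimal sketch in Python (the weights here are random illustrative values, not anything from a real model; GPT-4's actual network contains vastly more parameters):

```python
import numpy as np

# A toy two-layer neural network: the entire "model" is just these
# arrays of real numbers (weights). Values are random, for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # weights: input layer -> hidden layer
W2 = rng.normal(size=(8, 2))   # weights: hidden layer -> output layer

def forward(x):
    """Compute an output from an input: multiply by weights,
    apply a nonlinear function, multiply again. Nothing else happens."""
    hidden = np.maximum(0, x @ W1)   # ReLU nonlinearity
    return hidden @ W2

x = np.array([0.2, -1.0, 0.5, 0.3])  # some input
print(forward(x))                     # some output

# Every single weight can be inspected, yet no individual number explains
# *why* the network maps this input to this output. That gap between a
# known architecture and an unexplained answer is the problem Nuijten describes.
```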

TU/e and the AI Act

TU/e has proposed amendments to the AI Act that would still safeguard the goals of the European act without interfering too much with university research. Many of the proposed requirements protect the individual user but at the same time involve a huge administrative burden, says Nuijten. Therefore, TU/e, together with MEPs and industrial partners, is looking at ways to lighten that administrative burden while still protecting the consumer, so that research is not slowed down unnecessarily.

No AGI

In any case, according to Nuijten, TU/e is not doing research aimed at creating artificial general intelligence. “GPT-4 is an example of a system that falls under natural language processing, and that’s not one of our focal points.” However, there are researchers at TU/e working on neural networks. “This involves applications like being able to tell from a picture of an eye whether someone is more likely to develop diabetes, or being able to make better diagnoses from an MRI scan.” If AI is used for these kinds of purposes, he believes it actually has the potential to improve life on earth. “For example, it can cure or prevent diseases and reduce poverty all over the world. Of that, I’m absolutely certain.”

Would it help if there were more transparency about how these systems work? Absolutely not, according to Nuijten. “I will explicitly say that OpenAI, the company that created GPT-4, should not make this architecture public. It’s not a good idea. Because if the system really is close to an artificial general intelligence, its coming versions might have the potential to end life on earth. I don’t think that’s the case yet for GPT-4, but no one knows for sure where we are now and where we are heading at what speed.” He draws a comparison to nuclear weapons, which have the same potential. The reason things haven’t gone wrong in that regard, he believes, is that nuclear weapons are difficult to make, you need the right materials, and strict legislation is in place. “But what if all of this were made much easier and nuclear weapons fell into the hands of many small groups of people? Who do you think would vote for that?”

Stop button

The AI Act is supposed to help mitigate such high risks. But the act has not been passed yet, and GPT-4 already fails to meet several of its requirements, for example that human oversight must be possible and that people must fully understand how the system works. The act’s proposal also includes the requirement that a stop button be built in by default. The problem with an artificial general intelligence, however, is that it may eventually become smart enough to realize that such a button exists and, for example, create backups of itself in other locations through a back door. In that case, the stop button would be useless.

Let’s hope that in 30 years, we’ll re-read the open letter and we’ll be laughing our heads off, thinking: fools, of course that was never going to happen.

WIM NUIJTEN, SCIENTIFIC DIRECTOR EAISI

Wim Nuijten. Photo: Bart van Overbeeke

If such an AGI were to pursue its own course, Nuijten believes it might well have very different plans than humanity does. “Take war, for example. We should stop that, the system thinks. It could shut down the infrastructure of arms factories. Or just look at how we treat cows and pigs: that’s unacceptable. Such a system would see this and take measures. Step by step, it may come to the conclusion that it’s not very sensible to have people living on this earth.”

AI alignment

Nuijten believes that an important way to ensure that AI systems do not take such disastrous measures (for humanity) is to focus on AI alignment: making sure that systems are developed in such a way that they conform to human values. “However, little work has gone into that so far, and it’s not expected to be solved any time soon. Developments in technology, on the other hand, are advancing at an extremely rapid pace.” So whether we will achieve that alignment in time remains to be seen, he says. Moreover, moral values vary around the world, though Nuijten thinks we should at least be able to assume, as a common denominator, that (almost) no one wants to end humanity.

Nuijten is aware that he is painting a rather catastrophic picture of a future (or lack thereof) with Artificial Intelligence. “I’ve thought carefully about what I’m saying. I could’ve focused on the positive potential of AI that can make life great, but decided not to. Let’s hope that in 30 years, we’ll re-read the open letter and we’ll be laughing our heads off, thinking: fools, of course that was never going to happen. But we can’t completely rule out the possibility that the letter is right or even underestimates the situation. Whereas I used to think we could achieve artificial general intelligence in forty or fifty years, I now think it may well be under twenty.”

Good intentions

So what’s important now, he says, is to take the risks seriously, focus on alignment and work on other things, like the AI Act, that can prevent catastrophic consequences. Nuijten is happy with how the talks in Brussels are proceeding and praises the openness and expertise of the MEPs and their staff. “They have the very best intentions. They work out of the public eye and are not ruled by talk shows or Twitter. That allows them to really think it through. There are a lot of differences between countries in Europe, but when you look at the AI Act, you see that we have a lot more in common in terms of norms and values than most people would probably think.”

This article was written by Cursor.

Frans Raaijmakers
(Science Information Officer)
