The future of AI: “There is always room for unfairness”

July 10, 2023

AI engineer Hilde Weerts on fairness of algorithms

Artificial intelligence engineer Hilde Weerts. Photo: Bart van Overbeeke

Ever since ChatGPT hit the scene, all eyes have been fixed on the meteoric development of Artificial Intelligence. Experts around the world are expressing concerns and speculating about what these large language models may lead to. In this series, Cursor and EAISI scientific director Wim Nuijten talk to TU/e researchers about their perspectives on the future of AI. Today we present part three: Hilde Weerts, artificial intelligence engineer at the department of Mathematics and Computer Science. Her research centers on Explainable AI and fairness.

Is it enough to know that an AI system produces reliable results, or should you also be able to explain why that is the case and what the results are based on? There is some doubt as to whether the latter is possible for models such as ChatGPT. What we do know is what those large language models are trained for, says Weerts, whose research focuses primarily on traditional machine learning, but who follows the developments in the field of deep learning closely. “We know what the algorithm is optimized for during training, but afterwards, it is not always clear what it has actually learned.”

Take ChatGPT: that model wasn’t trained to recognize facts, but to predict what the most likely next word will be, she explains. “So we don’t know what it has or hasn’t learned. For example, there is discussion about the extent to which generative models reproduce training data and the extent to which they come up with new things.” Moreover, the dynamic nature of these types of models makes them difficult to test, Weerts continues. “ChatGPT does a sort of probability calculation for the next word, which means you cannot be certain which word it will ‘choose’. A single prompt may result in the same answer seven times and a different answer three times. That’s harder to test than the results of, say, a predictive model that outputs a score.”
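
To make that testing problem concrete, here is a minimal, entirely hypothetical Python sketch. The words, probabilities and function names are invented for illustration and have nothing to do with how ChatGPT actually works; the sketch only shows why a model that samples its output is harder to pin down in a test than one that returns a fixed score.

```python
import random
from collections import Counter

# Toy illustration of the testing problem Weerts describes. Everything
# below is made up: a "model" that samples its answer versus a "model"
# that returns a fixed score for the same input.

def sampling_model(prompt: str) -> str:
    """'Chooses' the next word at random according to toy probabilities."""
    words, weights = ["reliable", "unreliable"], [0.7, 0.3]
    return random.choices(words, weights=weights, k=1)[0]

def scoring_model(prompt: str) -> float:
    """A deterministic predictive model: same input, same score, every time."""
    return 0.7

prompt = "Is this system"
print(Counter(sampling_model(prompt) for _ in range(10)))
# e.g. Counter({'reliable': 7, 'unreliable': 3}) -- the split varies per run

print({scoring_model(prompt) for _ in range(10)})
# {0.7} -- one repeatable value, easy to write a test against
```

The first model can pass a test on one run and fail it on the next; the second cannot, which is exactly the asymmetry Weerts points to.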

Evaluation

When it is not possible to ascertain exactly what a model bases its results on, as is the case with black box models such as ChatGPT, Weerts says you can still verify whether the results are reliable afterwards. And in some cases she says that might be enough. “Take medicine, for example. At some point, people found out that if you ate lemons, you wouldn't get scurvy, but they didn’t know the reason why yet. Later they discovered it was because of vitamin C. The same applies to some drugs we use today. We may not always know why they work, but by evaluating and testing the heck out of them in the context in which they’re going to be used, we can still prove that they’re reliable.” Evaluation is important, especially when you don’t know exactly what is happening, Weerts thinks.

However, according to her, that step is already skipped quite often. “Even though that’s the right place to start. The first thing to do is check whether the system achieves its intended purpose and whether it’s inherently ethical.” The latter falls more within the realm of AI ethics, which asks whether AI systems produce ethically sound outcomes. Weerts’ research focuses primarily on the fairness of algorithms. “When you look at these kinds of models from an ethics perspective, the question is: what was this system built for and are we okay with that - is this the world we all want to create together?” There are distressing examples of biased systems that wrongly flagged people as fraudsters or labeled people with darker skin tones as apes in photographs. “The latter problem occurred in an image recognition algorithm from Google. They fixed that by removing the word gorilla from its classification. Instead of actually solving the problem, they put a patch on it that only prevents that specific situation.” In other words, Weerts says, we are still not very good at removing biases from AI models.

It’s the devaluation that worries me.

HILDE WEERTS, AI ENGINEER

This problem also occurs with large language models, which, according to Weerts, “scrape the entire Internet”. If you wanted to get rid of all the stereotypes, you’d have to filter the data, but that's a very complex task. She also points out another problem: the models use all those texts and images on the Internet without permission. “It destroys the whole balance of power. Just look at the work of artists. Big companies sell a product based on it, but the artists - who made the greatest contribution - get nothing. It’s the devaluation that worries me.”

She therefore does not share Nuijten’s concerns about the arrival of a form of artificial general intelligence (AGI). “I’m more worried about the things that are already going wrong. I’m not at all afraid of the extinction of mankind; I think the climate crisis will bring that about well before AI ever could. And I’m more worried about AI being used by people with bad intentions as a tool to influence others.” Weerts believes that by focusing too much on preventing the possible development of a superintelligence that could take over the world, you ignore the damage these systems are already doing right now. Nuijten is also concerned about this, as well as about the climate crisis, although he thinks there is now global momentum for the latter, which is lacking for the existential risks of AI. “In that sense, I feel the same way scientists in the 1990s who warned of a climate crisis must have felt.”

Photo: Angelique Swinkels

Capitalism

Despite the harm these systems can cause, AI is also widely seen as the first technology that could make the world fairer for many people. “In theory, that’s possible,” Weerts comments. “In practice, though, I don’t think the chances are all that great, because it’s simply not the mindset of the major players in the market. I think we still live in a fairly capitalist - or post-capitalist - world.” According to her, that is also one of the reasons why much of the research and development from the ethical AI community is not being implemented by large companies: they have other priorities.

“What we also see is that a rat race has started. The result is fewer and fewer publications from the big players, while there used to be a great open culture.” The publishing culture as a whole within the field of machine learning is not conducive to progress in terms of fairness and explainability either, according to Weerts. “There are so many studies being published at the moment. It’s almost impossible to keep up. But much of it is garbage, because PhD candidates are being pushed to publish too many papers in a short amount of time.” She says you can only develop truly fair systems by working across disciplines, not by just picking low-hanging fruit. “You can’t solve the fairness problem by looking at it solely from a technical perspective, or only from the viewpoint of Human Computer Interaction. Nor by only having a lawyer think about transparency. You really have to do it together.”

It almost sounds like a promotion for EAISI, Nuijten jokes, because that is precisely the goal of the institute. But he knows from experience how difficult it can be to bring people from different disciplines together. Nuijten: “We share starter packs on this, which were met with responses that ChatGPT would summarize as ‘leave me alone!’.” Weerts (jokingly): “ChatGPT would probably phrase that in a much friendlier way.” But they both agree: researchers from different disciplines should be open to each other’s ideas and learn from each other. “If you want to know what bias is: social science researchers have been studying that for decades,” says Weerts. “There’s no need for us as computer scientists to reinvent the wheel.”

Predictions

When it comes to fairness, Weerts knows better than anyone what is lacking in current AI models. Do the positive results of AI actually outweigh those shortcomings? It all depends on what you use the systems for, she replies. Putting the brakes on it now would be too drastic, she says; you can also look at it in a more nuanced way. “My background is in Industrial Engineering. There, you have to predict things like whether or not a train is going to break down and when that train needs maintenance. Using machine learning for that purpose is an excellent idea. The worst-case scenario is that it gets it wrong a few times, but that’s not the end of the world. But using models to predict whether you should hire someone for a job or not, that’s a completely different story. For a train, we can objectively determine whether it is broken or not, but does that also apply to the question of whether someone is or will be a good employee?”

However, even the train example is more complex than it sounds when it comes to fairness, Weerts notes: “For example, such a system could work well only for modern trains in the Randstad region and not for the older train models used in Limburg (where Weerts herself is from, Ed.),” she says with a wink. “There is always room for unfairness.”
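
For readers who want to see what such a gap looks like in numbers, below is a small, purely hypothetical Python sketch. The regions, predictions and outcomes are made-up toy data, not a real maintenance dataset; the only point is that an aggregate accuracy figure can hide exactly the kind of per-group disparity Weerts describes.

```python
# Hypothetical illustration: a maintenance model can score well overall
# while quietly failing for one subgroup. The records below are invented.

records = [
    # (region, model predicted breakdown?, train actually broke down?)
    ("Randstad", True,  True), ("Randstad", False, False),
    ("Randstad", True,  True), ("Randstad", False, False),
    ("Limburg",  False, True), ("Limburg",  True,  False),
    ("Limburg",  True,  True), ("Limburg",  False, False),
]

def accuracy(rows):
    """Fraction of rows where the prediction matches the actual outcome."""
    return sum(pred == actual for _, pred, actual in rows) / len(rows)

overall = accuracy(records)
per_region = {
    region: accuracy([r for r in records if r[0] == region])
    for region in {"Randstad", "Limburg"}
}

print(f"overall accuracy: {overall:.2f}")  # 0.75 -- looks fine in aggregate
print(per_region)                          # {'Randstad': 1.0, 'Limburg': 0.5}
```

Evaluated in aggregate the model looks acceptable, yet for one group it is no better than a coin flip: the kind of check that, as Weerts argues, has to be part of evaluating a system in the context in which it will actually be used.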

This story was written by Cursor.

 

Media contact

Anke Langelaan
(Science Information Officer)
