Finding the balance between ethics and innovation in AI

November 9, 2023

TU/e works to keep new European AI rules workable for scientists.

Susan Hommerson (photo: Levi Baruch)

As a policy officer at TU/e, Susan Hommerson works in the field of medical devices with an AI component. She is responsible for ensuring that these applications comply with laws and regulations, not only in the Netherlands but also across Europe. On behalf of the university, she gives MEPs in Brussels input on the AI Act, a new European law that regulates artificial intelligence. We spoke to her together with Wim Nuijten, scientific director of EAISI.

TU/e is at the forefront of the development of many medical devices. It is an area that involves a great deal of compliance with relevant laws and regulations. It is Hommerson’s job to coordinate this within the university. Since many medical devices developed at the university also have an AI component, she soon became familiar with the AI Act: a new European law to regulate artificial intelligence.

“My former colleague and I noticed a clear problem in that act. The idea was that the act wouldn’t apply to research, which would allow universities, for example, to escape the huge administrative burden that companies do have to bear. However, the legal text said something along the lines of ‘it is not applicable unless...’, which basically meant that the research exemption isn’t really an exemption at all.”

The problem lies in the word “unless,” Nuijten adds. “Because this refers to the situation where AI research ends up in a product. And that’s exactly the kind of research we do at TU/e.” Whether you intend to apply AI in products from the very beginning or only after your research is complete, you ultimately carry the same responsibility, Hommerson explains.

“And as a university, we do recognize that we have that responsibility, but to be subject to the same regulations as a company: that would be far too onerous for us.” For this reason, she went to the European Parliament in February 2023 to call for a total research exemption, which has now been included in the act that is still under development.

Research and development

However, that does not mean that researchers at the university can do whatever they want in the field of AI, Hommerson emphasizes. “This is partly because the line between research and development is not fixed. Ultimately, the person who deploys or manufactures an AI model must still comply with the act, so the rule is just shifted to another party.” This means that industry also needs assurances from researchers that the models can actually be applied under the AI Act at a later stage.

More stringent rules also apply when researchers start testing under real-life conditions. “At the lower Technology Readiness Levels (TRL), you can still experiment, but when you want to develop further, we have to start looking at how we can appropriately comply with the different components of the AI Act. But that’s still not as demanding as it is for large companies like Philips. An appropriate solution must be put in place.”


Unfortunately, no such solution exists yet, and it is also still unclear what the implementation of the AI Act will look like in practice. Nothing is set in stone yet. Nevertheless, in many cases, researchers must already comply with ethical standards. This applies mainly to European projects, which require researchers to do an ethics self-assessment. That self-assessment currently includes seven ethical principles.

Hommerson: “Think of principles such as human in the loop, technical robustness and accuracy, social impact and bias in your model. You implement these by, for example, checking whether your dataset is representative for your goal when considering bias. The idea is that we have a way to document things in such a manner that, at a later stage, someone can look back and see: that decision was made at that point, after careful consideration.”
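For illustration only (this is not part of the interview, and the field names, reference shares and the audit_log.jsonl file are hypothetical), a minimal Python sketch of what such a representativeness check with a reviewable decision log could look like:

```python
import json
from collections import Counter
from datetime import datetime, timezone


def representativeness_report(records, attribute, reference_shares, tolerance=0.05):
    """Compare subgroup shares in a dataset against reference population shares."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    # Deviation of each group's observed share from its expected share
    deviations = {
        group: counts.get(group, 0) / total - share
        for group, share in reference_shares.items()
    }
    representative = all(abs(d) <= tolerance for d in deviations.values())
    return {"attribute": attribute, "deviations": deviations,
            "representative": representative}


def log_decision(report, rationale, path="audit_log.jsonl"):
    """Append a timestamped record so a reviewer can later see that the
    decision was made at that point, together with its rationale."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "check": report,
        "rationale": rationale,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


# Hypothetical usage with made-up age groups and reference shares:
data = [{"age_group": "18-30"}, {"age_group": "31-50"},
        {"age_group": "31-50"}, {"age_group": "51+"}]
report = representativeness_report(
    data, "age_group",
    reference_shares={"18-30": 0.25, "31-50": 0.50, "51+": 0.25},
    tolerance=0.10,
)
log_decision(report, rationale="Dataset accepted for pilot; deviations within 10%.")
```

The append-only log keeps each check timestamped and self-contained, which is the kind of trail that lets someone reconstruct later when and why a dataset was accepted.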

Enforcement

In the future, the member states of the European Union will have to enforce the rules through agencies, making sure that companies and organizations actually comply. Those who do not comply can expect heavy fines. Companies themselves are responsible for the system, its classification, its introduction to the market and its certification. Misclassification also carries high fines.

Nuijten wonders whether that is enough to stop large companies with huge budgets from continuing to develop systems that do not fully comply with the AI Act. Hommerson is not certain either: “I can imagine large companies will simply pay those fines, make some adjustments and carry on.”

As to the effectiveness of such fines, Nuijten is reminded of the benefits affair, the Dutch childcare benefits scandal. “Now, years later, they’re imposing fines. What difference does that make to the woman who lost her children?” What he would also like to know is whether the AI Act could have prevented the benefits affair.

Hommerson: “The legal text literally states something about this very example. It deals with granting or denying social benefits based on an AI system. You have to be very careful and transparent when it comes to that. You can’t just hide behind algorithms. We must not let the systems be the leading force; humans must stay involved. That is also stated in the AI Act. I think governments will have the greatest difficulty in implementing the rules of that legislation.”

High-risk

The AI Act sorts AI systems into risk categories, ranging from minimal to unacceptable risk. Which category a system falls into mainly depends on the impact it has on people or society. Practically all the research that takes place at TU/e falls under the high-risk category, as the systems are used for medical purposes or are considered high tech. High-risk systems have to comply with the most stringent rules of the AI Act, which is almost impossible for universities to achieve. This is why Hommerson welcomes the research exemption.


It strikes Nuijten that the AI Act does not cover generative systems such as ChatGPT. “They don’t fall under this category because the legal text says ‘is intended for’, and those models are not intended to be used in high-risk settings. But that doesn’t mean that they can’t be. Therefore, I think they should change ‘is intended for’ to ‘can be used for’.”

Hommerson confirms that the legal text also makes no specific mention of the more general risk that worries Nuijten: the emergence of a form of superintelligence. According to her, the AI Act does not provide sufficient protection against such dangers. “The legal text in its current form may or may not be enough. However, it is never enough if the act only applies in Europe. This is something you have to make global agreements about.”

Global

In general, Hommerson thinks that the AI Act, even if it only applies in Europe, will still have a global effect. “We saw that with the European privacy legislation (GDPR) as well. If companies want to bring goods or services to the European market, they must comply with European legislation. So that way, even companies from the United States and China will eventually have to comply with the AI Act.”

There are also many discussions in the EU-US Trade and Technology Council, says Hommerson, about whether voluntary codes can be drawn up that other power blocs can also commit to. Whether they will actually do so is up to them. “In China, companies already need a license to expose people to generative AI; there is a fixed framework for that. The US is more reluctant.”

This is why many startups are now moving to the US, she says. The lack of capital and the uncertainty surrounding the AI Act in Europe also contribute to this trend. “Startups are literally being told: why are you not based in the US?”

Six committees

It is true that legislation in Europe can be very confusing, Hommerson knows. “You should never read European laws in isolation. They are all separate pieces that fit together, laws that are intertwined with each other. It’s almost impossible to keep track sometimes. For example, the AI Act states something about privacy, referring to the GDPR.”

The AI Act itself is also so comprehensive that, initially, the European Parliament struggled to determine which committees should review it. Hommerson: “It’s a horizontal act that applies to cars, the public sector, education ... anywhere AI applications are used, the act is relevant. In the end, there were about five or six committees that all had to share their perspectives on the matter.”

Those committees finalized their positions before the summer, she says. If the act is approved, it will not go into effect until January 2026 at the earliest.

Monumental

It sounds like a monumental undertaking to first understand the act and then implement it within the university. Even the legal text alone has to be read two, three or four times to make sense of it, Nuijten says. “No offense to the professionals in this field, but it’s just not doable,” he jokes.

Hommerson appears to have no such problems with the texts, but admits that both in Europe and at the university, people are still trying to figure out how everything should take shape. “Right now we’re doing everything ourselves, but I have faith in projects that are setting up standards. I think there will be more clearly defined guidelines soon.”

Source: Cursor

