
What is unacceptable risk under the AI Act?

In this article, you will learn how the AI Act defines “unacceptable risk” and which AI systems fall into this category. You will also discover examples of AI that is completely prohibited under the Act.

Under the AI Act, “unacceptable risk” refers to AI systems that pose threats so severe they are banned outright, such as those enabling social scoring by governments or exploiting vulnerabilities to manipulate behavior. These AI uses are prohibited because they violate fundamental rights or safety, ensuring harmful technologies don’t reach the market.

What is unacceptable risk under the AI Act?

When we talk about unacceptable risk under the AI Act, we're looking at the strictest category of artificial intelligence regulation in the European Union.

The AI Act draws a clear line between what’s allowed and what’s not, aiming to protect people from technology that could cause serious harm. Think of it as a safety net for society, making sure AI doesn’t cross into dangerous territory.

The unacceptable-risk label under the AI Act is reserved for systems that threaten fundamental rights, safety, or democracy itself. These are not just minor mistakes or glitches.

This category represents the highest risk level in the Act, ranking even above high-risk AI. While high-risk systems can cause significant harm, unacceptable-risk AI goes beyond that: the potential damage is considered so severe that no safeguards can make it acceptable.
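
To make the hierarchy concrete, here is a minimal illustrative sketch in Python of how the Act's risk tiers (unacceptable, high, limited, and minimal) might be modeled in an internal compliance checklist. The use-case labels and the is_banned helper are simplified assumptions for illustration, not the Act's legal wording.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restrictive."""
    UNACCEPTABLE = "prohibited outright"           # the banned practices
    HIGH = "allowed under strict obligations"      # e.g. conformity assessment
    LIMITED = "allowed with transparency duties"   # e.g. chatbots
    MINIMAL = "no specific obligations"            # e.g. spam filters

# Illustrative mapping of example use cases to tiers; the labels are
# plain-language summaries (assumptions), not the Act's legal text.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "AI that exploits vulnerabilities of specific groups": RiskTier.UNACCEPTABLE,
    "CV screening for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def is_banned(use_case: str) -> bool:
    """True if the example use case falls in the unacceptable-risk tier."""
    return EXAMPLE_USE_CASES.get(use_case) is RiskTier.UNACCEPTABLE

if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        flag = "BANNED" if is_banned(case) else tier.name
        print(f"{case} -> {flag}")
```

In a real compliance process the classification of any given system would, of course, require a legal assessment rather than a lookup table; the sketch only shows where the unacceptable-risk tier sits relative to the others.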

Examples of unacceptable risk under the AI Act

The AI Act spells out several examples of what counts as an unacceptable risk. One big example is social scoring by governments, where people are ranked based on their behavior or personal traits.

Another is using AI to manipulate people's decisions in ways that could cause them harm, especially if they're vulnerable. Systems that perform real-time remote biometric identification in publicly accessible spaces, such as live facial recognition, also fall under this category unless they meet very narrow exceptions.

These types of AI are seen as too risky because they can lead to discrimination, loss of privacy, or even mass surveillance. The unacceptable risk AI Act category is designed to stop these practices before they start.

In the next chapter, we will take a deeper dive into examples illustrating unacceptable risks under the AI Act.

What happens if an AI system is labeled as unacceptable risk?

If an AI system is labeled as an unacceptable risk under the AI Act, the consequence is simple: it cannot be placed on the market, put into service, or used in the EU. Companies and organizations must withdraw these systems from the market and stop deploying them.

This category was the first to take effect under the AI Act's phased timeline: the prohibitions have applied since February 2025. Obligations for the lower-risk categories follow later.

There are strict penalties for breaking these rules, with fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher. This approach sends a clear message that some uses of AI are simply off-limits, no matter how advanced the technology becomes.
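
As a rough illustration of how that "whichever is higher" cap works, the short sketch below computes the maximum possible fine for a hypothetical company; the turnover figure is invented purely for the example.

```python
def max_fine_prohibited_practice(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited AI practices:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in worldwide annual turnover:
# 7% of turnover (EUR 140 million) exceeds the EUR 35 million floor.
print(f"{max_fine_prohibited_practice(2_000_000_000):,.0f}")  # 140,000,000
```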

What examples illustrate unacceptable risk according to the AI Act?

The AI Act draws a clear line in the sand when it comes to what it considers unacceptable risk. These are uses of artificial intelligence that threaten people's safety, rights, or dignity in ways that lawmakers believe should never be allowed.

To make things crystal clear, the Act spells out specific examples that fall into this forbidden territory. Let's take a closer look at the most notable examples, each one showing exactly how the law draws its boundaries.

1. Biometric identification in public spaces

Imagine walking through a busy city square. Cameras are everywhere, quietly scanning faces and matching them to a database. You haven’t done anything wrong, but the system is always watching, always comparing.

This is biometric identification in public spaces, and under the AI Act, it’s a prime example of unacceptable risk. The law says that using AI to identify people in real time, without their consent, crosses a line.

It’s not just about privacy. It’s about the feeling of being watched, the loss of anonymity, and the potential for abuse. The Act makes it clear: this kind of surveillance is not allowed, except in very rare cases like searching for a missing child or preventing a terrorist attack.

2. Social scoring by governments

Now picture a world where every action you take is tracked and rated. Did you pay your bills on time? Did you cross the street at the right place? Did you say something critical about the government online?

In some places, this isn’t just a thought experiment. Social scoring systems use AI to assign scores to people based on their behavior, and those scores can affect everything from your ability to travel to your access to services.

The AI Act calls this out as an unacceptable risk. Why? Because it undermines fairness, equality, and freedom. It creates a society where people are judged and controlled by algorithms, not by laws or human judgment. The Act draws a hard line here, banning these systems outright.

3. Manipulation of vulnerable groups

Finally, think about how powerful AI can be when it comes to influencing people’s decisions. Now imagine that power aimed at children, the elderly, or anyone who might not be able to defend themselves.

The AI Act says that using AI to manipulate vulnerable groups is simply not acceptable. This could mean targeting kids with addictive games, or using persuasive technology to push older adults into buying things they don’t need.

The law recognizes that some people need extra protection, and it steps in to provide it. Manipulation of vulnerable groups is not just frowned upon, it’s forbidden, full stop.

Why defining unacceptable risks is necessary

When lawmakers in the European Union designed the AI Act, they didn’t set out to stifle innovation. Drawing a line between acceptable and unacceptable uses of AI is not just a bureaucratic exercise. It’s a moral and practical safeguard, meant to ensure that technology serves humanity, not the other way around.

Protecting fundamental rights

At its core, the concept of unacceptable risk is about defending the principles that underpin democratic societies: privacy, equality, freedom, and human dignity.

Without clear boundaries, AI could easily be used in ways that erode these values. By outlawing certain applications outright, the EU is reinforcing a simple truth: some technologies are too powerful, or too invasive, to be trusted without strict limits.

Creating trust in artificial intelligence

Trust is the foundation of technological adoption. When people feel that AI is being developed and used responsibly, they’re more likely to embrace it in their daily lives. Defining unacceptable risks helps build that trust.

By removing harmful or manipulative AI systems from the equation, the EU is signaling to its citizens that AI can be both advanced and ethical. This clarity also helps businesses by setting predictable rules: they know what’s off-limits, and they can innovate confidently within those boundaries.

Preventing irreversible harm

AI technology is evolving at incredible speed, often faster than society can adapt. Some systems, once deployed, can have consequences that are nearly impossible to reverse. Examples include mass surveillance, deep social manipulation, or discrimination baked into automated decision-making.

The unacceptable-risk category under the AI Act is a preventive tool. It stops the most dangerous systems before they can cause harm, rather than trying to fix damage after it's already done. In this sense, it's not just a rule. It's a form of societal insurance.
