
The EU AI Act is a comprehensive regulation that aims to ensure the safe and trustworthy development of artificial intelligence (AI) within the European Union. The Act categorizes AI systems into four risk levels, ranging from minimal to unacceptable risk.
These risk levels are determined by the potential impact of an AI system on human life, health, safety, and fundamental rights. The categorization is crucial in understanding the regulatory requirements for each type of AI system.
The EU AI Act requires developers to assess the risk level of their AI systems and implement appropriate measures to mitigate any potential harm. This includes conducting risk assessments, implementing safety features, and providing transparency about the AI system's decision-making processes.
Understanding the risk levels and regulatory requirements of the EU AI Act is essential for businesses and developers to avoid non-compliance and potential fines.
What Is the EU AI Act?
The EU AI Act is a comprehensive regulation addressing the risks of artificial intelligence through a set of obligations and requirements. Its official purpose is to ensure the proper functioning of the EU single market by setting consistent standards for AI systems across EU member states.
The AI Act is part of a wider emerging digital rulebook in the EU that regulates different aspects of the digital economy. This rulebook includes the General Data Protection Regulation, the Digital Services Act, and the Digital Markets Act.
The AI Act covers AI systems that are "placed on the market, put into service or used in the EU." This means that in addition to developers and deployers in the EU, it also applies to global vendors selling or otherwise making their system or its output available to users in the EU.
There are some exceptions to the AI Act, including AI systems exclusively developed or used for military purposes, scientific research, and free and open source AI systems and components.
Risk Levels and Classifications
The EU AI Act categorizes AI systems into four risk levels: unacceptable, high, limited, and low. These categories determine the level of oversight and regulation each system is subject to.
The unacceptable-risk category includes AI systems deemed to pose a threat to individuals and violate EU fundamental rights and values. Examples of such systems include social scoring systems, real-time biometric identification, and systems that manipulate behavior.
High-risk AI systems pose a high risk to the safety, fundamental rights, and freedoms of individuals or society. Examples include systems used in medical devices, safety components of products, and management of critical infrastructure.
Limited-risk AI systems may cause confusion or deception for users. Chatbots and deepfakes are examples of such systems, which are subject to transparency obligations.
Minimal- or no-risk AI systems face no new obligations under the Act. Spam filters and AI used in video games are common examples.
Here are the four risk levels and their corresponding characteristics:
- Unacceptable risk: prohibited outright (e.g. social scoring and manipulative systems).
- High risk: permitted only if strict requirements are met (e.g. AI in medical devices or critical infrastructure).
- Limited risk: subject to transparency obligations (e.g. chatbots and deepfakes).
- Minimal or no risk: no new obligations (e.g. spam filters).
The EU AI Act requires high-risk AI systems to meet certain requirements, including a risk management system, accuracy, robustness, and cybersecurity, as well as transparency and provision of information to users.
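To make this sliding scale concrete, here is a minimal, illustrative Python sketch that maps each risk tier to the kind of obligation described above. The `RiskLevel` enum and `obligations_for` helper are hypothetical names used only for illustration; they are not part of the Act or of any official tooling.

```python
from enum import Enum


class RiskLevel(Enum):
    """Illustrative risk tiers mirroring the EU AI Act's four categories."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Hypothetical mapping from risk tier to the obligations summarized above.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["prohibited outright"],
    RiskLevel.HIGH: [
        "risk management system",
        "accuracy, robustness and cybersecurity",
        "transparency and provision of information to users",
    ],
    RiskLevel.LIMITED: ["transparency obligations (e.g. disclose AI-generated content)"],
    RiskLevel.MINIMAL: ["no new obligations"],
}


def obligations_for(level: RiskLevel) -> list[str]:
    """Return the illustrative obligations attached to a risk tier."""
    return OBLIGATIONS[level]


if __name__ == "__main__":
    for level in RiskLevel:
        print(f"{level.value}: {', '.join(obligations_for(level))}")
```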
Consequences and Governance
Organizations operating in the EU may face inquiries or enforcement actions in multiple jurisdictions simultaneously due to the AI Act's complex framework for supervision and enforcement.
This stands in contrast to the GDPR, which generally allows organizations to deal with a single lead supervisory authority.
The AI Act's use-case-based approach to regulation struggles to keep up with recent innovation in AI, in particular generative AI systems and foundation models more broadly.
This gap has resulted in a vague definition of "general purpose AI" and a reliance on future legislative adaptations for specific requirements.
Providers of general-purpose AI will be subject to obligations similar to those of high-risk AI systems, including model registration, risk management, and data governance.
The European Parliament's proposal defines specific obligations for different categories of models, including transparency obligations and requirements to prevent the generation of illegal content.
Timing
The AI Act will have a significant impact on the way AI is developed and used in the EU. The act will come into force 20 days after its publication in the Official Journal of the EU, which is expected in June 2024.
This means that specific provisions will take effect over the following three years. Here are the key stages and possible relevant dates after entry into force:
- Six months (December 2024): Restrictions on prohibited AI practices will take effect.
- 12 months (June 2025): Regulations for general-purpose AI will be enforced.
- 24 months (June 2026): Requirements for high-risk AI systems will come into force.
- 36 months (June 2027): Rules for high-risk AI systems used as safety components in products will be implemented.
These dates are crucial for businesses and organizations that develop or use AI systems, as they will need to comply with the new regulations to avoid penalties.
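Since the staggered dates above are simple month offsets from whenever the Act enters into force, the small Python sketch below illustrates that date arithmetic. The entry-into-force date used here is a placeholder for illustration only; the actual date depends on publication in the Official Journal.

```python
from datetime import date


def add_months(d: date, months: int) -> date:
    """Add a number of calendar months to a date (day clamped to the 1st for simplicity)."""
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)


# Placeholder entry-into-force date, for illustration only.
entry_into_force = date(2024, 6, 1)

# Month offsets for the key stages listed above.
milestones = {
    "Prohibited AI practices": 6,
    "General-purpose AI rules": 12,
    "High-risk AI system requirements": 24,
    "High-risk AI in product safety components": 36,
}

for name, offset in milestones.items():
    print(f"{name}: from {add_months(entry_into_force, offset):%B %Y}")
```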
Governance
The EU AI Act introduces a complex framework for supervision and enforcement, involving authorities at both the EU and national levels.
As a result, organizations active in multiple EU countries may face inquiries or enforcement actions in several jurisdictions at once, and may need to engage with multiple supervisory authorities rather than a single lead authority as under the GDPR.
Regulated Behavior and Fines
The AI Act introduces significant fines for noncompliance, with maximum penalties that scale with both the nature of the violation and the size of the organization.
Infractions involving prohibited AI practices can incur fines of up to €35 million or 7% of global annual turnover; other breaches of the Act's obligations may result in penalties of up to €15 million or 3% of global annual turnover; and providing false or misleading information to authorities can lead to fines of up to €7.5 million or 1.5% of global annual turnover. In each case, the higher of the two amounts generally applies.
It's worth noting that the AI Act does not contain a private right of action for individuals, unlike the GDPR.
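As a rough illustration of how these caps scale with company size, the Python sketch below computes the applicable maximum fine for each tier as the higher of the fixed amount and the turnover-based amount, the rule that applies to most companies. The function name and the example turnover figure are hypothetical.

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Maximum fine for a tier: the higher of the fixed cap and the turnover-based cap."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)


# Fine tiers described above (fixed cap in EUR, share of global annual turnover).
TIERS = {
    "prohibited AI practices": (35_000_000, 0.07),
    "other obligations": (15_000_000, 0.03),
    "false or misleading information": (7_500_000, 0.015),
}

# Hypothetical company with EUR 2 billion in global annual turnover.
turnover = 2_000_000_000

for violation, (cap, pct) in TIERS.items():
    print(f"{violation}: up to EUR {max_fine(cap, pct, turnover):,.0f}")
```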
Regulated High-Risk AI Systems
High-risk AI systems are a key area of focus for the EU AI Act. These systems are defined as those that could negatively affect the health and safety of people, their fundamental rights, or the environment.
An AI system is classified as high-risk in one of two ways: it is a safety component of a product covered by existing EU product-safety legislation, such as medical devices, lifts, vehicles, or machinery, or it falls into one of the sensitive use cases listed in Annex III of the Act.
Some examples of high-risk AI systems include:
- Biometric and biometrics-based systems (e.g. remote biometric identification, categorization of persons and emotion recognition systems)
- Management and operation of critical infrastructure (e.g. road traffic, energy supply or digital infrastructure)
- Access to essential private and public services and benefits (e.g. credit-scoring, risk assessments in health insurance and dispatching emergency services)
If an AI system falls under this classification, it must meet specific requirements, including risk management systems, accuracy, robustness, and cybersecurity, data and data governance, human oversight, transparency and provision of information to users, record keeping, and technical documentation.
The EU also plans to maintain a publicly accessible online register listing deployed high-risk AI systems and use cases, as well as foundation models on the market.
Prohibited Behavior
AI systems that pose an unacceptable risk are prohibited outright. These include systems that manipulate people through subliminal messaging and stimuli, or that exploit vulnerabilities such as socioeconomic status, disability, or age.
Social scoring systems, which evaluate and treat people based on their social behavior, are also banned.
Real-time remote biometric identification in publicly accessible spaces, such as live facial recognition, is also prohibited, along with certain other biometric and law-enforcement use cases.
Big Tech Lobbying to Water Down Rules
Big Tech companies are already lobbying to water down Europe's AI rules, as TIME reported in its coverage of the EU Artificial Intelligence Act, which cited CSET's Helen Toner.
The episode underscores how actively industry is working to shape the regulation to its advantage, and why these lobbying efforts are worth monitoring as the Act is implemented.
Understanding the Act
The EU AI Act is a regulatory framework designed to govern AI systems based on their risk level. The Act categorizes AI systems into four risk categories.
The EU AI Act identifies four risk categories: Unacceptable risk, High risk, Limited risk, and Minimal or no risk. These categories determine the level of compliance obligation for AI systems.
AI systems in the Unacceptable risk category are banned outright, while those in the High risk category carry the heaviest compliance obligations of any permitted systems.
The EU AI Act uses a sliding scale of risk to categorize AI systems, with compliance obligations decreasing at each step down the scale. This approach allows for more tailored regulation of AI systems.
Article-Specific Information
The EU AI Act establishes a risk-based approach to regulating AI systems, with four risk levels: unacceptable, high, limited, and minimal.
The unacceptable risk level is reserved for AI systems that pose a significant and immediate risk to fundamental rights, such as AI systems that can cause physical harm or enable mass surveillance.
The high risk level is assigned to AI systems that can cause significant harm or have a significant impact on individuals or society, such as AI systems used in healthcare or finance.
AI systems with limited risk are those that can cause limited harm or have a limited impact on individuals or society, such as AI systems used in customer service or marketing.
The EU AI Act requires developers to conduct a risk assessment for their AI systems, and to provide information about the risk level of their system to users and other stakeholders.
Incident Collection and Harm
AI harms can occur at individual, national, and societal levels. Real-world harms caused by AI technologies are widespread.
To better understand AI harms, policymakers need to track and analyze them. This improves our understanding of the variety of harms and the circumstances that lead to their occurrence once AI systems are in use.
As policymakers collect and analyze incidents, they can identify patterns and areas where AI harms are most likely to occur.
Understanding Harms
Understanding harms is crucial to grasping the impact of AI on individuals and society.
Policymakers need to understand the different types of harm that various AI applications might cause at the individual, national, and societal levels, and real-world harms caused by AI technologies are already widespread and varied.
Tracking and analyzing these harms helps identify the circumstances that lead to their occurrence and, in turn, improves our grasp of AI's impact on society.
Incident Collection: The Great Experiment
An observational study of the "Great AI Experiment" defines criteria for effective AI incident collection, highlighting the importance of collecting data on AI incidents to improve AI systems and policy.
Researchers have identified three potential reporting models: mandatory, voluntary, and citizen reporting. Mandatory reporting would require developers to report incidents, voluntary reporting relies on developers to self-report, and citizen reporting allows members of the public to flag incidents they observe.
Mandatory reporting is considered more reliable, but it can be burdensome for developers. Voluntary reporting, on the other hand, may not capture all incidents, but it is less intrusive.
The study aims to understand the tradeoffs between these reporting models and identify the most effective approach, which has significant implications for how AI systems are developed and deployed.
Frequently Asked Questions
What is meant by high risk AI under the AI Act?
High-risk AI refers to systems that could significantly affect people's health, safety, or fundamental rights, such as AI used in critical infrastructure or in educational settings where it can shape a person's life opportunities.
What is the EU risk based approach to AI?
The EU takes a risk-based approach to AI, where stricter rules apply to systems that pose higher risks. This means the level of regulation depends on the potential harm an AI system can cause.
Sources
- https://www.skadden.com/insights/publications/2024/06/quarterly-insights/the-eu-ai-act-what-businesses-need-to-know
- https://www.mhc.ie/hubs/the-eu-artificial-intelligence-act/eu-ai-act-risk-categories
- https://www.euaiact.com/annex/3
- https://www.trail-ml.com/blog/eu-ai-act-how-risk-is-classified
- https://cset.georgetown.edu/article/the-eu-ai-act-a-primer/