Last Updated on February 23, 2024

After months of negotiations, the EU reached agreement on its “global first” Artificial Intelligence Act (the EU AI Act). The new legislation seeks to ensure that AI systems are trustworthy and “respect fundamental rights and EU values.”

This post answers nine top questions about the EU AI Act, including its predicted impacts on AI users and developers in the US and worldwide.

 

Q1: What is the EU AI Act?

The EU AI Act establishes a comprehensive regulatory framework to reduce today’s AI risks while bolstering AI innovation and investment within the EU. The Act’s provisions will take full effect roughly two years after the law enters into force.

Key precepts of the legislation include:

  • Banning deployment within the EU of AI systems that pose an “unacceptable risk”
  • Defining different levels of restrictions on AI systems depending on their risk category
  • Establishing “guardrails” for deployment of foundation and generative AI models, including compliance with EU copyright law, disclosure requirements for training data, and mandates for user documentation
  • Providing legal support for continued AI innovation
  • Setting forth some of the largest noncompliance penalties of any EU statute

 

Q2: What organizations will the EU AI Act apply to?

The AI Act will spread accountability across the entire AI value chain, covering AI providers and deployers as well as product manufacturers, distributors, and importers. The Act also applies to AI providers and users outside the EU if an AI system’s output will be used within the EU.

 

Q3: What are the EU AI Act’s four risk levels?

The core of the EU AI Act is its risk-based approach. It defines four levels of AI risk and associated provider and user obligations based on specific use cases.

The four AI risk levels are:

  1. Unacceptable risk—Seen as posing a clear threat to basic human rights and EU values. AI systems and use cases in this category are banned outright.
  2. High risk—AI models in this category will face significant compliance obligations around risk mitigation, usage oversight, data governance, transparency and documentation for users, security and privacy, robustness, and accuracy of results. High-risk AI use cases will also be subject to “conformity assessments” and “fundamental rights impact assessments” to evaluate their degree of compliance and identify concerns.
  3. Limited risk—AI chatbots, biometric categorization systems, deepfake tools, and some emotion recognition systems fall into this category. Compliance requirements will include disclosing to users that they are interacting with an AI system, and flagging AI-generated or AI-manipulated content.
  4. Minimal/no risk—These systems, such as spam filters and recommendation engines, remain essentially unregulated. But while many AI systems pose little risk, they still need to be evaluated and classified as such.

 

Q4: What classes of AI systems are banned for unacceptable risk?

AI systems banned for creating unacceptable risk to fundamental rights and EU values include:

  • Any AI system that manipulates human behavior to limit or suppress free will
  • Any AI system that exploits human vulnerabilities related to age, disability, socioeconomic status, etc.
  • Biometric categorization systems that rely on “sensitive” characteristics like race, sexual orientation, or beliefs
  • Emotion recognition in workplace or educational settings
  • Social scoring/profiling based on personal characteristics or social behavior
  • Creation of facial recognition databases by scraping facial images from the internet or CCTV footage
  • Specific “predictive policing” applications

The EU AI Act permits law enforcement to use some otherwise banned AI applications, such as biometric identification in public spaces. Approved purposes in this highest-risk category include preventing terrorist threats, conducting targeted searches for abduction victims, and tracking down suspects in serious crimes.

The new law will not apply to AI systems used solely for military or defense purposes, AI used only for research or still in the R&D phase, or “non-professional” AI uses.

 

Q5: How does the EU AI Act limit generative AI?

To help address the societal risks associated with the rapid development and proliferation of general-purpose AI models and systems, the EU AI Act defines transparency requirements. These include:

  • Technical documentation for users
  • Compliance with EU copyright law
  • Detailed summaries of the content used to train the models

Some “high-impact” general-purpose AI models will also be subject to model evaluations, stronger cybersecurity requirements, risk assessment and risk management protocols, adversarial testing, and reporting following “serious incidents.”
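
Providers preparing for these obligations may want to capture the transparency items in a machine-readable record from day one. The sketch below is a minimal Python illustration; the ModelCard structure and its field names are our own assumptions for illustration, not terminology from the Act.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical record tracking the Act's transparency items.

    All field names are illustrative assumptions, not terms from the Act.
    """
    model_name: str
    provider: str
    technical_documentation_url: str   # user-facing technical documentation
    copyright_policy_url: str          # evidence of EU copyright-law compliance
    training_data_summary: str         # detailed summary of training content
    high_impact: bool = False          # True would trigger the extra obligations above
    serious_incidents: list = field(default_factory=list)  # reportable incidents

card = ModelCard(
    model_name="example-gpt",
    provider="Example AI GmbH",
    technical_documentation_url="https://example.com/docs/example-gpt",
    copyright_policy_url="https://example.com/copyright-policy",
    training_data_summary="Licensed corpora, public web text, and open-source code.",
)
print(card.model_name, "documented at", card.technical_documentation_url)
```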

 

Q6: What are the sanctions for non-compliance with the EU AI Act?

As with GDPR fines, financial penalties for violations of the EU AI Act will be calculated as a percentage of an organization’s worldwide turnover for the prior financial year or as a fixed amount, whichever is higher. For larger businesses, these penalties are:

  • €35 million or 7% for violations involving the use of banned AI applications
  • €15 million or 3% for compliance violations of the Act’s obligations
  • €7.5 million or 1.5% for supplying regulators with incorrect or misleading information about an AI system

These are some of the harshest fines imposed by any EU law, surpassing even the GDPR’s. To keep penalties proportionate, administrative fines against SMBs and startups will be capped at lower levels.
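
As a worked illustration of the “whichever is higher” rule, here is a minimal Python sketch. The tier figures mirror the list above; the company turnover is a made-up example, and the sketch ignores the proportional caps for SMBs and startups.

```python
# Penalty tiers: (fixed amount in EUR, share of prior-year worldwide turnover).
# Figures mirror the list above; SMB/startup caps are not modeled here.
TIERS = {
    "banned_ai_use": (35_000_000, 0.07),
    "obligation_violation": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.015),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the higher of the fixed amount and the turnover-based percentage."""
    fixed, pct = TIERS[violation]
    return max(fixed, pct * annual_turnover_eur)

# Hypothetical company with EUR 2 billion prior-year turnover:
# 7% of 2B = EUR 140M, which exceeds the EUR 35M floor.
print(f"EUR {max_fine('banned_ai_use', 2_000_000_000):,.0f}")  # EUR 140,000,000
```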

The new law also gives EU citizens the right to file complaints if they are negatively affected by an AI system.

 

Q7: How does the EU AI Act support AI innovation and investment?

EU lawmakers wanted to ensure that SMBs could develop AI systems without facing unfair competition from the large enterprises that currently control much of the AI value chain. This has led to innovation-friendly provisions for “regulatory sandboxes” and real-world testing, which let providers develop and train AI models under supervision before they enter the market.

The new law also takes steps to reduce its administrative burden on smaller AI providers. The legislators’ objective has been to strike a balance between controlling AI’s rapid development and wide-scale impacts and the need to support AI innovation, without forcing SMBs out of the market with overly stringent compliance requirements.

 

Q8: How will the EU AI Act impact AI users and providers?

The EU AI Act has significant implications for anyone developing AI systems, including the biggest players like Google, OpenAI, and Amazon.

The new requirements will force many AI innovators to get a handle on model management, including creating repositories for their models. This is especially relevant in the financial sector, where AI models and machine learning are increasingly important in areas like assessing consumer credit risk.

The statistical models that now underpin and operate our financial infrastructure will also come under regulation in the high-risk category. Business uses of biometric identification, such as employee management, could also be deemed high-risk. It remains to be seen how the new legislation will affect AI systems used for customer experience management, fraud detection, and other forms of pattern analysis.
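
One practical first step toward the model management described above is a simple model inventory that records each system’s owner, use case, and EU AI Act risk tier. The sketch below is a minimal, hypothetical Python illustration; the registry structure is our own assumption, with tier names mirroring the four risk levels from Q3.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity + fundamental rights assessments
    LIMITED = "limited"            # transparency/disclosure duties
    MINIMAL = "minimal"            # essentially unregulated, but still classified

@dataclass
class RegisteredModel:
    name: str
    owner: str
    use_case: str
    risk_tier: RiskTier

# Hypothetical entries for a financial-services model inventory.
registry = [
    RegisteredModel("credit-scorer-v3", "risk-team", "consumer credit risk", RiskTier.HIGH),
    RegisteredModel("support-chatbot", "cx-team", "customer service chat", RiskTier.LIMITED),
    RegisteredModel("spam-filter", "it-ops", "email filtering", RiskTier.MINIMAL),
]

for model in registry:
    print(f"{model.name} ({model.use_case}): {model.risk_tier.value}")
```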

 

Q9: Can regulations adequately protect businesses or individuals from AI risk?

Is Jeff Bezos correct that AI is “more likely to save us than destroy us”? Or does AI present humanity with potentially deadly risks that could manifest far faster than we can respond?

The answer largely depends on which AI capabilities are involved. Currently, the most advanced AI relies on machine learning, also called statistical learning. Large language models (LLMs) like the one behind ChatGPT are deep neural networks: human-engineered statistical models trained on enormous datasets and targeted at specific problem domains.

This rapidly proliferating “second wave” AI excels at classification and prediction but cannot “reason” beyond its programming. Second wave AI already presents significant risks in areas like consumer privacy, human rights, political oppression, disinformation/misinformation, and various forms of harm caused by incorrect results or unpredicted destructive actions.

But AI’s ultimate goal is “human-level AI,” aka “thinking machines.” At this “third wave” level, a digital system can self-educate, interpret in context, and transcend its initial programming to perform “mental” tasks at or beyond the human level.

Will superintelligent AI benefit humanity and the planet? Or will it overwhelm, enslave, or destroy us?

Many believe that human-level AI will be “walking the streets” within twenty years. Can regional regulations like the EU AI Act address current and emerging risks in the meantime? Or is unified global regulation focused on preserving human rights the only viable answer?

 

Next steps

The EU AI Act will apply in full two years after its entry into force, most likely in 2026. Bans on prohibited AI systems are slated to take effect first, six months after entry into force, with transparency requirements for general-purpose AI following in 2025.

How should US businesses respond to this EU AI legislation? While many questions remain, now is the time for AI users to begin setting up a best-practice AI governance framework. This can help ensure responsible development and deployment of AI company-wide while reducing AI risks—including future compliance risks.

If your organization would benefit from a practical roadmap to effectively address the interconnected AI, cybersecurity, and regulatory risks you face today and in the future, contact CBIZ Pivot Point Security.