The EU Artificial Intelligence Act (AI Act) is a comprehensive, risk-based regulation of AI technologies.
It sorts AI applications into three primary risk categories: prohibited, high-risk, and low-risk.
Prohibited AI practices include those that manipulate behavior, exploit vulnerabilities, or involve biometric surveillance without consent.
High-risk AI systems, which include applications in healthcare, law enforcement, and critical infrastructure, must meet stringent requirements such as comprehensive risk assessments, data governance, and human oversight.
Low-risk AI systems face minimal regulatory obligations, primarily focused on transparency.

This regulation has substantial implications for AI development, deployment, and compliance, especially for companies operating in or targeting the EU market. This blog summarizes the key highlights of the AI Act, compares it with US regulations, highlights its most restrictive aspects, and offers strategies American companies can use to navigate the new rules effectively.
By comparison, the US regulatory landscape for AI is more fragmented, with sector-specific guidelines issued by federal agencies such as the FTC, FDA, and NIST. For American companies operating in or targeting the EU market, key compliance strategies include investing in dedicated compliance infrastructure, leveraging regulatory sandboxes, and forming partnerships with EU entities. By understanding and adhering to the AI Act, these companies can preserve market access and continue to innovate responsibly while aligning with global standards.
Scope and Definitions
The AI Act classifies AI applications into three primary categories (a simplified code sketch follows this list):
Prohibited AI Practices: These are AI uses considered a clear threat to fundamental rights and include:
Exploitation of vulnerabilities based on age, disability, or socio-economic status.
Manipulation of human behavior circumventing free will.
Social scoring, similar to China's social credit system.
Biometric surveillance without consent.
AI systems that infer sensitive data such as political beliefs or sexual orientation.
High-Risk AI Systems: These include AI applications that could pose significant harm to health, safety, fundamental rights, and democracy, such as:
AI used in critical infrastructure (e.g., energy, transport).
AI in education and vocational training (e.g., exam grading systems).
AI in employment (e.g., CV-scanning tools).
AI in law enforcement and border control (e.g., predictive policing).
AI in healthcare (e.g., diagnostic tools).
High-risk AI systems are subject to stringent requirements, including risk assessments, data governance, logging, human oversight, and cybersecurity measures.
Low-Risk AI Systems: These are AI applications with minimal risk that require basic transparency obligations, such as disclosing AI-generated content.
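To make these tiers concrete, here is a minimal Python sketch of how an engineering team might encode the categories in an internal system inventory. The tier names, use-case labels, and mappings below are illustrative assumptions, not a legal classification under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the AI Act's categories."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LOW_RISK = "low_risk"

# Hypothetical mapping of internal use-case labels to tiers; the labels
# and assignments are examples only, not legal determinations.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening": RiskTier.HIGH_RISK,
    "exam_grading": RiskTier.HIGH_RISK,
    "spam_filtering": RiskTier.LOW_RISK,
}

def classify_use_case(use_case: str) -> RiskTier:
    # Unknown use cases default to HIGH_RISK so they get flagged for
    # manual review rather than silently under-classified.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH_RISK)

print(classify_use_case("cv_screening"))  # RiskTier.HIGH_RISK
```

Defaulting unknown use cases to the high-risk tier is a conservative design choice: it forces a human decision before any new application is treated as exempt from the stricter obligations.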
Comparison to US Regulations
The US approach to AI regulation is more fragmented, with sector-specific guidelines issued by various federal agencies:
Federal Trade Commission (FTC): Focuses on consumer protection and has issued guidelines on AI and machine learning transparency and fairness.
Food and Drug Administration (FDA): Regulates AI in medical devices and health-related applications, emphasizing safety and efficacy.
National Institute of Standards and Technology (NIST): Provides a framework for AI risk management and trustworthiness but lacks regulatory enforcement power.
Unlike the EU’s comprehensive AI Act, US regulations are decentralized and sector-specific. That approach has its own pros and cons, but a detailed comparison is beyond the scope of this blog.
High-Risk AI Systems Requirements
The EU AI Act imposes rigorous compliance obligations on high-risk AI systems, including:
Risk Assessments: Companies must conduct fundamental rights impact assessments before deploying high-risk AI systems to identify and mitigate potential harms.
Data Governance: High-quality, representative data must be used to train AI systems, with mechanisms to ensure data integrity and accuracy.
Logging and Documentation: Comprehensive logs of AI system activities and detailed documentation must be maintained to demonstrate compliance and facilitate audits (see the logging sketch after this list).
User Information: Clear and understandable information must be provided to users about the AI system’s functionality and limitations.
Human Oversight: Robust human oversight mechanisms must be in place to monitor AI system performance and intervene when necessary.
Cybersecurity: High standards of cybersecurity must be implemented to protect AI systems from malicious attacks and data breaches.
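To illustrate the logging obligation above, here is a hedged Python sketch that wraps a model-inference function so every call leaves a structured audit record. The decorator, logger name, record fields, and the CV-screening example are assumptions for illustration; actual logging requirements should be derived from the Act’s text and legal guidance.

```python
import functools
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")  # hypothetical logger name

def audited(system_name: str):
    """Wrap an inference function so each call emits an audit record."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "event_id": str(uuid.uuid4()),
                "system": system_name,
                "timestamp": time.time(),
                "inputs": repr((args, kwargs)),
            }
            try:
                result = fn(*args, **kwargs)
                record["output"] = repr(result)
                return result
            finally:
                # Log even when the call raises, so failures are auditable.
                audit_log.info(json.dumps(record))
        return wrapper
    return decorator

@audited("cv_screening_model_v1")  # hypothetical high-risk system
def score_candidate(features: dict) -> float:
    return 0.5  # placeholder for a real model call

score_candidate({"years_experience": 4})
```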
General Purpose AI (GPAI) and Foundation Models
The EU AI Act includes specific provisions for GPAI and foundation models, reflecting their wide applicability and potential risks:
Transparency Requirements: Companies must provide detailed technical documentation and summaries of training data and methodologies (a documentation sketch follows below).
Systemic Risk Assessments: Regular evaluations and adversarial testing must be conducted to identify and mitigate systemic risks.
Reporting Obligations: Serious incidents involving GPAI must be reported to the European Commission.
High-impact foundation models, characterized by their advanced complexity and performance, are subject to even stricter requirements, including comprehensive risk assessments and adversarial testing.
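The transparency requirement lends itself to a structured, machine-readable artifact. Below is a minimal sketch of what such documentation might look like as a Python data class; the schema, field names, and example values are assumptions, since the Act prescribes the content of documentation rather than a file format.

```python
import json
from dataclasses import asdict, dataclass, field

@dataclass
class ModelDocumentation:
    """Illustrative documentation record; all fields are assumptions."""
    model_name: str
    provider: str
    intended_purpose: str
    training_data_summary: str
    evaluation_summary: str
    known_limitations: list = field(default_factory=list)

doc = ModelDocumentation(
    model_name="example-gpai-7b",  # hypothetical model
    provider="Example Corp",
    intended_purpose="general-purpose text generation",
    training_data_summary="Web text and licensed corpora (summary only).",
    evaluation_summary="Internal safety and accuracy benchmark suites.",
    known_limitations=["may produce inaccurate statements",
                       "limited non-English coverage"],
)

# Serialize to JSON so the summary can be published or filed for audits.
print(json.dumps(asdict(doc), indent=2))
```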
Governance and Enforcement
The AI Act establishes a new governance framework:
AI Office: A central authority within the European Commission responsible for overseeing compliance, coordinating with national authorities, and advising on standards and best practices.
Scientific Advisory Panel: A panel of experts to evaluate GPAI models and monitor safety risks.
National Authorities: Member states’ authorities will implement and enforce the AI Act at the national level, ensuring uniform application across the EU.
In contrast, the US has no centralized AI regulatory body and relies on existing agencies for enforcement. Again, this approach has its own pros and cons, but the details are beyond the scope of this blog.
Prohibited AI Practices
The AI Act bans several AI practices deemed unacceptable due to their potential to infringe on fundamental rights:
Cognitive Behavioral Manipulation: AI systems designed to manipulate individuals' behavior in a way that undermines their autonomy.
Untargeted Biometric Surveillance: Scraping of facial images from the internet or CCTV footage without consent.
Emotion Recognition in the Workplace: AI systems that monitor and infer employees' emotional states.
Social Scoring: AI systems that score individuals based on their behavior or personal characteristics.
Predictive Policing: AI systems that predict and profile individuals for potential criminal behavior without concrete evidence.
These prohibitions are broader and more comprehensive than any AI-specific US regulations. However, some of these practices, or the intent behind them, are or may already be covered by other US laws; the details are beyond the scope of this blog.

So What?
Impact on AI and AI-Related Companies
The EU AI Act’s stringent requirements will significantly impact AI development and deployment, particularly for high-risk applications and GPAI. American companies aiming to enter or maintain a presence in the EU market must prioritize compliance to avoid penalties and market exclusion. Key impacts include:
Increased Compliance Costs: Companies will incur additional costs for risk assessments, data governance, documentation, and audits.
Innovation Constraints: Stricter regulations may slow down innovation and deployment of AI technologies, particularly for high-risk applications.
Market Access: Non-compliance could lead to restricted access to the EU market, impacting revenue and growth opportunities. Smaller companies in particular will face significant compliance-cost burdens.
Maneuvering Around the AI Act
To navigate the EU AI Act’s stringent requirements, American companies can:
Invest in Compliance Infrastructure: Establish dedicated compliance teams to ensure adherence to the AI Act’s requirements, including technical documentation, risk assessments, and transparency obligations.
Leverage Regulatory Sandboxes: Utilize the AI Act’s regulatory sandboxes to test and validate AI systems in real-world conditions while working closely with EU regulators to ensure compliance.
Collaborate with EU Partners: Form partnerships with European companies and research institutions to gain insights into local regulatory expectations and leverage their expertise in navigating the AI Act.
Adopt Best Practices: Implement best practices for AI development, such as ethical AI frameworks, robust data governance, and human-in-the-loop systems, to mitigate risks and enhance trustworthiness.
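As a concrete example of the human-in-the-loop practice just mentioned, here is a minimal Python sketch that routes low-confidence model outputs to a human reviewer instead of acting on them automatically. The confidence threshold and queue mechanics are illustrative assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # illustrative; set per your own risk assessment

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "model" or "human"

def decide(label: str, confidence: float, review_queue: list) -> Decision:
    """Accept confident model outputs; escalate the rest to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(label, confidence, decided_by="model")
    review_queue.append((label, confidence))  # a human resolves these later
    return Decision("pending_human_review", confidence, decided_by="human")

queue: list = []
print(decide("approve", 0.92, queue))  # decided automatically
print(decide("approve", 0.40, queue))  # escalated for human oversight
```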
Conclusion
The EU AI Act represents a significant regulatory shift with profound implications for AI companies worldwide. By understanding the Act’s requirements and proactively adapting to its standards, American AI companies can ensure compliance, maintain market access, and continue to innovate responsibly in the AI domain. While the US regulatory landscape remains fragmented, the EU AI Act may be a sign of things to come for US AI regulation as well.
By staying informed and prepared, AI technologists and entrepreneurs can navigate these regulatory challenges and capitalize on the opportunities presented by the evolving AI landscape. Please note: this is a summary of the main highlights of the EU AI Act; each individual or company should do their own research and understand the Act in depth.
Cluedo Tech can help you with your AI strategy, discovery, development, and execution. Request a meeting.