
    European AI Act: Key Aspects You Need to Know

    Summary: In this article, Illia Shenheliia, associate partner at Aurum, breaks down the key points of the European Artificial Intelligence (AI) Act that businesses need to know to prepare and stay compliant.

    Authors:

    Illia Shenheliia

    Associate partner


    The European Union (EU) has recently made significant progress in regulating artificial intelligence (AI) with the introduction of the AI Act, which entered into force on August 1, 2024. In this article, Illia Shenheliia, associate partner at Aurum, breaks down the key points of the AI Act that businesses need to know, both to prepare and to stay compliant.

    What is the EU AI Act?

    The EU AI Act is the first comprehensive law aimed at regulating AI technologies within the European Union. The Act categorises AI systems based on their risk levels and imposes varying obligations on businesses depending on the risk associated with their AI tools.

    The EU AI Act primarily applies to providers (those who develop AI systems) and deployers (those who use them) in the EU, meaning that even businesses integrating third-party AI solutions may be regulated. This law is part of the EU’s broader digital strategy, which aims to ensure that AI technologies are used safely, ethically, and in a manner that respects fundamental rights.

    Key Features of the AI Act

    Risk-Based Classification

    The cornerstone of the AI Act is its risk-based approach, which focuses on the level of risk each AI system poses to individuals and society. This approach ensures that regulations are proportionate to the potential harm an AI system could cause, with stricter rules for higher-risk applications and lighter or no obligations for lower-risk systems. All AI systems are categorised into three tiers based on their risk level (an illustrative sketch follows the list below):

    • Limited and Minimal Risk: AI with limited or minimal risk faces fewer obligations. In our opinion, the majority of AI systems or use cases will fall within this risk category. This group typically includes virtual assistants, spam filters, automation tools, and entertainment solutions.

    • High-Risk AI: Systems that significantly impact people’s lives, such as those used in critical infrastructure, education, employment, migration, or law enforcement, are classified as high-risk. Businesses using these systems must comply with strict requirements, including rigorous testing, documentation, transparency, and human oversight.

    • Unacceptable Risk: AI systems that pose a clear threat to safety, livelihoods, or rights are outright banned. Examples include AI used for social scoring or manipulation of behaviour that could cause harm.
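
    To make the three tiers easier to reason about, the sketch below models them as a simple enumeration with a hypothetical mapping from example use cases to tiers. It is purely illustrative: the use-case names and the tier assignments are our assumptions, and classifying a real system requires legal analysis of its actual purpose and context under the Act.

```python
# Illustrative only, not legal advice: the AI Act's risk tiers as a simple enum.
# The example use cases and their tier assignments are assumptions.
from enum import Enum


class RiskTier(Enum):
    MINIMAL_OR_LIMITED = "limited/minimal risk"  # e.g. spam filters, virtual assistants
    HIGH = "high risk"                           # e.g. hiring, education, law enforcement
    UNACCEPTABLE = "unacceptable risk"           # banned, e.g. social scoring


# Hypothetical mapping for demonstration; a real assessment depends on the
# system's actual purpose and context of use.
EXAMPLE_USE_CASES = {
    "spam_filter": RiskTier.MINIMAL_OR_LIMITED,
    "cv_screening_for_hiring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}


def triage(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_USE_CASES[use_case]


for case in EXAMPLE_USE_CASES:
    print(f"{case}: {triage(case).value}")
```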

    Transparency and Documentation

    Transparency requires informing users when they are engaging with AI, particularly in non-obvious contexts. Transparency also extends to cases where content is generated by an AI system. Companies must maintain clear records and documentation for high-risk AI systems, including detailed descriptions of the system’s purpose, design, and functionality, as well as a risk management plan.
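
    As a minimal illustration of the disclosure idea, the sketch below attaches a plain-language notice to machine-generated output. The notice wording and function name are our assumptions; the Act does not prescribe this exact mechanism.

```python
# A minimal sketch of the transparency obligation: label AI-generated content
# so users know it came from an AI system. The notice text is an assumption.
AI_DISCLOSURE = "Notice: this content was generated by an AI system."


def with_disclosure(generated_text: str) -> str:
    """Attach an AI-disclosure notice to machine-generated content."""
    return f"{AI_DISCLOSURE}\n\n{generated_text}"


print(with_disclosure("Here is a draft summary of your contract..."))
```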

    Compliance and Accountability

    The AI Act introduces stringent requirements to ensure that businesses remain accountable for their high-risk AI systems. Companies that develop or deploy high-risk AI must implement robust compliance measures. This includes creating internal policies and processes to manage AI-related risks, regularly testing and monitoring AI systems for safety and accuracy, and ensuring that these systems operate within the legal framework. Additionally, companies must ensure human oversight of AI operations to prevent any unintended harmful consequences.

    Data and Privacy Protection

    Based on our experience, AI systems frequently process personal data. The AI Act emphasises the importance of protecting personal data used in AI systems, aligning with the General Data Protection Regulation (GDPR). Businesses must ensure that AI systems using personal data are designed and operated with privacy in mind, incorporating techniques such as data minimisation and anonymisation.

    General-Purpose AI Models

    The AI Act draws particular attention to general-purpose AI models. These AI systems are designed to perform a wide range of tasks and can be adapted to various applications across different sectors, posing unique regulatory and legal challenges due to their broad applicability. If a general-purpose AI model poses systemic risk, the AI Act imposes more stringent rules on it.

    What Businesses Need to Do

    1. Assess Your AI System: Identify the risk level category of your AI system. If it falls within the limited or minimal risk category, the regulatory requirements will likely be minimal. If it falls within the high-risk category, prepare for thorough documentation and compliance measures. An AI system with an unacceptable risk level is prohibited and must be modified before being brought to market. In the case of a general-purpose AI system, it is also crucial to determine whether it carries systemic risk.

    2. Review and Update Compliance Programs: Ensure that your existing compliance frameworks are equipped to handle the new requirements. This might include appointing new roles or updating processes.

    3. Invest in Transparency: Transparency is not just a regulatory requirement; it’s also key to building trust with consumers. Make sure your AI systems are explainable and that users are informed when they interact with AI.

    4. Prepare for Audits and Inspections: The AI Act allows for regular audits and inspections of AI systems, particularly those classified as high-risk. Being prepared for these will minimise disruption.

    Liability

    The AI Act establishes a range of penalties and corrective measures to ensure compliance with its regulations. Businesses that fail to adhere to the requirements may face significant consequences, which include the following:

    • Financial Penalties: Fines can reach up to €35 million or 7% of the company’s total worldwide annual turnover, whichever is higher, depending on the severity of the violation (a worked example follows this list). In addition to one-time fines, ongoing violations may incur daily fines until the company rectifies the issue.

    • Suspension of AI Systems: Regulatory authorities have the power to suspend or restrict the use of AI systems that do not comply with the Act.

    • Prohibition of Market Access: In severe cases, companies may be banned from placing their AI systems on the market or providing them to users within the EU until compliance is achieved.

    • Mandatory Remedial Actions: Companies may be required to take specific corrective measures to address non-compliance. This can include enhancing safety features, improving transparency, or modifying the AI system to align with regulatory standards.

    • Increased Scrutiny: Organisations found in violation of the AI Act may face heightened scrutiny from regulatory bodies in the future, leading to more frequent audits and inspections of their AI systems.
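
    To make the headline ceiling concrete, here is a short worked example, assuming a hypothetical company with €1 billion in worldwide annual turnover: 7% of turnover is €70 million, which exceeds the €35 million fixed cap, so the higher figure applies.

```python
# Worked example of the headline fine ceiling: the higher of EUR 35 million
# and 7% of total worldwide annual turnover. The turnover figure is hypothetical.
FIXED_CAP_EUR = 35_000_000
TURNOVER_SHARE = 0.07  # 7%


def max_fine(annual_worldwide_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and 7% of turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_worldwide_turnover_eur)


print(max_fine(1_000_000_000))  # 70000000.0, i.e. a EUR 70 million ceiling
```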

    Beyond formal penalties, non-compliance can result in reputational harm, leading to a loss of trust from consumers and business partners. This can have long-term implications for a company’s market position and profitability.

    Timelines

    The AI Act has a phased implementation timeline to allow businesses adequate time to prepare for compliance. Here is a brief overview of the key dates, followed by a short sketch that encodes them as data:

    1. Entry into Force: The AI Act officially became law in August 2024.

    2. Six Months Post-Entry (Early 2025): Prohibitions on AI systems deemed to present unacceptable risk will come into force.

    3. Twelve Months Post-Entry (Mid 2025): Requirements for general-purpose AI models will begin to apply.

    4. Twenty-Four Months Post-Entry (2026-2027): The full compliance requirements for high-risk AI systems and other critical provisions are expected to come into effect.
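
    For planning purposes, the phased dates can be kept as simple data and queried, as in the sketch below. The specific calendar dates are our reading of the published timeline and should be confirmed against the official text of the Act.

```python
# The phased timeline as data, with a helper that lists which milestones are
# already in force on a given date. Dates reflect our reading of the published
# timeline; verify specific deadlines against the Act itself.
from datetime import date

MILESTONES = {
    date(2024, 8, 1): "Entry into force of the AI Act",
    date(2025, 2, 2): "Prohibitions on unacceptable-risk AI systems apply",
    date(2025, 8, 2): "Requirements for general-purpose AI models apply",
    date(2026, 8, 2): "Core obligations for high-risk AI systems apply",
}


def milestones_in_force(on: date) -> list[str]:
    """List the milestones that have taken effect on or before a given date."""
    return [desc for start, desc in sorted(MILESTONES.items()) if start <= on]


print(milestones_in_force(date(2025, 9, 1)))
```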

    Conclusions and Remarks

    The AI Act marks a significant development in the regulation of artificial intelligence. While the Act introduces new challenges, it also offers an opportunity for businesses to lead in the ethical and responsible use of AI. By understanding the requirements and proactively addressing them, companies can navigate this new regulatory landscape with confidence.

    The Aurum team is here to help you understand and comply with the requirements of the AI Act. We offer comprehensive legal support, from assessing the risk levels of your AI systems to implementing compliance strategies that align with the latest regulations. Whether you need assistance with documentation, risk management, or ensuring that your AI technologies meet all legal standards, our team is equipped to guide you through every step of the process.
