The EU AI Act: What to Expect as New Legislation Takes Effect From August 2024

A landmark piece of global AI regulation, the EU’s trail-blazing AI Act, will come into effect on August 1, 2024. The legislation will affect all companies using artificial intelligence, regardless of whether their algorithms are in development or already deployed. Here’s what you need to know about the EU AI Act, and what it will mean for AI in Europe and beyond.

The EU AI Act is hailed as the world’s first law to govern artificial intelligence based on the risk it poses. It was formally published in the Official Journal of the European Union in July 2024 and establishes a baseline for how governments can handle the complex and rapidly changing arena of artificial intelligence.

The Act creates four risk categories for AI systems: no risk, minimal risk, high risk, and prohibited. It sets out a detailed, structured schema for how AI in each category must be supervised.

Specific regulatory requirements and compliance deadlines are tied to each risk category. Because no-risk systems pose no significant risks to rights or public safety, they are mostly exempt from regulation. The bulk of AI companies operate minimal-risk systems, which face light-touch regulation intended to deliver a degree of accountability and transparency without stifling innovation.

High-risk systems face strict rules aimed at ensuring safety, fairness, and transparency. That covers AI in vital sectors such as health care, law enforcement, and critical infrastructure. These systems must meet specific requirements, such as providing detailed documentation of training data sources and evidence of human oversight of outputs.

Finally, the Act lists a number of AI practices deemed dangerous or ethically unacceptable enough to be banned under any circumstances. These include AI programs designed to manipulate users’ behavior, as well as practices such as scraping the internet for facial images to build facial recognition databases.

To prevent harm and protect fundamental rights, the Act also sets deadlines for phasing out or banning specific practices beginning in February 2025. This categorization enables a proportionate level of regulation, balanced between enabling innovation and the monitoring needed in the name of public safety.

  • No Risk: AI systems in this category are assessed to present no known risk to individuals or the public. Such technologies tend to operate in contexts where the potential for harm is limited or non-existent, and they are built for benign purposes. As a result, the regulatory framework of the EU AI Act largely does not cover them.
  • Examples might include AI applied to trivial data analysis or to low-level automation that handles no sensitive personal information and makes no decisions with significant impacts. The Act exemplifies this approach by exempting businesses using no-risk AI from its stricter requirements, so they can keep innovating without a heavy compliance burden.
  • Minimal Risk: This tier covers about 85 percent of AI companies: those using AI for applications that pose no significant threat to people’s health, safety, or fundamental rights. Examples include recommendation systems and customer-support chatbots that neither make mission-critical decisions nor process sensitive information. Regulation of minimal-risk AI is soft-touch, focusing on basic accountability and transparency.
  • Minimal-risk AI is not subject to the more demanding requirements applied to higher-risk categories, but the Act still lays out fundamental principles on data protection and on ensuring that AI systems do not inadvertently harm people. This approach seeks to enable innovation while keeping regulatory guardrails proportionate to actual risk.
  • High Risk: AI systems in this class are deployed in situations where their decisions or actions could realistically have serious consequences for people’s lives or public safety. Examples include biometrics and identity (e.g., facial recognition in security scenarios), employment (e.g., automated hiring systems), and critical infrastructure control (e.g., energy grids and transportation systems).
  • Strict regulations govern these systems, designed to ensure they are safe and fair. High-risk AI must meet operating requirements intended to prevent harm, including evidence of robust human oversight and documentation of all training data used. Given the potential risk these applications pose, the Act requires businesses, regardless of industry, to establish stringent risk-mitigation controls and to be open about how their AI systems operate.
  • Prohibited Systems: The EU AI Act draws a firm boundary by prohibiting specific AI practices deemed too dangerous or ethically unacceptable. Banned systems include AI programs that manipulate user decision-making, such as tools for political manipulation and fraudulent advertising, as well as practices like scraping facial images from the internet, without consent, to enlarge facial recognition databases.
  • To safeguard personal freedoms and prevent the potential abuse of AI technology, the Act outlaws these practices effective February 2025. This provision helps ensure that deployed AI systems do not undermine privacy, trust, or other civil liberties, and that the technology is applied in a more socially responsible way. (A brief illustrative sketch of this four-tier scheme follows the list.)
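To make the tiering concrete, the sketch below models the four tiers and their headline duties as a small Python lookup. The tier labels, the obligation strings, and the obligations_for helper are illustrative assumptions paraphrasing this article; they are not terminology or requirements taken from the Act’s legal text.

    from enum import Enum

    class RiskTier(Enum):
        """Illustrative labels for the four tiers described above (not legal terms)."""
        NO_RISK = "no risk"            # largely outside the Act's scope
        MINIMAL_RISK = "minimal risk"  # light-touch transparency and accountability
        HIGH_RISK = "high risk"        # documentation, human oversight, strict controls
        PROHIBITED = "prohibited"      # banned outright, with deadlines from February 2025

    # Hypothetical headline obligations per tier, paraphrasing the article.
    OBLIGATIONS = {
        RiskTier.NO_RISK: [],
        RiskTier.MINIMAL_RISK: ["basic transparency", "basic accountability"],
        RiskTier.HIGH_RISK: [
            "document training data sources",
            "demonstrate human oversight of outputs",
            "maintain risk-mitigation controls",
        ],
        RiskTier.PROHIBITED: ["withdraw or do not deploy the system"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Return the headline duties an internal audit might check for a tier."""
        return OBLIGATIONS[tier]

    # Example: an automated hiring system would typically land in the high-risk tier.
    for duty in obligations_for(RiskTier.HIGH_RISK):
        print("-", duty)

An internal governance board of the kind discussed below could extend a table like this into an enterprise-wide inventory of systems and the duties attached to each.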

According to Heather Dawe, VP and Lead for Responsible AI at UST, enterprises will need around three to six months to bring their systems fully into line with the new EU AI Act regulations. (This timeline will vary widely with factors such as the size of the company, the complexity of its AI systems, and how deeply AI is embedded in its operations.)

Organizations whose AI poses low risk can adapt relatively quickly. Enterprises with more complex AI deployments, or with systems that fall into the riskier categories, will typically be slower to meet the new requirements.

To achieve a smooth transition, companies should set up internal AI governance committees, Dawe said. A board of experts in technology, security, and law should oversee the governance process, determining and coordinating compliance. This cross-functional approach ensures that all aspects of AI governance are addressed, from risk assessment to data integrity, transparency, and accountability. “The boards can conduct enterprise-wide audits of the business’ AI systems, identify issues and develop pathways to bring their systems into compliance with the requirements of the Act.”

Sanctions for violations of the EU AI Act can be hefty: up to 7% of a company’s annual global revenue. Financial stakes of that size reinforce the case for early and sophisticated planning. The phased deadlines give companies time to understand the new statute and establish effective compliance frameworks, averting possible legal and monetary consequences. Acting on these recommendations early, and at scale, will help companies meet the Act’s obligations and limit exposure to costly penalties and operational delays.
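To illustrate the scale of that exposure, here is a minimal arithmetic sketch based only on the 7%-of-global-revenue ceiling cited above. The max_fine_eur function and the example revenue figure are assumptions for illustration, not a statement of how regulators will calculate actual fines, which depend on the nature of the infringement and are subject to lower tiers for lesser violations.

    def max_fine_eur(annual_global_revenue_eur: float, ceiling_pct: float = 7.0) -> float:
        """Upper bound on exposure under the 7%-of-global-revenue ceiling
        cited in this article (illustrative only, not a legal calculation)."""
        return annual_global_revenue_eur * ceiling_pct / 100.0

    # Example: a firm with EUR 10 billion in annual global revenue.
    print(f"Maximum exposure: EUR {max_fine_eur(10e9):,.0f}")  # EUR 700,000,000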

To help ensure oversight and coherent application of the Act across all Member States, the European Commission is establishing an AI Office. The new entity will direct enforcement of the Act, supervise AI systems operating across the member states, and ensure the standards are applied consistently throughout the EU, adjudicating inconsistencies and violations as they arise.

A new AI Board, comprising representatives from each of the 27 EU Member States, will assist the AI Office. This body shapes policy, fosters collaboration among member states, and ensures the Act is applied uniformly across the EU. With top-level representatives from every member state, the AI Board aims to establish a harmonized, coordinated approach to AI legislation, tackling cross-border issues and ensuring that every nation abides by the same rules.

In addition to strengthening its regulatory oversight, the European Commission is significantly increasing its spending on artificial intelligence technologies. The Commission plans to invest €1 billion in AI-related initiatives in 2024, rising to as much as €20 billion by 2030. This investment is intended to catalyze innovation, support AI research and development, and provide a sound foundation on which to build alongside the new regulatory framework.

The investment strategy aligns with the Commission’s objective of leading in AI technology while ensuring the technology is developed and deployed ethically and responsibly. The aim is to promote AI development while enforcing the rules in sensitive sectors with the public interest in view: a supportive hand that funds innovation, and a regulatory claw when and if needed.

The EU AI Act is described as a groundbreaking law on AI, but it is not immune from criticism. The high-risk categories laid out in the Act need to be made clearer, says Risto Uuk of the Future of Life Institute. Uuk argues that the criteria for high-risk AI systems may be overbroad or vague, inviting ambiguity and divergent interpretations. That ambiguity can lead to overregulation of lower-risk technology and underregulation of higher-risk applications.

The Act’s biometrics and national security provisions have drawn criticism too. Critics say the facial recognition standards may do too little to protect privacy and civil rights. Some argue that greater oversight is needed for AI systems deployed in national security, where guidelines are vague enough to risk gaps in regulation and monitoring within sensitive realms.

There are also calls for stricter rules and stiffer penalties for the tech giants behind generative AI. Most of those companies would be considered moderate-risk, a designation that critics say may be inadequate given their scale and the societal implications of their products. Stronger oversight, and a willingness to penalize large companies for non-compliance, they argue, is needed to ensure that AI systems remain accountable and transparent. Some hope that tougher laws will rein in the outsized power of large companies over disinformation, data privacy, and economic inequality.

While the EU AI Act is historically significant on many levels, the implementation of such a complex law, designed to combat serious future challenges, will no doubt come under close scrutiny as it is put into practice.

Effective implementation and regular updating of the new law will be essential. The Act marks a major advance in the regulation of responsible and ethical AI development and in the balancing of technological progress with public and commercial interests. It may also offer a template for tackling the complicated problems that rapidly changing technology raises in other fields.

How well the Act is implemented will be central to its effectiveness. That includes ensuring that the AI Office and AI Board effectively monitor compliance and tackle issues as they arise. The Act’s success will also depend on how well corporations adapt to and incorporate the new rules. Organizations operating in, or planning to enter, the EU must stay abreast of regulatory changes. To get ahead of the new laws, companies will have to take a proactive approach to compliance, including regular audits against the Act’s obligations, upgraded AI governance structures, and close attention to the compliance timelines.

As technology evolves and new AI applications emerge, the Act will need to be updated. Future changes should be informed by input from businesses, academics, and civil society. This continuous approach will help head off unexpected problems and keep the regulatory framework aligned with a technology and a society that are changing fast.

“The EU AI Act is a significant step towards an organized, responsible approach to AI. Implementation will require agile coordination between regulators, companies, and other stakeholders. Companies can stay proactive and involved in industry-wide discussions on the responsible development and deployment of AI, which in turn is a step toward assuring compliance.”
