The EU AI Act: What to Expect as the New Legislation Takes Effect From August 2024


By Aashik Ibrahim

The European Union's ground-breaking AI Act formally enters into force on August 1, 2024, marking a major turning point in the regulation of AI worldwide. The law will affect every business that uses artificial intelligence, whether its systems are still in development or already in use. Here is what you need to know about the EU AI Act and how it will shape AI in Europe and beyond.

In Image: AI Brain in a Robot Hand


The EU AI Act has been praised as the first law in the world to govern artificial intelligence according to the risk it poses. This historic piece of legislation was formally published in the Official Journal of the European Union in July 2024, setting a benchmark for how governments can approach the complex and rapidly evolving field of artificial intelligence.

The Act introduces four risk classifications for AI systems: no risk, minimal risk, high risk, and prohibited systems. This creates a comprehensive and organized framework for AI regulation.

Each risk category carries specific regulatory requirements and compliance deadlines. No-risk systems are largely free from regulatory restrictions since they pose no substantial danger to rights or public safety. Most AI businesses operate minimal-risk systems, which face light regulation designed to provide basic accountability and transparency without impeding innovation.

High-risk systems are governed by strict rules intended to ensure safety, fairness, and transparency. These include AI technologies used in critical domains such as healthcare, law enforcement, and essential infrastructure. Such systems must meet demanding requirements, including thorough documentation of training data and proof of human oversight.

Lastly, the Act lists several prohibited systems deemed too dangerous or ethically unacceptable to be allowed in any circumstance. These include AI applications that manipulate user behavior and practices such as scraping the internet to build facial recognition databases.

To reduce potential harms and safeguard fundamental rights, the Act establishes precise timeframes for the phase-out or outright prohibition of certain practices, beginning in February 2025. This classification enables a tailored regulatory strategy that balances innovation, necessary oversight, and public safety. The four tiers break down as follows (a schematic sketch follows the list):

  • No Risk: AI systems in this category are considered to pose no known danger to individuals or the public. They are designed for benign purposes and typically operate in contexts where their impact is minimal or nonexistent, so the regulatory framework established by the EU AI Act largely does not apply to them.
    • Examples might include AI used for basic data analysis or low-level automation that involves no sensitive personal data and no consequential decisions. The exemption from strict requirements reflects the Act's focus on higher-risk applications and frees businesses using such systems from additional compliance costs.
  • Minimal Risk: This category covers around 85% of AI firms, including those whose applications pose no significant threat to people's health, safety, or fundamental rights. Such systems might be recommendation engines or customer-service chatbots that neither make consequential decisions nor handle sensitive information. Rules for minimal-risk AI are light-touch, emphasizing basic accountability and transparency.
    • Businesses in this group are exempt from the stricter obligations placed on higher-risk categories, but they must still follow basic guidelines such as protecting data and ensuring their AI systems do not cause unintended harm. This approach aims to promote innovation while keeping regulatory oversight proportionate to the actual risk.
  • High Risk: AI systems in this category operate in settings where their decisions or actions can seriously affect people's lives or public safety. Examples include biometric identification (e.g., facial recognition for security), employment decisions (e.g., automated recruiting systems), and critical infrastructure management (e.g., energy grids or transport networks).
    • Strict rules aimed at ensuring safety and fairness apply to these systems. High-risk AI must meet defined operating requirements to prevent harm, demonstrate robust human oversight, and provide thorough documentation of the training data used. Given the stakes, the Act requires businesses to maintain tight controls to reduce potential hazards and to be transparent about how their AI systems operate.
  • Prohibited Systems: The EU AI Act draws a clear boundary by outlawing certain AI practices judged too hazardous or ethically unacceptable. These banned systems include AI that manipulates user decision-making, such as tools for political manipulation or deceptive advertising algorithms, as well as practices like scraping the internet to build facial recognition databases without consent.
    • These prohibited practices will be outlawed as of February 2025 to safeguard individual liberties and stop the misuse of AI technology. By barring AI systems that compromise privacy, trust, or basic freedoms, the ban helps ensure the technology is used responsibly and ethically.
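
To make the four tiers concrete, here is a minimal triage sketch in Python. The use cases, tier assignments, and obligation summaries are illustrative assumptions drawn loosely from the descriptions above, not categories or duties defined in the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers the EU AI Act uses, from least to most regulated."""
    NO_RISK = "no risk"            # largely outside the Act's scope
    MINIMAL_RISK = "minimal risk"  # light-touch transparency duties
    HIGH_RISK = "high risk"        # documentation, oversight, audits
    PROHIBITED = "prohibited"      # banned outright from February 2025

# Hypothetical triage table mapping an internal catalogue of AI use cases
# to the tier each most plausibly falls under. The use-case names are
# illustrative, not classifications taken from the Act.
USE_CASE_TIERS = {
    "internal log analytics": RiskTier.NO_RISK,
    "customer-service chatbot": RiskTier.MINIMAL_RISK,
    "automated CV screening": RiskTier.HIGH_RISK,
    "scraped facial-recognition database": RiskTier.PROHIBITED,
}

def obligations(tier: RiskTier) -> str:
    """Very rough summary of duties per tier, paraphrased from this article."""
    return {
        RiskTier.NO_RISK: "no specific obligations",
        RiskTier.MINIMAL_RISK: "basic transparency and data protection",
        RiskTier.HIGH_RISK: "training-data documentation and human oversight",
        RiskTier.PROHIBITED: "must be discontinued before the ban applies",
    }[tier]

for use_case, tier in USE_CASE_TIERS.items():
    print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

In practice a compliance team would replace this toy table with a real inventory of its AI systems; the point is simply that every system should map to exactly one tier, and each tier implies a distinct set of duties and deadlines.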

According to Heather Dawe, who leads UST's responsible AI program, it may take businesses three to six months to fully comply with the new EU AI Act standards. That timeline can vary substantially depending on factors such as a company's size, the complexity of its AI systems, and how deeply AI is integrated into its operations. Smaller businesses, or those whose AI systems pose less risk, may find the process easier, while larger enterprises with more intricate AI deployments, or those in high-risk categories, may need more time to meet the demanding criteria.

In Image: The Flag of the European Union

Dawe advises businesses to set up dedicated internal AI governance boards to enable a smoother transition. Experts from a range of disciplines, including technology, security, and law, should serve on these boards to supervise and coordinate the compliance process. This interdisciplinary approach ensures that all facets of AI governance are covered, including risk management, data security, and compliance with transparency standards. The boards can conduct thorough audits of the company's AI systems, pinpoint problem areas, and create plans for bringing those systems into line with the Act's requirements.

Penalties for violating the EU AI Act can be severe, reaching as much as 7% of a company's yearly worldwide revenue. This substantial financial exposure underlines the importance of thorough, early planning. To avoid legal and financial consequences, businesses must not only familiarize themselves with the new legislation but also put strong compliance mechanisms in place. Early, comprehensive preparation is essential to ensure AI systems meet the Act's requirements and to reduce the likelihood of heavy fines or operational delays.
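
As a rough arithmetic illustration (not legal advice), the sketch below computes that ceiling. Only the 7% figure appears in this article; the €35 million fixed floor used for the top penalty tier is an added assumption based on the Act's published fine structure.

```python
# Rough illustration of the Act's top penalty tier (prohibited practices):
# the maximum fine is the HIGHER of a fixed sum or a share of turnover.
FIXED_CAP_EUR = 35_000_000   # assumed fixed ceiling for the top tier
TURNOVER_SHARE = 0.07        # 7% of yearly worldwide revenue (per the article)

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier fine for a given worldwide turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# A firm with EUR 10 billion in yearly revenue faces a ceiling of EUR 700 million.
print(f"EUR {max_fine_eur(10e9):,.0f}")  # -> EUR 700,000,000
```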

In Image: The European Parliament

To improve supervision and guarantee uniform application of the EU AI Act, the European Commission is creating a dedicated AI Office. This new body will oversee compliance with the Act, evaluate AI systems across the member states, and ensure the rules are applied consistently throughout the EU. The AI Office will be central to monitoring how the Act's requirements are implemented and to resolving inconsistencies or cases of non-compliance.

A newly established AI Board, with members from all 27 EU member states, will assist the AI Office. This body is charged with setting policy, encouraging collaboration among member states, and ensuring the Act is applied uniformly across the EU. By including high-level representatives from each member state, the AI Board aims to foster a unified, coordinated approach to AI legislation and to resolve cross-border challenges.

Alongside its expanded regulatory oversight, the European Commission is substantially increasing its investment in artificial intelligence. The Commission intends to spend up to €20 billion by 2030, having set aside €1 billion for AI Act-related projects in 2024. This significant funding is meant to promote innovation, support AI research and development, and build a robust infrastructure that works in tandem with the new legislative framework.

This investment strategy reflects the Commission's commitment to advancing AI technology while ensuring it is developed and used ethically and responsibly. The Commission aims to balance financial support for innovation against regulatory safeguards, preserving high standards of compliance and protecting the public interest.

The EU AI Act has been lauded as a pioneering AI law, but it has also drawn criticism. Risto Uuk of the Future of Life Institute argues that the Act's designation of high-risk technologies needs clearer definition. Uuk and others contend that the criteria for high-risk AI systems may be overly broad or unclear, creating uncertainty and inconsistencies in implementation. This lack of specificity could lead to overregulation of low-risk technology or insufficient oversight of genuinely high-risk applications.

The Act's provisions on biometrics and national security have also drawn scrutiny. Critics argue that the facial recognition rules may not be strict enough to safeguard privacy and civil rights, and some consider the treatment of AI systems used for national security too vague, which could open loopholes in regulation and monitoring in sensitive areas.

There are also growing calls for stricter rules and heavier penalties for big tech corporations working on generative AI. Many of these firms' systems fall into the lower-risk tiers, which critics say may be inadequate given the companies' prominence and the potential social impact of their products. To ensure that AI systems remain accountable and transparent, these critics are pushing for stronger oversight and stiffer sanctions on large companies, arguing that tougher rules would curb their disproportionate influence on disinformation, data privacy, and economic inequality.

The EU AI Act is a major advance in AI legislation, but its effectiveness and fairness in handling these complicated challenges will be tested as it is implemented.

As the EU AI Act takes effect, this innovative law's impact will depend on proper implementation and ongoing revision. The Act marks a major move toward responsible and ethical AI development, balancing technological innovation with public and commercial interests. This comprehensive AI policy offers an example of how other regions might address the complicated issues raised by fast-moving technology.

The Act's success will depend on how effectively it is implemented, including whether the AI Office and AI Board can monitor compliance and resolve difficulties, and on how well companies adapt to the new rules. Organizations operating in or entering the EU must stay abreast of regulatory changes. Companies can manage the transition by actively tracking the Act's obligations, conducting frequent audits, updating their AI governance structures, and keeping to compliance schedules.

Summary

As technology advances and new AI uses emerge, the Act will need refinement. Ongoing input from businesses, academics, and civil society must shape future amendments to keep the law current and effective. This iterative approach will help address unforeseen issues and keep the regulatory framework in step with technology and society.

The EU AI Act is a major step toward an organized and responsible approach to AI. Its execution will require dynamic coordination among regulators, companies, and other stakeholders. By staying proactive and engaged, companies can ensure compliance and contribute to the wider conversation about responsible AI development and deployment.
