Artificial Intelligence (AI) has become an integral part of our daily lives. From voice recognition on our phones to self-driving cars, AI is changing how we live, work, and interact with the world. But as AI becomes more pervasive, it raises important questions about how it affects our rights and freedoms.

Against this backdrop, the European Union (EU) has taken a major step by adopting the AI Act, a pioneering piece of legislation that aims to regulate how AI is developed, marketed, and used in the EU. This regulation, the first of its kind worldwide, is intended to ensure that AI in the EU is safe, respects people’s rights, and encourages innovation and investment in the field.

Moving towards rules for AI

The EU’s AI Act represents a significant milestone in regulating this fast-growing technology. Taking a risk-based approach, the Act categorizes AI systems by the level of risk they pose and imposes specific obligations on each category. High-risk AI systems, in particular, face stricter requirements concerning transparency, data governance, risk management, and human oversight.

Risk classification and regulatory framework

The AI Act classifies AI systems according to the level of risk they pose. This classification matters because it determines which rules the developers and users of those systems must follow.

At the core of the AI Act lies a classification of AI applications into four risk categories, from ‘minimal’ to ‘unacceptable’. This segmentation allows regulation to be applied in proportion to the risks identified (a brief illustrative sketch follows the list):

Unacceptable risk: prohibition of applications deemed contrary to fundamental ethical values, such as social scoring systems that could create dystopian societies.

High risk: strict requirements for applications that could significantly impact individuals’ lives, such as those used in healthcare, recruitment, or police surveillance.

Limited risk: transparency obligations for intermediate applications, such as voice assistants and chatbots, ensuring that users know they are interacting with AI.

Minimal risk: lighter regulation for applications considered of little concern, such as video games.
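
To make the tiered logic concrete, here is a minimal sketch of how these four tiers and their rough obligation levels could be modeled. It is an illustration only, not a legal classification tool; the example use cases and one-line obligation summaries are assumptions drawn loosely from the list above.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict requirements apply
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # lighter regulation

# Hypothetical mapping of example use cases to tiers, based on the
# examples in the list above. Real classification follows the legal
# text, not a lookup table.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "video game AI": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Summarize, in one line, the obligations attached to a tier."""
    summaries = {
        RiskTier.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
        RiskTier.HIGH: ("strict requirements: transparency, data governance, "
                        "risk management, human oversight"),
        RiskTier.LIMITED: "transparency: users must know they are talking to AI",
        RiskTier.MINIMAL: "lighter regulation",
    }
    return summaries[tier]

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.value} -> {obligations(tier)}")
```

Running the script prints each example use case alongside its tier and a summary of the corresponding obligations, mirroring the proportional logic described above.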

Responsibilities for high-risk AI systems

High-risk AI systems must meet stricter requirements for transparency, data handling, and human oversight. These include assessing how the systems affect people’s fundamental rights, maintaining risk and quality management plans, and registering the systems in an EU database.
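
As a rough illustration, these obligations could be tracked as a simple checklist. The sketch below is a hypothetical example; the field names are informal paraphrases of the obligations above, not terminology from the Act.

```python
from dataclasses import dataclass

@dataclass
class HighRiskCompliance:
    """Hypothetical checklist of the high-risk obligations named above.
    Field names are informal paraphrases, not terms from the Act."""
    fundamental_rights_impact_assessment: bool = False
    risk_management_plan: bool = False
    quality_management_plan: bool = False
    registered_in_eu_database: bool = False

    def outstanding(self) -> list[str]:
        """Return the obligations not yet completed."""
        return [name for name, done in vars(self).items() if not done]

# Example: a system with only its risk management plan in place.
status = HighRiskCompliance(risk_management_plan=True)
print(status.outstanding())
```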

Banning certain AI systems

The Act prohibits certain AI systems that could harm people’s rights. This includes systems that use sensitive information to identify individuals, build profiles of their behavior, or manipulate how people think without their consent. The prohibition is meant to protect people from potential AI-related harm and to ensure the technology is used fairly and lawfully.

Encouraging innovation

Even though the AI Act imposes strict regulations on high-risk AI systems, it also seeks to stimulate innovation in this field. The EU has sometimes been criticized for overregulating, a concern highlighted by the experience with the General Data Protection Regulation (GDPR). While the GDPR has brought essential contributions to consumer protection, it has also been perceived as a potential hindrance to innovation. This balance between regulation and innovation is crucial for the EU, especially as it strives not to fall behind in the tech race compared to countries like the US. With the AI Act, the EU seems to have recognized the importance of striking the right balance between citizen protection through regulation and business freedom to foster innovation, thus avoiding lagging behind its global counterparts.

For example, the Act provides for regulatory sandboxes: designated environments where AI developers can safely and methodically experiment with new concepts before introducing them to the market. These controlled settings allow new AI ideas to be tested against safety standards and regulatory requirements before being made available to the public.

Setting up a governance system

To ensure the AI Act works in practice, a new governance system will be set up at EU level. It will include an AI office to oversee the most advanced AI systems, a panel of experts to advise on general-purpose AI models, and a board to provide technical guidance. Together, these bodies are intended to ensure the AI Act is applied consistently across all EU countries.

Impact on businesses and organizations

The EU’s AI Act has significant implications for businesses and organizations that develop or use AI. It requires compliance with new rules, particularly for high-risk AI systems. Companies must prioritize risk management, invest in employee training, and collaborate closely with regulatory bodies for effective implementation.

Next steps: drafting standards

With the AI Act now adopted, the next step is to develop the standards and guidelines needed to implement it effectively. The Act provides a framework, but detailed implementation guidance is required to translate its requirements into practical actions for businesses and organizations. These standards will set out specific requirements, best practices, and processes for demonstrating compliance with the law. Industry experts, working with regulatory authorities, will be tasked with drafting these documents to give clear guidance to the businesses and organizations concerned.

Conclusion

The EU’s AI Act has significant implications for companies and organizations that develop or use AI systems in their operations. To comply with the Act and make the most of what it offers, companies should work with trusted advisors such as United4, establish sound compliance and risk management processes, train their teams, and work closely with regulators and AI oversight bodies.