Understanding the EU AI Act as a framework for responsible innovation

Expert Views

Eike-Gretha Breuer

Artificial Intelligence (AI) is transforming the way businesses operate, offering unprecedented opportunities for innovation and growth. However, the rapid advancement of AI technologies has also raised important ethical and security concerns. In response to these challenges, the European Union (EU) has proposed a new legal framework, the Artificial Intelligence Act (AI Act), to regulate AI technologies and ensure their responsible use.

The EU AI Act is considered the world’s first comprehensive regulation of AI. It is currently being negotiated among the EU legislative bodies, with agreement on the final text expected in 2024. The Act proposes a risk-based approach: AI systems are categorized into different risk levels, each linked to specific compliance and information obligations. Technologies posing unacceptable risks, such as social scoring, certain forms of biometric video surveillance, and subliminal behavioral manipulation, would be banned altogether.


Implications for companies implementing AI

This new legal framework has profound implications for companies implementing AI, which will need to navigate a series of new obligations to remain compliant. The most significant are:


  1. Risk-based approach: Companies will need to determine the risk level of their AI applications and comply with the corresponding obligations. Learn more in our related article.
  2. Data quality and transparency: The law will strengthen data quality and transparency requirements. Companies will need to ensure that their AI systems are trained on high-quality data and that their decision-making processes are transparent, i.e. that the path from input data through the algorithm to the final outcome can be traced and explained.
  3. Human oversight: The Act emphasizes the importance of human oversight of AI systems. Companies will need to design their AI systems to allow for meaningful human oversight; the degree of oversight required depends on both the use case and its associated risk level.
  4. Accountability: The Act introduces strict accountability rules for companies. Companies will be held accountable for the outcomes of their AI systems, even if those outcomes were not explicitly programmed.

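The risk-based approach described above can be pictured as a simple mapping from risk tier to compliance duties. The sketch below is purely illustrative: the tier names follow the categories proposed in the Act, but the duty lists are simplified assumptions for this example, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers proposed in the AI Act (simplified for illustration)."""
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring -- banned outright
    HIGH = "high"                  # e.g. recruitment or credit-scoring systems
    LIMITED = "limited"            # e.g. chatbots -- transparency duties
    MINIMAL = "minimal"            # e.g. spam filters -- no extra duties

# Illustrative mapping from tier to (simplified) compliance duties.
# These duty descriptions are assumptions for this sketch, not legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited -- may not be placed on the market"],
    RiskTier.HIGH: [
        "risk management system",
        "data quality and governance",
        "transparency and documentation",
        "human oversight",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with an AI system"],
    RiskTier.MINIMAL: [],
}

def duties_for(tier: RiskTier) -> list[str]:
    """Return the compliance duties attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

In practice, determining which tier a given AI application falls into is the first compliance step; everything downstream (documentation, oversight design, accountability) follows from that classification.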

AI Act ahead: Review and refine AI practices

Given these implications, it’s crucial for companies to start preparing now for the changes the AI Act will bring. This could involve conducting a thorough review of current AI applications and their development processes, investing in data quality and transparency initiatives, and developing robust accountability mechanisms.


A crucial move for ethical and industrial progress in AI technologies

The proposed AI Act is a significant step towards regulating AI technologies. It aims to ensure that AI technologies respect our values and rules while harnessing their potential for industrial use. As a B2B provider of customized software, we have a responsibility to understand these regulations and adapt our practices accordingly.

Steering through these changes can be complex. Start the conversation today.