The European Artificial Intelligence Act (AI Act), touted as “the world’s first comprehensive statute on artificial intelligence,” entered into force on Thursday, according to a press release issued by the European Union (EU).
The AI Act is designed to ensure that AI technologies developed and used within the EU are trustworthy, with safeguards in place to protect individuals’ fundamental rights, according to the EU.
It also establishes a “harmonized internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment.”
The AI Act introduces a comprehensive and standardized framework for AI regulation across EU countries, using a forward-thinking definition of AI and a risk-based approach.
The act exempts minimal-risk AI systems, such as spam filters and AI-enabled video games, from its compliance obligations, although their providers may voluntarily adopt codes of conduct.
Chatbots and other AI systems with specific transparency risks are required to inform users that they are interacting with a machine. Similarly, AI-generated content must be appropriately labeled.
High-risk AI systems, such as AI-based medical software and recruitment tools, must meet strict requirements, including risk mitigation, high-quality data sets, clear information for users, and human oversight.
The act also prohibits AI systems that pose unacceptable risks, such as those enabling “social scoring” by governments or companies, in order to protect people’s fundamental rights.
The European Commission proposed the AI Act in April 2021, and the European Parliament and the Council reached political agreement on it in December 2023, with the aim of safeguarding citizens’ health, safety, and fundamental rights against AI-related risks.