European Parliament approves landmark AI Act
The European Parliament has endorsed the world’s first comprehensive set of regulations aimed at mitigating the risks associated with artificial intelligence (AI), marking a pivotal moment in AI governance.
Amidst the explosive growth of the AI sector, concerns have mounted over bias, privacy infringement, and existential threats to humanity. The AI Act, which classifies AI systems by the level of risk they pose, seeks to address these concerns while fostering a more “human-centric” approach to technology, according to MEP Dragos Tudorache.
By adopting the AI Act, the European Union (EU) positions itself as a global leader in AI regulation, surpassing efforts made by other regions such as China and the United States. Enza Iannopollo, a principal analyst at Forrester, emphasised the significance of the AI Act, asserting that it sets a new standard for trustworthy AI worldwide, leaving other regions, including the UK, to “play catch-up.”
Unlike the UK, which hosted an AI safety summit in November 2023 but has not proposed legislation akin to the AI Act, the EU takes a proactive stance in regulating AI based on its potential societal harm. The law categorises AI applications into various risk levels, imposing stricter regulations on high-risk systems used in critical sectors such as healthcare, law enforcement, and elections.
Moreover, the AI Act addresses concerns surrounding generative AI tools and chatbots such as OpenAI’s ChatGPT by requiring transparency about the data used to train these models and compliance with EU copyright law. This provision has been heavily scrutinised, particularly by AI firms facing lawsuits over data usage.
Although the AI Act has gained parliamentary approval, it must still undergo scrutiny and translation by lawyer-linguists before becoming law. The Council of the European Union, which represents the governments of the member states, is also expected to endorse the legislation.
Meanwhile, businesses are already grappling with what compliance with the AI Act will mean in practice. Kirsten Rulf, a partner at Boston Consulting Group, said that numerous firms are seeking guidance on how to scale AI technology while remaining legally compliant.
The approval of the AI Act marks a major step towards regulating AI responsibly and ethically, safeguarding fundamental rights and societal well-being in the digital age.