In a historic move, the European Parliament (EP) adopted its position on the Artificial Intelligence (AI) Act with a significant majority on Wednesday, 14 June. The vote initiates interinstitutional negotiations between the European Commission (EC), the European Parliament and the Council of the European Union, intended to finalize the world's first comprehensive regulation of artificial intelligence. The EP's draft proposal contains several significant changes compared to the EC's initial draft. Here are some of the most notable:
The EP embraced a revised definition of an AI system in line with the OECD's definition, diverging from the EC's original draft. Under the new definition, an AI system is "a machine-based system that operates with varying levels of autonomy and that can generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments." This definition narrows the scope of regulated AI compared to the EC's original proposal.
Given that Large Language Models have become visible to the broader public, the EP has introduced the concept of foundation models, which are subject to stricter regulation. Foundation models represent an advanced stage in AI development, featuring algorithms optimized for broad applicability and versatility. They are trained on large volumes of data from a diverse range of sources, enabling them to perform a variety of tasks, including some for which they were not specifically designed or trained. Foundation models can be unimodal or multimodal and can be trained through various methods, such as supervised learning or reinforcement learning. Importantly, AI systems, regardless of their specific intended purpose, can be implementations of foundation models: a single foundation model can be repurposed for a myriad of downstream AI applications, enhancing its relevance and value across a diverse range of applications and systems. However, before foundation models are made available, they must comply with stringent requirements (according to the EP's proposed Art 28b and Recitals 60e et seq.). These include testing and mitigating foreseeable risks to health, safety, fundamental rights, the environment, democracy and the rule of law, with the involvement of independent experts. Rigorous data governance measures are also obligatory. Furthermore, foundation models must be registered in the EU database, and those classified as generative AI must adhere to additional transparency obligations.
The EC's draft Act already prohibited several harmful AI applications, including manipulative techniques and social scoring. The EP proposes amendments to this list to ban further intrusive and discriminatory practices, such as "real-time" remote biometric identification systems in publicly accessible spaces, predictive policing systems based on profiling, and emotion recognition systems in various settings, including workplaces and educational institutions. Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases, which violates individuals' rights and privacy, is also covered by the proposed bans.
The EP's draft emphasizes the need for a proportionate sharing of responsibilities along the AI value chain, particularly for economic operators that integrate AI models into various applications without control over their development. Downstream operators such as deployers or importers will be accountable for compliance if they substantially modify an AI system so that it qualifies as high-risk. The EP also proposes the development of non-binding standard contractual clauses regulating rights and obligations in line with each party's level of control, protecting SMEs and start-ups against unfair contractual obligations (cf Art 28 – paragraph 2 a [new]).
The adoption of this position by the EP marks a significant step towards defining and governing the use and development of AI, maintaining a keen focus on human rights, privacy, and the fair sharing of responsibilities. Let's hope that the Trilogue discussions will be swift and efficient, with the aim of having a final version of the legislation by the end of this year.
Veronika Wolfbauer, Counsel, Austria (Vienna)