The term artificial intelligence ("AI") has no universally accepted definition but is generally understood to describe a system that can analyse input and, based on this input, take actions to achieve certain goals. Karen Hao, senior AI editor at MIT Technology Review, defined it as follows: "AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can."
AI is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. Most of us already use AI-based systems such as translation tools, voice assistants like Alexa or Siri, or face recognition functions on a regular basis. However, AI can also appear in tangible products like autonomous cars, robotic lawnmowers and drones. The potential of AI is endless and continuously evolving.
Why does AI need regulation?
On the one hand, regulation is usually associated with restriction. In the dynamic and ever-evolving technology sector, overly tight constraints would prove problematic: an AI Act with too many restrictions would severely hinder the EU's ambition to become an industry leader in the field of AI. On the other hand, the unregulated use of such powerful and effective technology could have a serious impact on our society. Without rules ensuring fair competition in AI, the rights of consumers and workers would not be sufficiently protected in this rapidly changing sector.
What exactly should AI be allowed to do, and what not? Are there areas where AI must not be used at all? Who is liable for damage caused by AI? These are some of the questions the act must answer.
Scope of the AI Act
The definition of AI in the regulation is relatively broad. It covers software based on machine learning, the older rule-based AI approach, and the traditional statistical techniques that have long been used to build models for credit scoring or recidivism prediction. Although users of AI systems are also covered, the regulation largely addresses providers, i.e. the companies that develop AI systems and either place them on the market or put them into service for their own use.
Aim of the AI Act
The AI Act aims to protect individuals by establishing harmonised regimes for AI systems that interact with natural persons, generate or manipulate image, video or audio content, or operate in regulated high-risk areas. The regulation is not limited to the territory of the EU; rather, it aims to protect European citizens who may suffer adverse consequences from the use of AI systems. It is also intended to establish a certain level of trust in AI systems in the longer term.
Thus, the regulation also imposes several information obligations on companies that use AI-based systems in the B2C area. In future, customers will have to be informed clearly and transparently about the use of this technology. With this regulation, the European legislator evidently wants to offer companies a clear framework and legal certainty for the use of AI systems. To promote innovation, the AI Act calls on national authorities and the European Data Protection Supervisor to establish AI regulatory sandboxes, which would allow SMEs and start-ups to develop, test and validate their AI systems in a controlled environment before placing them on the market or putting them into operation. The proposal also includes specific measures to reduce the regulatory burden on, and provide support for, SMEs, users and start-ups.
What did the European Commission propose?
The proposal follows the risk-based approach suggested in the EC White Paper, classifying AI applications into four categories according to their potential risk: unacceptable risk, high risk, limited risk and minimal risk. Only the first two categories will have to comply with strict rules.
Prohibited AI applications
According to Art. 5 of the proposed AI Act, prohibited applications include those that manipulate human behaviour and can thus harm people (lit. a and b). The explanatory memorandum of the proposal speaks of manipulation through "subliminal techniques". In addition, AI must not be allowed to exploit the vulnerabilities of specific groups such as children or persons with disabilities.
Another type of prohibited application is one that enables authorities to assess the credibility and reliability of persons based on their social behaviour or personality-related characteristics and to treat them unfavourably as a result (lit. c). This refers, for example, to social credit systems as they are already practised in various forms in China. Such systems are rightly considered incompatible with European values.
Finally, the provision prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement (lit. d). However, it provides for several exceptions: AI may, for example, be used to prevent terrorist attacks or to detect serious crimes. This provision is expected to cause the most disagreement in the legislative process, as the EU Member States have very different ideas about the relationship between freedom and security.
Strong regulation of high-risk AI applications
The second category (Art. 6 and 7 of the proposed regulation) comprises AI applications that pose a high risk to the health, safety or fundamental rights of people. High-risk AI applications are specified in Annex III of the proposal, in a list that can be updated on an ongoing basis. Currently the list names AI systems in the areas of "Biometric identification and categorisation of natural persons", "Management and operation of critical infrastructure", "Education and vocational training", "Employment, workers management and access to self-employment", "Access to and enjoyment of essential private services and public services and benefits", "Law enforcement", "Migration, asylum and border control management" and "Administration of justice and democratic processes".
AI systems that operate or are intended to operate in these areas will have to undergo ex-ante conformity assessments and comply with strict rules on data protection and data governance, documentation, transparency, human oversight, accuracy and security.
Hardly any regulation of other AI applications
While the proposal prohibits systems posing an unacceptable risk and extensively regulates high-risk systems, other AI applications with low or minimal risk are to remain largely unregulated in order to encourage innovation.
The European Commission's perception of the dangers of unregulated AI is reflected in the surprisingly high penalties, which exceed even the already high GDPR penalties (up to EUR 20m or 4 % of global annual turnover). Member States can fine companies that market AI systems for prohibited purposes (e.g. social scoring) or that fail to comply with the training data requirements up to EUR 30m or 6 % of global annual turnover, whichever is higher. Less serious violations, e.g. non-compliance with information obligations, attract lower penalties that increase gradually with the severity of the violation.
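As a purely illustrative sketch (not legal advice), the "whichever is higher" cap described above can be expressed as a simple calculation. The function name and the tier flag are hypothetical; the figures are the ones cited in the text (EUR 30m / 6 % under the proposal's top tier, EUR 20m / 4 % at the GDPR level, shown for comparison):

```python
def penalty_cap_eur(global_turnover_eur: float, severe: bool = True) -> float:
    """Illustrative penalty ceiling: the higher of a fixed amount or a
    percentage of global annual turnover (hypothetical helper, not the
    legal text itself)."""
    # Top tier of the proposed AI Act vs. the GDPR-level tier for comparison.
    fixed, pct = (30_000_000, 0.06) if severe else (20_000_000, 0.04)
    return max(fixed, pct * global_turnover_eur)

# A company with EUR 1bn global turnover: 6 % (EUR 60m) exceeds EUR 30m.
print(penalty_cap_eur(1_000_000_000))  # 60000000.0
# A company with EUR 100m turnover: 6 % (EUR 6m) is below the EUR 30m floor.
print(penalty_cap_eur(100_000_000))  # 30000000
```

The point of the `max()` is that the turnover-based percentage only bites for large companies; for smaller ones, the fixed amount is the binding ceiling.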
Conclusion and forecast
The proposed AI Act is welcome and attempts to provide at least a rough minimum standard for all companies that want to use AI systems. It provides a nuanced regulatory structure that bans some uses of AI, heavily regulates high-risk uses and lightly regulates less risky AI systems.
Providers and users of AI systems will have to meet a number of requirements before they can place high-risk AI systems on the market. They will also be subject to numerous compliance rules on data and data governance, documentation and record-keeping, transparency and provision of information to users, human oversight, as well as robustness, accuracy and security.
It is therefore advisable for companies to start raising awareness among their employees of the rules that are bound to be introduced soon, in order to ensure the trouble-free deployment of AI systems and avoid potentially draconian penalties.
1 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final (21 April 2021)