The GDPR established a high legal standard that already affects AI systems processing personal data and will soon be complemented by the AI Act. The AI Act sets out specific restrictions and safeguards for AI systems and prohibits some particularly harmful practices. The two regulations take different approaches: while the GDPR enshrines a general principle of prohibition that permits the processing of personal data only on limited legal grounds, the AI Act establishes a horizontal regulation. Unlike the processing of personal data, the placing on the market, putting into service and use of most AI systems is not generally prohibited but regulated.
Another distinction lies in how companies approach the AI Act. Since most businesses process personal data, the GDPR's broad applicability was obvious from the outset. But because the AI Act adopts a risk-based approach, many companies appear to assume that their current IT systems fall outside its scope or, more precisely, are not covered by the provisions on high-risk AI systems.
That is not necessarily the case. The current draft AI Act defines an AI system as a "machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate output such as predictions, recommendations, or decisions influencing physical or virtual environments." This broad definition encompasses far more systems than those currently portrayed in the media as AI. In our digitalised world, most corporate IT systems meet it. The AI Act distinguishes between prohibited AI practices, high-risk AI systems and general-purpose AI systems. This means that while certain AI systems are banned within the EU, most provisions address high-risk AI systems, setting out specific obligations for providers, importers, distributors and users.
Is it high-risk?
The definition of an AI system must be read in conjunction with the definition of high-risk AI systems. Annex III of the AI Act includes an ever-expanding list of high-risk AI systems, which is why the provisions on high-risk AI systems will apply to most companies. But companies placing on the market, putting into service or using general-purpose AI systems should also pay attention. Although these AI systems face rather limited transparency and information obligations, such as flagging the use of an AI system when interacting with humans, they should not be underestimated. Ongoing discussions on transparency in the context of the GDPR show that compliance cannot be taken for granted.
Therefore, even though the draft AI Act provides for a transition period, companies are well-advised to start addressing it promptly, especially given the substantial penalties envisioned. First, companies should assess the risk category of their AI systems. Understanding whether an AI system falls into the high-risk category is crucial, as it dictates the regulatory requirements and obligations. Companies should consider the potential harm their AI systems may cause and the specific products or applications those systems are used in or themselves constitute. Once the risk category is determined, companies can systematically implement the corresponding requirements.
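The assessment order described above can be sketched as a simple triage. This is an illustrative simplification only: each flag below stands in for a legal analysis (e.g. whether a system constitutes a prohibited practice or falls under an Annex III use case) that no code can perform, and the category names merely mirror the tiers the draft AI Act distinguishes.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited AI practice"
    HIGH_RISK = "high-risk AI system (Annex III)"
    GENERAL_PURPOSE = "general-purpose AI system"
    OTHER = "no specific risk tier"

def classify(is_prohibited_practice: bool,
             listed_in_annex_iii: bool,
             is_general_purpose: bool) -> RiskCategory:
    """Illustrative triage following the AI Act's risk tiers.

    Checks run from the strictest tier downward: prohibited practices
    first, then Annex III high-risk systems, then general-purpose
    systems with their lighter transparency obligations.
    """
    if is_prohibited_practice:
        return RiskCategory.PROHIBITED
    if listed_in_annex_iii:
        return RiskCategory.HIGH_RISK
    if is_general_purpose:
        return RiskCategory.GENERAL_PURPOSE
    return RiskCategory.OTHER

# Example: a system matching an Annex III use case is high-risk,
# regardless of whether it is also general-purpose.
category = classify(is_prohibited_practice=False,
                    listed_in_annex_iii=True,
                    is_general_purpose=True)
```

The ordering matters: the strictest applicable tier governs, so a general-purpose system deployed in an Annex III context is assessed as high-risk, not under the lighter transparency regime.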
For high-risk AI systems, this means, for example, implementing a risk management system, data governance, technical documentation, human oversight, cybersecurity safeguards and the like. For less harmful AI systems, companies should consider how to meet their transparency and information obligations. This may also include a lessons-learned approach drawing on the implementation of the GDPR. Companies should therefore stay informed about transparency discussions regarding the GDPR and be prepared for potential changes in the AI landscape. By tackling this process head-on, companies will not only demonstrate compliance, but also position themselves to adapt seamlessly to the evolving regulatory environment in the digital age.