
01 February 2024
roadmap
austria

AI's cloudy affair

Artificial intelligence (AI) is transforming various aspects of the global business landscape. As a result, the concept of AI as a Service (AIaaS) has become an important tool for developing innovative business solutions. AIaaS essentially refers to the delivery of AI services via the cloud. The use cases are countless, ranging from speech recognition to machine learning systems. But while the potential of this concept is seemingly limitless, it is also fraught with legal obstacles that require careful navigation.


AI experiment

As part of our AI experiment in roadmap24, we have curated a few prompts and asked an AI about this article. Take a look at how ChatGPT responded*:


AIaaS, which mirrors cloud services, provides on-demand AI solutions tailored to businesses' needs. This model allows businesses to use AI tools through cloud networks without large up-front investments in equipment or expertise, as these services are often provided in the form of APIs. Businesses can obtain a wide range of AI services, such as machine learning models, natural language processing and image processing. AIaaS is offered with flexible payment models such as subscriptions or pay-per-use, and gives businesses benefits such as removing the need for specialised in-house AI skills, access to high-performance infrastructure, transparent payment structures, improved usability and scalability.
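To make the pay-per-use delivery model concrete, the following is a minimal sketch of how usage-based billing for an AIaaS API could be metered. The class, price and payloads are illustrative assumptions, not any specific provider's API or pricing:

```python
from dataclasses import dataclass


@dataclass
class AIaaSMeter:
    """Illustrative pay-per-use meter for a hypothetical AIaaS API."""
    price_per_call: float = 0.002  # assumed price in EUR per request
    calls: int = 0

    def invoke(self, payload: str) -> str:
        # In a real service this would be an HTTPS request to the
        # provider's API endpoint; here we only meter the usage.
        self.calls += 1
        return f"processed:{payload}"

    @property
    def cost(self) -> float:
        # Pay-per-use: total cost scales linearly with the number of calls.
        return self.calls * self.price_per_call


meter = AIaaSMeter()
for text in ["invoice", "contract", "email"]:
    meter.invoke(text)
print(meter.calls, round(meter.cost, 4))  # 3 calls at the assumed per-call price
```

A subscription model would instead charge a flat periodic fee regardless of `calls`; the contractual choice between the two is exactly the kind of payment-structure term an AIaaS agreement needs to spell out.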

Is AIaaS legally speaking "just" SaaS?

From a legal perspective, AIaaS and SaaS (Software as a Service) share some basic similarities, as both involve the provision of software functionality to end users via the cloud. However, there are nuances and specificities associated with AIaaS that may necessitate distinct legal considerations.

One of the most important differences relates to data protection and intellectual property. AIaaS often involves the processing of large amounts of data, which may include personal information. In Europe, the handling of this data must always comply with the GDPR. In contrast to SaaS contracts, data protection requirements must already be taken into account when the AI is trained, not "only" when it is marketed. Furthermore, the proprietary nature of algorithms, machine learning models and the data used to train them can lead to complex intellectual property issues. This complexity extends to the question of who owns the rights to the results of an AI, especially when it produces novel content or solutions.

The decision-making capabilities of AI, especially in cases where it operates autonomously, give rise to questions of liability and accountability. If an AIaaS solution makes a decision that causes harm or financial loss, determining responsibility may be more difficult than with traditional SaaS offerings. Is the AI model, the input data or the operator to blame?

The AI Act

In addition, transparency and accountability are no longer just ethical considerations, but central to AI compliance. The emerging legal framework known as the AI Act has placed these elements at its core. The AI Act emphasises that AI systems must operate transparently and provide explanations for their operations and decisions. This is particularly important so that end users and those affected by AI decisions can understand the rationale behind the AI's actions, to ensure accountability and promote trust.

The AI Act introduces a risk-based approach where AI applications are classified according to their potential impact on society and individual rights. Depending on the risk rating of the AI used, there are a number of obligations. Not only does this mean that AI solution providers must be diligent in ensuring that their systems are compliant, but users of these AI solutions also share this responsibility.

Meanwhile, the issue of bias and fairness in AI systems has moved from academic discourse to the real world. As AI becomes more integrated into society, from hiring processes to loan approvals, its impartiality is increasingly being questioned. AIaaS providers are now under scrutiny, with stakeholders demanding proof that their systems are designed and trained to be as impartial as possible. Any unintentional perpetuation or reinforcement of bias, whether based on gender, race, socio-economic status or other factors, can lead to significant reputational damage and legal consequences. Providers must therefore prioritise rigorous testing, continuous monitoring and iterative refinement of their AI models to maintain standards of fairness and neutrality.

A shifting landscape

Finally, in SaaS, traditional metrics such as uptime and accessibility are at the forefront of service-level agreements. AIaaS shifts the paradigm by introducing a unique set of performance-based metrics. Given the inherent complexity of AI, factors such as the accuracy of predictions, the precision of algorithms or the speed of response for AI-driven functionality become critical. Users expect not only seamless access, but also superior performance, making the evaluation of AIaaS offerings a more nuanced and complex process.
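The shift from uptime-style metrics to performance-based ones can be sketched as a simple compliance check. The function, metric choices and thresholds below are illustrative assumptions about what an AIaaS service-level agreement might guarantee, not terms from any real contract:

```python
def sla_compliant(predictions, labels, latencies_ms,
                  min_accuracy=0.95, max_p95_latency_ms=200.0):
    """Hypothetical AIaaS SLA check combining a quality metric
    (prediction accuracy) with a speed metric (95th-percentile latency)."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    # Approximate 95th-percentile latency via the sorted sample.
    p95 = sorted(latencies_ms)[int(0.95 * (len(latencies_ms) - 1))]
    return accuracy >= min_accuracy and p95 <= max_p95_latency_ms


# Illustrative measurement window: 3 of 4 predictions correct (75 % accuracy).
ok = sla_compliant([1, 1, 0, 1], [1, 1, 0, 0],
                   [100, 120, 90, 150], min_accuracy=0.7)
print(ok)  # True: 75 % accuracy and p95 latency meet the assumed thresholds
```

A traditional SaaS SLA would typically only test availability; here the same contractual mechanism additionally binds the provider to the statistical quality of the model's output, which is what makes evaluating AIaaS offerings more nuanced.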

In essence, while AIaaS can broadly be viewed as a subset of SaaS, its unique characteristics and potential risks mean that it may be treated differently under certain legal or regulatory frameworks. As technology rapidly evolves, legal norms and standards for AIaaS will continue to develop, highlighting the need for businesses to seek specialised legal advice.

author: Veronika Wolfbauer
Counsel, Austria (Vienna)

AI experiment

* The AI add-on to this article ...

... has been curated by our legal tech team prior to publication.

... has been compiled by AI. Its results may not accurately reflect the original content or meaning of the article. 

... aims to explore AI possibilities for our legal content.

... functions as a testing pilot for further AI projects.

... has legal small print: This AI add-on does not provide and should not be treated as a substitute for obtaining specific advice relating to legal, regulatory, commercial, financial, audit and/or tax matters. You should not rely on any of its outputs as (formal) legal advice. Schoenherr does not accept any liability to any person who does rely on the content as (formal) legal advice.
