Artificial intelligence (AI) is shaping our world, and the European Union (EU) aims to become a world-class hub for AI and to ensure that AI is human-centric and trustworthy.
The EU is further trying to foster a European economy that is well-positioned to benefit from the potential of AI, not only as a user, but also as a creator and producer of this technology. It encourages research centres and innovative start-ups to take a world-leading position in AI and competitive manufacturing and services sectors, from automotive to healthcare, energy, financial services and agriculture.
AI is not only a great opportunity but also a huge challenge for businesses, from facing the need to transform to contending with new risks, liabilities and regulations. AI will also affect almost every industry.
Due to complex, multidisciplinary legal issues, new and innovative approaches must be taken. Join us on this journey!
In the last decade, AI has become increasingly visible to consumers. From promising stories in the news media to the ubiquitous use of large search engines, AI has not only become part of our everyday lives but seems to be all-encompassing. From cars to toasters, everything these days is "powered by AI" (or at least claims to be). And everyone seems to be talking and thinking about the problems and opportunities associated with AI, no matter the field. Once relegated to niche scientists and science fiction aficionados, AI has now gone fully mainstream.
This has not gone unnoticed by eager regulators (and us, of course):
The EU Commission has drafted a tightly knit regulation on AI. But other jurisdictions also see the need for specific rules. In the US, 2022 saw the emergence of an initial approach to regulating AI, focused on specific AI use cases. More general AI regulatory initiatives may arrive in 2023, including state data privacy laws, FTC (Federal Trade Commission) rulemaking and new NIST (National Institute of Standards and Technology) AI standards. In June 2022, the Canadian government introduced Bill C-27, the Digital Charter Implementation Act, 2022. Bill C-27 proposes, among other things, to enact the Artificial Intelligence and Data Act (AIDA). Although there have been previous efforts to regulate automated decision-making as part of federal privacy reform efforts, AIDA is Canada's first attempt to regulate AI systems outside of privacy legislation. In October 2022, the Israeli Ministry of Innovation, Science and Technology released its "Principles of Policy, Regulation and Ethics in AI" white paper for public consultation, which sets out policy, ethics and regulatory recommendations.
AI as a field of scientific research has existed for quite some time. Still, no definition of AI has emerged that even computer scientists can agree on. In their textbook Artificial Intelligence: A Modern Approach, Russell and Norvig offer four rough groups of definitions: systems that think like humans, systems that act like humans, systems that think rationally, and systems that act rationally.
In the end, Russell and Norvig settled on AI acting rationally (the rational agent approach) as the most helpful group of definitions in light of the modern-day understanding of the term. This definition requires a program to react rationally to input it has never seen before by producing the output with the highest probability of success. Since such an input-output relationship is usually not hard-coded, programs fulfilling this definition typically have to learn and extrapolate from previous input.
However, a wide range of algorithms still ticks this box. Even a linear regression could therefore qualify as AI (e.g. a program that extrapolates an estimated house price from house size via linear regression over historic – known – price-size value pairs in the same region).
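A minimal sketch of that house-price example, using an ordinary least-squares fit; the size/price pairs below are invented for illustration:

```python
import numpy as np

# Known (size in m², price in EUR) pairs from the same region; invented illustrative values.
sizes = np.array([50, 70, 90, 110, 130], dtype=float)
prices = np.array([150_000, 210_000, 265_000, 320_000, 380_000], dtype=float)

# Fit a straight line price = k * size + d to the historic data.
k, d = np.polyfit(sizes, prices, deg=1)

# Extrapolate an estimated price for a house size the program has never seen before.
estimated_price = k * 100 + d
print(f"Estimated price for 100 m²: EUR {estimated_price:,.0f}")
```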
The modern-day discussion has, however, been triggered by one particular class of algorithms: neural networks.
Neural networks have seen a surge in research and application in the last 10-20 years. Originating from the idea of algorithmically implementing a simplified model of how human brain cells work, it has meanwhile been proven that a sufficiently advanced neural network can approximate any mathematical function to an arbitrary degree of precision. In other words, given enough computing power and time, almost any real-world problem can be solved by a neural network.
As originally indicated, a simple neural network consists of multiple "neurons", each mapping multiple inputs (x) to a single output (usually between 0 and 1). This is achieved by applying a certain weight factor (w) to every input (w·x); the sum of these weighted inputs, together with a constant ("b", called the bias), is then fed into the neuron (z = Σ wᵢ·xᵢ + b).
The neuron's job is then to apply a certain fixed function (the activation function) to the weighted sum of all inputs (z) producing an output (y) between 0 and 1.
This simple algorithm can run in parallel, with multiple neurons forming a layer, and multiple such layers can be chained: the inputs x of one layer are the outputs of the neurons of the previous layer, and each neuron's output y in turn serves as one of the inputs of the neurons of the next layer, until the final layer (the "output layer") is reached. Layers between the first layer (the "input layer") and the output layer are called hidden layers, since their behaviour is usually not directly visible to the user or to any program utilising the neural network.
While the topology (the number of layers and the number of neurons per layer) is determined by a skilled developer, the weights and biases applied to each neuron's inputs are trained by a program implementing a machine learning algorithm.
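To make these mechanics concrete, here is a minimal, hypothetical sketch of a single forward pass through such a network. The weights and biases are random placeholders; in a real system they would be set by training, as described in the next section:

```python
import numpy as np

def sigmoid(z):
    # Activation function squashing the weighted sum into an output between 0 and 1.
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, W, b):
    # Each neuron computes z = sum(w_i * x_i) + b and applies the activation function.
    return sigmoid(W @ x + b)

x = np.array([0.5, 0.8, 0.1])          # input layer: three input values

# Hidden layer: four neurons, each with one weight per input plus a bias (placeholder values).
W_hidden = np.random.randn(4, 3)
b_hidden = np.random.randn(4)

# Output layer: a single neuron combining the four hidden outputs.
W_out = np.random.randn(1, 4)
b_out = np.random.randn(1)

hidden = layer(x, W_hidden, b_hidden)   # outputs of the hidden layer, each between 0 and 1
y = layer(hidden, W_out, b_out)         # final output of the network
print(y)
```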
As already alluded to with regards to neural networks, the heavy lifting of the AI revolution of the last decades was enabled not by neural networks themselves but by their efficient training. This is the field of machine learning.
Machine learning is a crucial component of AI that focuses on the automated discovery of patterns in seemingly arbitrary data sets. It thereby enables any AI trained by a machine learning program to implement the logic (or more specifically, the mathematical function) inherent in the data set being analysed, without detailed human instruction.
The field of machine learning thus enables computers to comprehend data and the relationships within it, empowering them to execute specific tasks. By analysing multiple data points to recognise patterns over time, machine learning provides the foundation for technology to eventually make decisions or offer suggestions. Unlike traditional computers, which needed explicit directions for every aspect of a task, AI-powered machines can now learn on their own. This represents a significant departure from the previous model, where machines had to be taught every step of a task.
Example (source: TensorFlow)
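As a rough, hypothetical sketch of the idea behind such examples (the rule y = 2x - 1 and all values below are invented and do not reproduce the TensorFlow material referenced above): instead of being programmed with the rule, a single "neuron" learns it from example pairs by repeatedly adjusting its weight and bias to reduce its prediction error:

```python
import numpy as np

# Example pairs generated by the hidden rule y = 2x - 1; the program is never told this rule.
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0          # start with arbitrary weight and bias
learning_rate = 0.01

for _ in range(5000):
    predictions = w * xs + b                    # what the current parameters predict
    error = predictions - ys                    # how far off those predictions are
    w -= learning_rate * (error * xs).mean()    # nudge the weight to reduce the error
    b -= learning_rate * error.mean()           # nudge the bias to reduce the error

print(w, b)              # close to 2 and -1: the rule was "learned" from the data alone
print(w * 10 + b)        # prediction for an input never seen during training (about 19)
```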
ChatGPT is a large language model developed by OpenAI. It utilises machine learning to answer text-based questions and perform tasks relevant to human language. ChatGPT is capable of understanding a wide range of topics and responding to them, including history, technology, science, art and more. It can also engage in conversations and generate text on request.
Natural language processing (NLP) focuses on the ability of computers to understand and process human language. It allows computers to extract and analyse language data, making it easier for users to search for information using natural language phrases and sentences, rather than specific keywords. NLP can improve search results by providing a more intuitive and human-like search experience, and by incorporating machine learning algorithms that can understand the context of the user's query and return results that are more relevant to their needs.
ChatGPT and search engines are both AI-powered systems, but they serve different purposes and work in different ways.
ChatGPT is a language model that is trained to generate text based on the input it receives. It uses a neural network architecture called a transformer, which allows it to understand the context and generate a response that is relevant to the input. ChatGPT has been trained on a large corpus of text data, which enables it to generate responses to a wide range of questions and topics.
On the other hand, a search engine is a tool that helps users find information on the internet. When a user enters a query into a search engine, it uses a complex algorithm to search through billions of webpages and return the most relevant results. The algorithm takes into account various factors, such as the relevance of the page content, the authority of the website, and the user's location and search history. The search engine then ranks the results based on their relevance and displays them to the user.
In summary, ChatGPT is a language model that generates text, while a search engine is a tool that helps users find information on the web.
Data mining is the process of discovering patterns, correlations and insights in large data sets by analysing and modelling the data. Technology systems scour data and recognise anomalies within the data at a scale that would be impossible for humans. It helps organisations make more informed decisions by uncovering patterns and insights that may not be immediately apparent. The benefits of data mining can be seen in areas such as online recommendations, document review, healthcare and finance.
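As a simple, hypothetical illustration of this kind of pattern and anomaly detection (the transaction amounts below are invented), a few lines of code can flag values that deviate strongly from the overall pattern, something that would be impractical to do manually across millions of records:

```python
import numpy as np

# Invented transaction amounts; one of them clearly deviates from the usual pattern.
amounts = np.array([42.0, 39.5, 41.2, 40.8, 43.1, 38.9, 950.0, 40.3, 41.7])

mean, std = amounts.mean(), amounts.std()
z_scores = (amounts - mean) / std           # distance from the mean in standard deviations

anomalies = amounts[np.abs(z_scores) > 2]   # flag values more than 2 standard deviations away
print(anomalies)                            # -> [950.]
```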
On the one hand, lawyers analyse the legal issues that arise from the use of AI. They deal with direct AI regulation as well as indirect regulation of AI through liability rules, for example the risk classification of the AI software used or the contractual relationship between the user and the AI manufacturer.
On the other hand, lawyers are always looking to improve their own services and are trying to do so by using AI solutions. Thus, the number of legal tech AI applications is steadily on the rise. For example, lawyers are using machine learning software for document analysis. This helps them to analyse contracts and other legal documents efficiently and quickly. AI is also being used to automate and standardise contract drafting.
For regulators, AI refers to the use of advanced computer algorithms and machine learning techniques to simulate human intelligence and automate decision-making processes. Regulators are concerned with ensuring the safety, fairness and ethical use of AI in various industries, such as finance, healthcare and transportation. This may involve setting standards and guidelines for the development and deployment of AI systems, monitoring their performance, and taking action to address any negative impacts they may have on society.
Overall, the goal of regulators is to ensure that AI is developed and deployed in a responsible and ethical manner that promotes innovation while also protecting public interests.
AI is seen as a potentially transformative technology that can bring both benefits and risks. Regulators are tasked with balancing the promotion of innovation and the protection of public interests. In order to do so, they often focus on areas such as safety, transparency, accountability, human oversight and the protection of fundamental rights.
The AI Act has the potential to become a global standard, shaping the impact of AI on individuals across the world, similarly to how the EU's General Data Protection Regulation (GDPR) did in 2018. The EU's regulation of AI is already having a significant impact internationally, with Brazil recently passing a bill that establishes a legal framework for AI use. AI has a major impact on people's lives by affecting the information they see online through personalisation algorithms, analysing faces for law enforcement purposes, and aiding in the diagnosis and treatment of illnesses such as cancer.
The Act takes a "horizontal" approach and sets out harmonised rules for developing, placing on the market and using AI in the EU. The Act draws heavily on the model of "safe" product certification used for many non-AI products in the new regulatory framework. It is part of a series of draft EU proposals to regulate AI, including the Machinery Regulation and product liability reforms. The law needs to be read in the context of other major packages announced by the EU, such as the Digital Services Act, the Digital Markets Act and the Data Governance Act. The first two are primarily concerned with the regulation of very large commercial online platforms. The AI Act does not replace the protections offered by the General Data Protection Regulation (GDPR), but will overlap with them, although the scope of the former is broader and is not limited to personal data. The AI Act also draws on the Unfair Commercial Practices Directive for parts relating to manipulation and deception. Existing consumer protection law and national laws, such as tort law, are also relevant.
In a nutshell, the AI Act aims to govern the development and utilisation of AI systems deemed as "high risk" by setting standards and responsibilities for AI technology providers, developers and professional users. Certain harmful AI systems are also prohibited under the Act. The Act encompasses a broad definition of AI and distinguishes it from traditional IT. There is ongoing debate in the EU Parliament on the need for a definition for General Purpose AI. The Act is designed to be technologically neutral and future-proof, potentially affecting providers as greatly as the GDPR did. Non-compliance with the Act could result in penalties of up to EUR 30m or 6 % of the provider's or user's worldwide revenue for violations of prohibited practices.
Businesses need to determine if their AI systems fall within the scope of the legislation and conduct risk assessments of their AI systems. If they are using high-risk AI systems, they must establish a regulatory framework, including regular risk assessments, data processing impact assessments and detailed record-keeping.
The AI systems must also be designed for transparency and explainability. The terms of use for these systems are deemed crucial for regulating high-risk AI systems, requiring a review of contracts, user manuals, end-user licence agreements and master service agreements in light of the new legislation.
The AI Act outlines key terms and definitions related to AI systems and their usage. Below you will find a detailed explanation of the most material terms:
It is crucial to understand these definitions in order to fully comprehend the AI Act and its implications for AI systems and their usage.
The Regulatory Framework defines four levels of risk in AI: unacceptable risk, high risk, limited risk and minimal (or no) risk.
Many countries are currently working on regulatory frameworks for AI, including the USA, Canada, Israel and the European Union. The latter has published (and amended) its draft AI Act. The AI Act splits AI into four different bands of risk based on the intended use of a system. Of these four categories, the AI Act is most concerned with "high-risk AI", but it also contains a number of "red lines". These are AIs that should be banned because they pose an unacceptable risk.
Prohibited AI applications are considered unacceptable because they conflict with the values of the Union, for example by violating fundamental rights. These include AI that uses subliminal techniques to significantly distort a person's behaviour in a way that causes or is likely to cause physical or psychological harm. AI that enables manipulation, social scoring and "real-time" remote biometric identification systems in "public spaces" used by law enforcement is also prohibited.
The Act follows a risk-based approach and implements a modern enforcement mechanism, where stricter rules are imposed as the risk level increases. The AI Act establishes a comprehensive "product safety framework" based on four levels of risk. It requires the certification and market entry of high-risk AI systems through a mandatory CE-marking process and extends to machine learning training, testing and validation datasets. For certain systems, an external notified body may participate in the conformity assessment evaluation. Simply put, high-risk AI systems must go through an approved conformity assessment and comply with the AI requirements outlined in the AI Act throughout their lifespan.
What are examples of high-risk AI?
Examples of high-risk AI systems that will be subject to close examination before being put on the market and throughout their lifespan include AI used in critical infrastructure (e.g. transport), education and vocational training, safety components of products (e.g. robot-assisted surgery), employment and worker management (e.g. CV-sorting software), access to essential private and public services (e.g. credit scoring), law enforcement, migration and border control, and the administration of justice.
The AI Act outlines a four-step process for the market entry of high-risk AI systems and their components. Those steps are:
1. The high-risk AI system is developed.
2. It undergoes a conformity assessment and must comply with the AI requirements set out in the Act; for certain systems, a notified body is involved.
3. Stand-alone high-risk AI systems are registered in an EU database.
4. A declaration of conformity is signed and the system receives the CE marking, after which it can be placed on the market. If the system undergoes substantial changes during its lifecycle, it must return to step 2.
But that's not all…
After the high-risk AI system has received market approval, ongoing monitoring is still necessary. Authorities at both the EU and Member State levels will be responsible for market surveillance, while end-users will ensure monitoring and human oversight, and providers will have a post-market monitoring system in place. Any serious incidents or malfunctions must be reported. This means ongoing upstream and downstream monitoring is required.
The AI Act also imposes transparency obligations on both users and providers of AI systems, including bot disclosure and specific obligations for automated emotion recognition systems, biometric categorisation and deepfake/synthetic disclosure. Only minimal risk AI systems are exempt from these transparency obligations. Additionally, individuals must be able to oversee the high-risk AI system, known as the human oversight requirement.
Limited risk AI systems, such as chatbots, must adhere to specific transparency obligations. AI systems in this category must make clear that the person is interacting with an AI system and not a human being. The providers of such systems must ensure that users are notified accordingly.
The users of biometric categorisation and emotion recognition systems must inform the natural persons who are being exposed to the system's operation. Meanwhile, users of AI systems that create or manipulate audio, photos or video content (deepfake technology) must inform others that the content has been artificially generated or changed.
However, these transparency obligations do not apply to AI systems used by law enforcement agencies that are authorised by law, as long as they are not made available to the public for reporting criminal offences.
Low or minimum risk AI systems encompass AI systems such as spam filters or video games that utilise AI technology but pose minimal to no risk to the safety or rights of individuals. Many AI systems belong to this category, and the regulation allows for their unrestricted use without any additional obligations.
The European Commission has proposed two sets of liability rules regarding AI – the Revised Product Liability Directive and the New AI Liability Directive – aimed at adapting to the digital age, the circular economy and global value chains.
The Revised Product Liability Directive will modernise existing rules on the strict liability of manufacturers for defective products to ensure that businesses have legal certainty to invest in new and innovative products, and victims can receive fair compensation when defective products, including digital and refurbished products, cause harm. The revised rules will cover circular economy business models and products in the digital age and will help level the playing field between EU and non-EU manufacturers. It will also ease the burden of proof for victims in complex cases involving pharmaceuticals or AI.
The core elements of the Revised Product Liability Directive include: strict liability of manufacturers for defective products, including digital and refurbished products; rules adapted to circular economy business models and products in the digital age; a level playing field between EU and non-EU manufacturers; and an eased burden of proof for victims in complex cases, such as those involving pharmaceuticals or AI.
The second proposal is the AI Liability Directive, which will establish broader protection for victims of AI-related damages and encourage growth in the AI sector by increasing guarantees. The directive simplifies the legal process for victims when it comes to proving fault and damage caused by AI systems. It introduces the "presumption of causality" in circumstances where a relevant fault has been established, and a causal link to the AI performance seems reasonably likely. The directive also introduces the right of access to evidence from companies and suppliers, in cases where high-risk AI is involved. The new rules strike a balance between protecting consumers and fostering innovation while removing additional barriers for victims to access compensation.
The proposed AI Liability Directive will have an impact on both the users and developers of AI systems. For developers, the directive will provide more clarity on their potential liability in case of the failure of an AI system. Individuals (or businesses) who suffer harm due to AI-related damage will benefit from streamlined legal processes and easier access to compensation. The two key aspects are the presumption of causality and the right of access to evidence.
Unlike for regulators, IP issues specific to "AI" arose only more recently, with the emergence of large neural-network-type models and their training.
Since in many cases the very essence of such AI is that humans are not involved in its creation and operation, the usual IP approaches for dealing with any other kind of software are partially invalidated here. Neither the training nor the output of such models necessarily requires human intervention or leaves room for human creativity.
In addition, their reliance on vast bodies of training data may pose IP risks, which, while not new as such, may affect an entire industry.
As laid out above with regards to AI and machine learning, creating a modern (neural network-type) AI requires vast amounts of data in order to train the subject AI model. Of course, this often involves crawling and scraping large parts of websites as well as online databases. This comes with some potential pitfalls from an IP perspective:
1. In most cases, such data predominantly include copyrighted works such as images, text, website layouts or the source code of websites.
Their reproduction in the scraping client's data storage usually interferes with the right holders' exclusive right of reproduction as set out in Art 2 InfoSoc Directive (Directive 2001/29/EC) and Art 4 Software Directive (Directive 2009/24/EC) and their national transposition laws.
2. Web scraping may infringe exclusive rights in databases under the EU Database Directive (Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases), in particular by extracting or re-utilising the whole or a substantial part of a protected database, or by repeatedly and systematically extracting insubstantial parts in a manner that conflicts with the normal exploitation of the database.
Of course, scraping parts of the internet is usually no longer a major part of the AI development process, since large datasets of scraped and otherwise collected data are often made available free of charge (e.g. see commoncrawl.org).
However, retrieving any copyrighted works or protected databases via such readily available data sets does not mitigate the risk from an IP perspective, as acts of reproduction of any protected works included therein are still necessary.
There is no fair use doctrine within the EU copyright regime; rather, the directives (and their respective national transpositions) contain strict and narrow limitations that Member States may implement (e.g. Art 5 InfoSoc Directive). Defending such acts against infringement claims by right holders is therefore potentially challenging.
In short: usually not.
Usually, software is mostly made up of source code written by human programmers implementing the abstract mathematical algorithm or business logic of the program. The written code may then enjoy protection as a copyrighted work of literature in any form (as binary, in object form or as source code) to the degree that such code is not determined by external (technical) factors but makes creative use of some leeway within the perimeter of the program's functionality and external technical limitations.
This is different for AI models:
1. Implementing code
Although code implementing the execution of an AI model may be protected by copyright, it is rarely of particular value, since such code mostly relies on multiple open source libraries, and any AI model (which is almost always stored as a separate file) can often be executed by independently developed code with relative ease.
2. The model itself
However, AI models do not usually provide any room for creativity on the part of the data scientists creating them. The overall architecture is determined by humans only to a certain extent. The resulting design of the architecture (which form of neural network is to be used, how many parameters/weight and bias values the model should have, etc.) is almost entirely driven by the attempt to achieve a more precise result, leaving almost no margin for creative decisions that would not diminish the output of the trained algorithm.
In addition, the training of the model often requires immense capital in the form of computing power, resulting in a trained model that produces the desired results. This process is entirely optimisation-driven (leading to the same result when the machine learning algorithm is run again on an untrained model) and occurs automatically. As only works created by a human individual may enjoy copyright protection in most jurisdictions (including Austria), there is no copyright protection for a trained model.
1. Know-how
In many cases the main means of protection for a valuable AI model will be mere know-how protection based on confidentiality. All parties having access to the model would have to be contractually bound to refrain from any acts endangering exclusivity in such a model (such as reproduction and disclosure to third parties).
To avoid the unauthorised reproduction of any trained models, such are usually not provided for download but via API or an online interface. In that way, any user may only enjoy the functionality of a particular model while having no access to the enabling and valuable AI model.
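A minimal, hypothetical sketch of this pattern, assuming a Flask web service and a scikit-learn-style model stored in a local file (the endpoint, file name and model interface are illustrative assumptions, not a reference implementation): the trained model stays on the provider's server, and users only ever send inputs and receive predictions:

```python
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# The valuable, trained model is loaded from local storage and kept server-side only.
with open("trained_model.pkl", "rb") as f:   # assumed file name
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()["features"]            # the client sends input data only
    prediction = model.predict([features])                # assumes a scikit-learn-style model
    return jsonify({"prediction": prediction.tolist()})   # the client receives the output only

if __name__ == "__main__":
    app.run()
```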
Where confidentiality is effectively safeguarded, the trade secrets regime imposed by the Trade Secrets Directive (Directive (EU) 2016/943) and its national transpositions confers some protection against unauthorised use. This regime also provides a legal remedy against third parties to which such a confidential model has been disclosed in violation of an NDA, provided they were aware that the disclosure likely violated the discloser's confidentiality obligations.
2. Patents
Inventions in all fields of technology may be patented under Austrian patent law and the European Patent Convention if they are novel, involve an inventive step and are susceptible of industrial application. To a limited degree, certain aspects of AI models concerning their architecture and the way they are used may be patented if they provide a technical solution to a technical problem.
This approach, however, is limited by multiple factors:
The rise of generative AI, in particular diffusion models such as Stable Diffusion and Midjourney (creating realistic images) as well as large language models such as GPT-4, is taking the world by storm thanks to these models' apparent creativity and increasing accuracy.
In particular, large language models seem to be capable of much more than generating entertaining texts and helpful suggestions for coding software. OpenAI has reported impressive abilities of GPT-4 in drug discovery.[1] DeepMind's AlphaFold (although not a large language model) has demonstrated a fascinating capacity to predict the structures of proteins, thereby helping to discover proteins with novel and desired properties.
All this shows that information and works generated by AI can be of considerable value, necessarily raising the question of whether such works may enjoy protection. The most prominent and important questions here concern copyright and patent law.
[1] https://cdn.openai.com/papers/gpt-4-system-card.pdf, pp 16 et seq.
At least Austrian copyright law requires a human creator whose decisions and input lead to a creative and original work. Therefore, any works generated entirely by an AI would not enjoy protection under the Austrian copyright regime.
In reality, however, the situation is not always so clear-cut:
As already laid out above, according to Austrian law any technical teaching may be patented under Austrian patent law as long as it is novel, inventive and commercially applicable. However, there is some discussion internationally about whether a technical solution created by an AI is or should be patentable.
Sec 1 para 1 APA defines an invention by merely objective criteria irrespective of any human intellectual effort necessary for an inventive step. However, pursuant to Sec 4 para 1 APA, inventorship serves as a starting point for the right to be granted a patent. Thus, in theory, an invention may not be patented if there is no human inventor involved at all.
At least according to Austrian law, the inventor must always be a natural person who discovered the main inventive idea behind a patentable invention by creative conduct. This is mainly of relevance under Austrian patent law, as only the inventor (and their legal successors) are entitled to be granted a patent. It is disputed whether such a finding must be the result of wilful creative conduct by the inventor, and one may easily argue that any natural person operating the AI, and thus discovering the technical solution proposed by the AI, may be deemed the inventor.
However, even if no inventor is determined, this would not preclude the patentability of the invention. Pursuant to Sec 99 para 1 of the Austrian Patent Act, the Austrian Patent Office will not examine whether the subject applicant of the patent is actually entitled to be granted a patent for an invention mostly made by AI. As there is no legal remedy under the Austrian patent regime based on the mere absence of a human inventor, any such patent can hardly be challenged on such grounds.
The question of inventorship is slightly more relevant with regards to European patent applications. The Legal Board of Appeal of the European Patent Office has already held (EPO BoA J 0008/20) that it is mandatory to designate a human inventor under Article 81 EPC. Thus, any application designating a program, or no inventor at all, will be rejected. However, as the Board expressly pointed out, this may ultimately make no difference with regards to overall patentability, as no provision prevents an applicant from simply naming the user of the AI as the inventor.
Of course, in the same way as AI models themselves, any AI's output may be kept confidential and thus enjoy limited protection against third-party use under the Austrian (and EU) Know-how regime.
Furthermore, certain other IP rights do not require a human creator, in particular most ancillary copyrights (such as sui generis database protection) as well as trademarks. In these cases the output may indeed be protected by such IP rights without the need for human invention, if the other requirements (investment, registration, etc.) are met.
Example of an AI reproducing copyrighted works: ChatGPT provides a pre-existing work of literature upon a user's request.
This problem is the basis of some controversy as regards recently released powerful generative AI tools such as Stable Diffusion, Midjourney or GitHub Copilot/Codex. Diffusion models in some cases deliver images closely resembling copyrighted images that are part of the training data. GitHub Copilot produces code snippets almost identical to code found in other programs made by human programmers.
The best example of this would be any automated translation service. Here the user provides an originally copyrighted text to the AI model, which simply translates it while leaving large parts of its semantics and structure untouched. As the main creative traits of the original work are thus preserved, the mere fact that an AI edited the work does not necessarily deprive it of copyright protection.
The EU's AI Act is likely to have a massive impact on the way we use AI in our business world. It is already having a significant impact internationally, e.g. in Brazil.
AI in the shackles of data protection?
In 2023, the General Data Protection Regulation (GDPR) celebrates its fifth anniversary. Much has happened since it entered into force.
Serbia adopts ethics guidelines for artificial intelligence
On 23 March 2023, the Serbian government adopted Ethics Guidelines for the Development, Implementation and Use of Reliable and Responsible AI ("Guidelines"), which may be seen as yet another step in the process of harmonising Serbia's legislative framework with the European Union, following the Proposal for an AI Regulation announced by the EU Commission two years ago. The Guidelines largely rely on UNESCO's Recommendation on the Ethics of AI adopted in 2021, which Serbian representatives also helped create. Since the EU is awaiting its regulatory framework on AI, Serbia took the first step down this road as well.
The rise of the machines: The EU is getting ready to regulate AI
In April 2021, the European Commission (EC) released its long-awaited proposal for an Artificial Intelligence Act. But what is artificial intelligence?
What is AI and why should lawyers care?
Artificial intelligence (AI) and machine learning are familiar buzzwords when it comes to future technology and fundamental societal shifts. But what is it really all about and why is it so difficult to apply common legal concepts to these developments?
Artificial intelligence is shaping our world and the way we conduct business, and it will continue to do so. With that, regulation and legal considerations will become more important as well.
Our AI Task Force, consisting of experts from various legal areas, is supporting you from start to finish in all your AI matters.
Contact us! We promise you won't get a chatbot's reply.
Veronika Wolfbauer, Counsel (Austria, Vienna)
Günther Leissler, Partner (Austria, Vienna)
Tullia Veronesi, Attorney at Law (Austria, Vienna)
Alexander Pabst, Attorney at Law (Austria, Vienna)
Thomas Kulnigg, Partner (Austria, Vienna)
Marija Vlajković, Local Partner (Serbia)