
29 November 2023
Schoenherr publication

to the point: technology & digitalisation | November 2023

Welcome to the November edition of Schoenherr's to the point: technology & digitalisation newsletter!

We are excited to present a selection of legal developments in the area of technology & digitalisation in the wider CEE region.

Artists are fighting back against AI

This past spring, we witnessed strong criticism of ArtStation, which quickly snowballed into a mass "No to AI-generated images" protest joined by thousands of professional and amateur artists demanding the removal of AI-generated content. The antagonism of the artistic community towards AI is on the rise: the initial opposition has swiftly escalated into dozens of class action and individual lawsuits in the USA, in which individual artists and authors' guilds are suing providers of generative AI for copyright infringement, with more joining the battle every day.

The artistic community has already voiced several fundamental concerns, some of a moral nature, others purely legal. The ethical arguments revolve around the fear of human work becoming devalued. They can be summed up as follows: AI-generated content is contrary to art, because simply clicking a button to generate an image is not a creative endeavour. In that sense, AI does not really "create" anything new, but simply looks at available art and mixes it into something else. Against that backdrop, artists feel that AI-generated images demean their work, skills and efforts, and undermine the time and talent that go into art.

Beyond the philosophical and moral aspects of creativity, there are also deeper legal issues. One aspect concerns the potentially unfair competition that traditional artists are now facing. Mass-produced AI products can be created at the click of a button and cost next to nothing compared with classic art, something artists will struggle to compete with in the long run. But above all, the commotion around AI seems to boil down to IP-related issues. What happens when AI art infringes the IP rights contained in prior human art? To generate images from prompts, AI relies on available databases comprising billions of images and texts trawled from the web. While those databases ought to contain only content from the public domain, many artists argue that they also host many copyrighted images. So, at least in part, AI relies on pirated human-made art, thereby infringing the IP rights of various rightsholders.

Artists have also stressed that many AI tools use human art without the knowledge or permission of rightsholders. Not only is AI "trained" on intellectual property, it also effectively mashes up elements of protected works without observing the attribution and integrity rights of the copyright holders. Finally, a fundamental critique concerns the status of AI output, with many at odds over whether intellectual property rights can subsist in AI-generated content.

While the courts have yet to have their say in this matter, artists have not remained idle. They are starting to rely on tools that directly contaminate and confuse AI systems, such as Glaze, Nightshade and Kudurru. Nightshade, for example, confuses the matching of images with textual prompts by creating a discrepancy between image and text, thereby causing the AI to pair, for example, the prompt "car" with an image of a cow. "You can think of Nightshade as adding a small poison pill inside an artwork in such a way that it's literally trying to confuse the training model on what is actually in the image," says Ben Zhao, the leader of the research team that built the tool. Working on the same principle, Glaze modifies the pixels in an artwork so that AI cannot reproduce the style of a specific artist, while Kudurru tracks scrapers' IP addresses to either block them or send back unsolicited content (such as the middle finger).

All in all, these digital tools allow artists to disrupt future AI by "poisoning" the copyright works that may be included in training datasets. Data poisoning attacks manipulate training data to introduce unexpected behaviour into machine learning models at training time. As such, they will not help artists "untrain" the existing AI models that have already digested vast numbers of artworks, but they might prevent the future training of AI on their creative works without permission. The idea is to eventually pollute and break future AI models to such an extent that AI companies are forced either to stop training on copyright works or to seek the authors' permission for data scraping.
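To make the mechanism tangible, here is a deliberately simplified sketch of the general idea behind such poisoning: corrupting the pairing between images and their text labels in a training set, so that a model learns wrong associations (a "car" prompt paired with a cow). This is purely illustrative and assumes a hypothetical dataset of (image, caption) pairs; real tools such as Nightshade instead perturb the image pixels imperceptibly.

```python
# Toy illustration of caption-mismatch poisoning (hypothetical dataset; real
# tools such as Nightshade perturb image pixels imperceptibly instead).
import random

# Hypothetical (image, caption) pairs a scraper might have collected.
training_pairs = [
    ("car_photo.png", "car"),
    ("cow_photo.png", "cow"),
    ("dog_photo.png", "dog"),
    ("cat_photo.png", "cat"),
]

def poison_captions(pairs, fraction=0.5, seed=42):
    """Return a copy of the dataset in which a fraction of the captions are
    rotated among the selected items, so image and text no longer match."""
    rng = random.Random(seed)
    poisoned = list(pairs)
    k = max(2, int(len(poisoned) * fraction))
    idx = rng.sample(range(len(poisoned)), k)
    captions = [poisoned[i][1] for i in idx]
    captions = captions[1:] + captions[:1]  # rotate: every pick mismatches
    for i, caption in zip(idx, captions):
        poisoned[i] = (poisoned[i][0], caption)
    return poisoned

# e.g. the prompt "car" may now be paired with a cow image during training.
print(poison_captions(training_pairs))
```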

These tools have come at a time when the debate on the use of copyright works for training purposes has intensified between two contrasting positions: one, voiced by authors and other stakeholders, that such use requires the triple C (consent, credit, compensate); the other, asserted by AI developers, that "fair use" or other legal grounds allow AI tools to be trained on vast amounts of copyright material without the authors' consent. But it seems that the authors are not entirely alone in their position. An executive of Stability AI, Ed Newton-Rex, recently resigned over the company's view that it is acceptable to use artwork for training purposes without the permission of the artists. He told the BBC he thought it was "exploitative" for AI developers to use creative work without consent. "I think that ethically, morally, globally, I hope we'll all adopt this approach of saying, 'you need to get permission to do this from the people who wrote it, otherwise, that's not okay'," he said.

The use of copyright materials for training purposes thus remains one of the key issues in the present IP vs. AI debate. While we await the first decisions of the US courts, which should finally clarify whether the use of copyright works for training purposes is permissible in the USA under the "fair use" doctrine, Europe remains silent, with no indication yet of how this question will unfold in practice. At the same time, European countries may end up with a heavily fragmented approach. Certain countries (such as France) are considering an author-friendly regime requiring AI developers to seek prior permission, credit all individual authors and pay a fair levy for the works used for training purposes, as opposed to the current "text and data mining" exception introduced by the EU Digital Single Market Directive, which allows the scraping of copyright works for certain purposes. And while it remains for legal theory and practice to address the issues raised by artists and resolve the identified problems, the underlying idea (that it is unfair to reap what you did not sow, to steal another's labour of mind) continues to be as relevant today as it was at the dawn of the printing press.

Common shares vs. preferred shares in venture capital

In navigating the venture capital landscape, it is crucial to grasp the distinctions between common shares and preferred shares. While in most Austrian companies all shares rank equally in the Commercial Register and in the articles of association, the shares in start-ups and scale-ups are in most cases contractually classified as either preferred shares or common shares. The classification is typically agreed between shareholders in the (non-public) shareholders' agreement. Common shares are mostly allocated to the company founders, while preferred shares are issued to investors.

Common shares: Think of these as basic ownership units. Usually, the founders get these shares. Their holders have voting rights and a share in the company's profits and assets, but in an exit or dividend distribution they are at the end of the line.

Preferred shares: Although they look the same in the official paperwork, preferred shares come with special rights defined in the shareholders' agreement. A notable feature is the liquidation preference: in the event of an exit or a distribution of profits, holders of preferred shares typically have a claim to be paid first, up to the amount of their investment or even a multiple of it (depending on the agreement between the founders and investors). The liquidation preference goes hand in hand with a higher issue price for preferred shares. Externally, all shares in the venture capital world are typically issued at their nominal values, which are publicly disclosed in the Commercial Register. In reality, however, the issuance of preferred shares is often accompanied by an extra payment agreement called a shareholder contribution, meaning that preferred shares are effectively sold to investors at a higher price. This extra payment is derived from the actual valuation of the company: the higher the company's valuation, the higher the (non-public) shareholder contribution. A simplified numerical sketch of the liquidation preference follows below.
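To see how a liquidation preference plays out economically, consider the sketch below. All figures, and the assumption of a 1x non-participating preference, are hypothetical illustrations rather than a description of any particular deal structure.

```python
# Minimal sketch of a 1x non-participating liquidation preference
# (hypothetical figures; real terms vary by shareholders' agreement).

def exit_waterfall(exit_proceeds, investment, pref_multiple, investor_stake):
    """Split exit proceeds between one investor and the founders.

    With a non-participating preference, the investor takes the better of
    (a) the preference (investment * multiple, capped at the proceeds) or
    (b) its pro-rata share; the founders receive the rest.
    """
    preference = min(exit_proceeds, investment * pref_multiple)
    pro_rata = exit_proceeds * investor_stake
    investor_payout = max(preference, pro_rata)
    return investor_payout, exit_proceeds - investor_payout

# The investor paid EUR 2m for a 20 % stake; the company exits for EUR 5m.
investor, founders = exit_waterfall(5_000_000, 2_000_000, 1.0, 0.20)
print(investor, founders)  # 2000000.0 3000000.0
```

Here the EUR 2m preference beats the investor's 20 % pro-rata share of EUR 1m, so the investor is paid first and the founders share the remaining EUR 3m.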

Cap tables: ownership at a glance

A cap table, short for capitalisation table, is a comprehensive ledger that outlines a company's ownership structure, detailing the distribution of equity among its shareholders. It serves as a fundamental financial document for both founders and investors, offering a snapshot of the company's ownership, valuations and the equity stakes held by various stakeholders. The importance of a cap table lies in its ability to provide clarity and transparency regarding ownership percentages, facilitating effective decision-making, fundraising and strategic planning.

Key points:

  • Ownership transparency: A cap table provides a clear breakdown of who owns what percentage of the company. This transparency is crucial for founders, investors and employees, helping them understand the impact of equity grants, investment rounds and other transactions on ownership distribution.
  • Valuation insight: Venture capitalists and founders use the cap table to assess the company's valuation at different stages of its development. This insight is valuable for negotiating terms during fundraising rounds, mergers, acquisitions or other equity-related transactions.
  • Investor relations: For investors, the cap table is a tool to monitor their ownership and track the performance of their investments. It allows them to assess the dilution effect of subsequent funding rounds and make informed decisions regarding participation in future financing.
  • Strategic decision-making: The cap table influences strategic decisions, such as issuing new equity, stock options or considering exit strategies. Entrepreneurs use it to evaluate the impact of these decisions on existing shareholders and to structure deals that align with the company's long-term goals.
  • Employee equity management: Start-ups often use equity as a tool to attract and retain top talent. The cap table aids in managing employee stock options and other equity-based incentives, ensuring fair and transparent distribution while considering the company's overall ownership structure.
  • Due diligence and acquisitions: During due diligence processes or potential acquisitions, prospective investors or acquirers closely examine the cap table to understand the ownership landscape, uncover potential issues and assess the overall health of the company.

In summary, a well-maintained cap table is an indispensable tool that fosters transparency, aids decision-making and builds trust among stakeholders. Founders should keep careful track of their cap table and update it regularly to reflect changes in ownership and in the start-up's financing over time. The sketch below illustrates the core dilution arithmetic a cap table captures.
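As a rough illustration of that arithmetic, the following minimal sketch shows how a financing round dilutes the founders. All names, share counts and valuation figures are hypothetical.

```python
# Minimal sketch of cap table dilution arithmetic (all names and figures
# hypothetical).

cap_table = {"Founder A": 60_000, "Founder B": 40_000}  # shares held

def issue_new_shares(table, investor, investment, pre_money_valuation):
    """Issue shares to an investor at a given pre-money valuation.

    New shares = existing shares * investment / pre-money, so the investor
    ends up holding investment / post-money of the company.
    """
    existing = sum(table.values())
    new_shares = round(existing * investment / pre_money_valuation)
    updated = dict(table)
    updated[investor] = updated.get(investor, 0) + new_shares
    return updated

cap_table = issue_new_shares(cap_table, "VC Fund", 1_000_000, 4_000_000)
total = sum(cap_table.values())
for holder, shares in cap_table.items():
    print(f"{holder}: {shares:,} shares ({shares / total:.1%})")
```

Issuing EUR 1m at a EUR 4m pre-money valuation leaves the investor with 20 % post-money (investment divided by post-money valuation), diluting the founders from 100 % to 80 %.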

Austria: draft Corporate Law Digitalisation Act 2023 introduces directors' disqualification

With the recently published draft of the Corporate Law Digitalisation Act 2023 (Gesellschaftsrechtliches Digitalisierungsgesetz 2023, GesDigG 2023), the Austrian legislator intends to transpose Article 13i of Directive (EU) 2019/1151 into national law. This article requires Member States to enact provisions that exclude persons with certain previous criminal convictions from being registered as authorised representatives of a company for a certain period of time, a condition referred to as "disqualification". The details of what constitutes a disqualification are left to the discretion of each Member State.

Under the proposed legislation, a person will be disqualified if they have been sentenced to more than six months' imprisonment for certain economic offences, such as fraud, grossly negligent damage to creditors' interests or money laundering. The disqualification will expire three years after the conviction became legally binding. Whether a disqualification applies will have to be assessed ex officio by the competent commercial register court.

According to the draft, this new stipulation will affect limited liability companies, stock corporations, cooperatives, SEs and SCEs. In addition, the eagerly awaited FlexCo is likely to be added to this list as soon as a political agreement is finally reached on this new type of company.

Although the name of the draft – which is derived from the title of the corresponding EU Directive – is somewhat misleading (the draft has nothing to do with digitalisation), it introduces noteworthy changes to keep in mind.

France, Germany and Italy push self-regulation for foundation models

France, Germany and Italy have reached a consensus on an approach to AI regulation that emphasises self-regulation of foundation models through codes of conduct rather than prescriptive obligations. In doing so, three of the EU's largest countries are openly challenging, and possibly putting the brakes on, the AI Act.

The AI Act is intended to become a cornerstone of EU legislation, regulating AI based on its potential for harm. It is currently in the final stages of the legislative process, with the EU Commission, Council and Parliament working under intense pressure in trilogue talks to finalise the law. The EU Commission's draft has been shaken by the emergence of general-purpose AI systems such as ChatGPT, and a key point of contention in the trilogue negotiations is the treatment of foundation models.

The three countries are now proposing an alternative approach that focuses on the applications of AI rather than the technology itself. They suggest that developers of foundation models should produce "model cards": technical documents that summarise information about the models they have trained, including data about the models' capabilities, limitations, biases and security assessments. It is questionable whether the AI Act's risk-based approach, which also covers the use of AI, would not already suffice for this purpose. In any case, the idea of the three Member States is to foster an environment where innovation and security coexist without initial sanctions; sanctions would only be considered following systematic breaches of the code of conduct and a thorough analysis of the errors identified.
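As a rough illustration, a model card of the kind proposed might contain entries along the following lines; the field names and values are purely hypothetical assumptions based on the elements listed above, not a prescribed format.

```python
# Hypothetical sketch of a "model card"; fields mirror the elements named in
# the proposal (capabilities, limitations, biases, security assessments).
model_card = {
    "model": "example-foundation-model-1",        # hypothetical name
    "developer": "Example AI GmbH",               # hypothetical developer
    "capabilities": ["text generation", "summarisation", "translation"],
    "limitations": ["no factual guarantees", "training data cut-off 2023"],
    "known_biases": ["under-represents low-resource languages"],
    "security_assessments": ["internal red-team report", "jailbreak testing"],
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```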

It remains to be seen whether this agreement between the three countries will be able to stop the AI Act.

US executive order on safe, secure and trustworthy AI

This executive order, dated 30 October 2023, establishes a comprehensive framework for the safety and trustworthiness of AI, addressing a wide range of concerns from privacy and fairness to national security. Among its key points, it directs developers of powerful AI systems to share the results of safety tests and other important information with the US government. This applies in particular to models that pose a risk to national security, economic security or public health and safety. The National Institute of Standards and Technology (NIST) will develop rigorous standards for testing AI systems, with an emphasis on extensive red-team testing to ensure safety before public release.

The executive order also emphasises the protection of Americans' privacy in the age of AI and calls for the development of privacy-preserving techniques and technologies, including advances in cryptographic tools and the establishment of privacy standards for federal agencies. Equality and civil rights are also a focus, with the order mandating measures to prevent AI algorithms from exacerbating discrimination in areas as diverse as housing, healthcare and criminal justice. The Biden-Harris administration wants to ensure that AI promotes equality and civil rights, with the Department of Justice and federal civil rights agencies playing a key role.

In addition, the executive order addresses the impact of AI on consumers, patients and students, and advocates the responsible use of AI in healthcare and education. This includes the establishment of safety programmes to address harm or unsafe practices associated with AI in healthcare. In relation to the workforce, the order recognises the transformative impact of AI on jobs and workplaces. It calls for the development of principles and best practices to mitigate AI-related harms and maximise benefits for workers, including on issues of job displacement and workplace justice. Innovation and competition in the AI sector are to be encouraged, with the order boosting AI research in the US and promoting a fair and competitive AI ecosystem. This includes supporting small developers and entrepreneurs, while expanding opportunities for highly skilled AI professionals to work in the US. To ensure the government's effective use of AI, the executive order provides guidelines for federal agencies' use of AI, including standards to protect rights and security and to improve AI procurement.

The executive order aims to position the US as a leader in global AI development, with plans for international collaboration on AI standards and safe deployment. It introduces a comprehensive set of measures to ensure the safe and secure development of AI, reflecting the administration's goal of harnessing the potential of AI while mitigating its risks and balancing innovation with safeguards. In comparison, the European Union's AI Act similarly emphasises regulating AI based on potential risks.

EDPB Guidelines on the technical scope of Art. 5(3) ePrivacy Directive

On 14 November 2023, the European Data Protection Board adopted its Guidelines 2/2023 on the Technical Scope of Art. 5(3) of the ePrivacy Directive, the provision underlying cookie consent requirements. The Guidelines aim to provide greater legal certainty to data controllers and individuals by clarifying which technical operations, particularly new and emerging tracking techniques, are covered by the Directive. The Guidelines analyse, among other things, the definitions of four key elements of the Article: "information", "terminal equipment of a subscriber or user", "gaining access" and "stored information and storage". The document also considers a number of potential use cases, such as URL and pixel tracking, local processing, tracking based on IP only, intermittent and mediated IoT reporting, and unique identifiers. The Guidelines will be submitted for public consultation for a period of six weeks (until 28 December 2023).
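By way of illustration, pixel tracking, one of the use cases analysed, typically works by embedding a tiny remote image in a web page or e-mail: fetching it discloses information from the user's terminal equipment to the tracker. The minimal sketch below shows the principle; the server, endpoint and parameters are hypothetical.

```python
# Minimal sketch of a tracking pixel server (hypothetical host/endpoint).
# A page or e-mail embeds: <img src="http://tracker.example/p.gif?uid=123">
# Fetching the image reveals the client's IP, user agent and the uid.
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal 1x1 transparent GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # What the tracker learns from a single image request:
        print("hit:", self.client_address[0], self.path,
              self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PixelHandler).serve_forever()
```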

CJEU curbs national platform laws: the Austrian Communication Platforms Act

The Communication Platforms Act came into force in Austria in 2021. Back then, the Austrian legislator aimed to impose a set of obligations relating to content moderation and take-down systems on providers of communication platforms, whether established in Austria or in any other country. The Digital Services Act was still being drafted at the time.

In the preliminary ruling proceedings between Google, Meta and TikTok on the one side and the Austrian communications authority on the other, the Austrian Supreme Administrative Court asked the CJEU whether a Member State may impose general and abstract measures on communication platforms based in another Member State.

Short answer: no. The CJEU held that authorising Member States to adopt measures of a general and abstract nature applying without distinction to any provider of a category of information society services would call into question the principle of control in the Member State of origin. To allow the Member State of destination to adopt general and abstract measures aimed at regulating the provision of information society services by providers not established on its territory would undermine mutual trust between Member States and would be in conflict with the principle of mutual recognition.

The CJEU's decision has far-reaching consequences: general and abstract national provisions regulating intermediaries, such as the Austrian Communication Platforms Act or the German Network Enforcement Act, will become a dead letter. The Austrian legislator seems to have recognised this in the meantime, as the draft of the Austrian DSA Accompanying Act repeals the entire Communication Platforms Act.

In any case, the Digital Services Act will apply to all intermediary services from 17 February 2024. If you are unsure whether it applies to your eCommerce service and what obligations your business might face, we are happy to provide you with the necessary information.

Austrian Supreme Court on the national DSA accompanying legislation

The Austrian Supreme Court has published its opinion on the national accompanying legislation to the Digital Services Act (Regulation (EU) 2022/2065). Among other things, the Austrian legislator proposed a provision entitling persons subjected to "substantial insult to honour" online to compensation for the personal injury suffered. Remarkably, the Austrian Supreme Court notes that insults to honour are only one part of online hate speech: victims often also face credit-damaging, defamatory or otherwise untrue allegations, as well as coercion, threats or incitement to commit criminal offences. The Supreme Court therefore suggests extending the proposed compensation claim to any significant violation of personal rights infringing human dignity in an electronic communications network, rather than limiting it to insults to honour.
