27 June 2024
Schoenherr publication
austria poland serbia

to the point: technology & digitalisation | June 2024

Welcome to the June edition of Schoenherr's to the point: technology & digitalisation newsletter!

We are excited to present a selection of legal developments in the area of technology & digitalisation in the wider CEE region.

Summer and the holiday season are just around the corner, but the world of technology and law is not slowing down. Before the summer break, we would like to share with you some exciting news.

At its Worldwide Developers Conference in June, Apple announced that it would be integrating its first generative artificial intelligence system, which it is calling Apple Intelligence, into its iPhones. The AI is expected to work together with Siri, broadening the range of the voice assistant's skills and marking a new chapter in the sector. Of course, the solution will come with new data protection and other challenges (especially in the EU, which may delay the launch in the region), so stay tuned for our future updates!

Meanwhile, the Polish data protection authority has published a survey addressed to controllers, developed by data protection authorities on behalf of the European Data Protection Board (EDPB). Its purpose is to determine how the right of access to data is implemented across the relevant EU countries. The EDPB will then analyse the survey results received from all over Europe and, depending on the answers, appropriate action will be taken at the EU level.

Additionally, on 1 June a new law came into force in Poland that allows any adult to block their PESEL (Personal Identity Number) via a mobile application or website. This allows Polish citizens to avoid risk and liability if a criminal attempts to use their PESEL and identity to take out a loan or credit or to obtain a duplicate SIM card. Thus, if a financial institution grants credit to a criminal using a blocked PESEL, it will not be able to seek any payment from the holder of that PESEL.

Keep reading below for all the news we have prepared for you in this edition of to the point: technology & digitalisation.

First it was the artists who initiated a legal clash with AI enterprises. Then writers launched a series of legal actions against companies specialising in generative AI, soon joined by various publishers. Now, while the entertainment industry grapples with the implications of AI, it appears that actors might join the battle. Potential disputes centre on the contentious use of copyrighted material and personal information to fuel chatbots that emulate human behaviour.

The recent controversy involving OpenAI and Scarlett Johansson sparked a legal and ethical debate over the use of AI to generate or imitate human voices and other forms of expression. The dispute highlights the challenges and risks of creating and regulating AI systems that can potentially infringe on the rights and interests of artists.

OpenAI recently developed GPT-4o, a new model capable of reading text aloud. One of the voices it created, "Sky", is strikingly similar to the voice of Scarlett Johansson. OpenAI had previously approached Johansson to license her voice for Sky, which she declined. After the release of the Sky demo, Johansson's legal counsel sent two letters to OpenAI, accusing it of violating her right of publicity, her right of privacy and possibly federal trademark law. OpenAI took the demo down. Interestingly, it seems the choice of Johansson's voice was not incidental but due to her role as Samantha, a digital assistant who develops a deep connection with a human user in the 2013 film Her. Irrespective of whether there was an attempt to replicate Johansson's voice, OpenAI CEO Sam Altman's own tweet suggests a desire to associate the product with the concept of general artificial intelligence, reminiscent of those depicted in science fiction narratives.

Johansson may be able to rely on copyright. Although the style of voice itself is not subject to copyright, the legal implications may vary depending on whether the materials used to train a synthetic voice model are based directly or predominantly on the genuine voice of the individual being replicated. Had OpenAI utilised Johansson's movie clips or other copyrighted materials to craft Sky, they could have encountered legal issues for copyright infringement if they did not secure the necessary permissions. However, this does not seem to be the case, at least from what OpenAI has previously communicated. The organisation has stated in a blog post[1] that it did not use Johansson's actual voice but instead employed "a different professional actress using her own natural speaking voice".

While this may serve to avert a copyright infringement claim, it is unlikely to shield OpenAI from the second legal concern – the infringement of Johansson's right of publicity. Voice actors cultivate a distinctive style characterised by their rhythm, pace, inflections and accents, which effectively becomes their trademark within the industry, and as such is safeguarded under the right of publicity. The unauthorised replication of a voice actor's distinctive style could potentially violate their right of publicity.

However, even in the USA, the rules and available remedies for voice imitation are not clear or consistent across the states. The right of publicity is not a nationwide law and varies in scope and application. SAG-AFTRA, the union representing actors and voiceover artists, is advocating for a federal right of publicity law, and has introduced three bills in Congress to protect performers' rights.

The situation is far more complex in the EU, as there is no EU-wide right of publicity. The right of publicity is not explicitly recognised as a distinct legal right in the same way as in the United States. Instead, it is governed by a combination of national laws and EU regulations that can vary significantly from one Member State to another. For example, the EU's General Data Protection Regulation (GDPR) provides individuals with certain rights over their personal data, which can include their image and name, but also audio recordings of their voice. The GDPR's broad definition of personal data can encompass elements that are also protected under the right of publicity.

Some EU countries protect the image of a person through copyright or related rights. If a person's name or likeness has been registered as a trademark, they may be able to prevent unauthorised commercial use under trademark law. In some circumstances, the unauthorised commercial use of a person's identity may be challenged under unfair competition laws, which protect against misleading practices and the exploitation of the reputation of others. Finally, many EU countries have civil law protections that can be invoked to protect personality rights, such as the right to one's own image or the right to privacy. However, while most of these rules protect a person's name or image, it is questionable if and to what extent they can be used to protect a person's voice.

The OpenAI and Scarlett Johansson affair raises important questions about the future of AI and the entertainment business. How can AI be used responsibly and ethically to enhance or complement human creativity and expression? How can the rights and interests of artists, authors and publications be protected and respected in the age of AI? How can laws and regulations keep up with the rapid and complex developments of AI technology? These are some of the issues that need to be addressed and resolved as AI becomes more powerful and pervasive in the entertainment sector.



The new Austrian company form, the Flexible Company or FlexCo, is now six months old. Introduced at the beginning of 2024, the FlexCo promises more flexibility compared to the most popular company form in Austria, the limited liability company (LLC). The FlexCo responds to the growing demand, especially from the start-up sector, to modernise Austrian corporate law. Six months on, approximately 300 FlexCos have been established. What have we learned so far?

Everyone's talking about it

We have spoken with a great many clients about transforming an existing LLC into a FlexCo or about whether to establish a FlexCo or an LLC. While interest is high, clients are still hesitant to transform their companies, usually because they have not yet had the opportunity to raise such a major change to the corporate structure with shareholders. Some clients are put off by the novelty of the FlexCo, fearing risks from perceived legal uncertainty, as it remains unclear whether the FlexCo will gain market acceptance. Many newly established start-ups, however, have opted for the new company form. There is even a law firm organised as a FlexCo, so the new company form is evidently good enough for lawyers themselves!

Employee participation

The FlexCo introduces a new share class aimed at employee participation. However, since the accompanying tax provisions, combined with the employee-focused rights and privileges of the new share class, are not convincingly attractive, the new share class is rarely used in practice. Notably, the new tax provisions apply to all capital stock, not only to the new FlexCo share class.


The FlexCo promises no transaction costs thanks to the absence of formalities for share transfers and capital increases. This is not entirely true: a lawyer or notary is still required for share transfers, and a notary for capital increases. Moreover, uncertainty remains about the details of implementing transactions under the new transfer formalities.


The FlexCo has earned its name. It provides a flexible set of rules, which have proven effective in practice. For instance, the implementation of authorised capital and the option to redeem own shares are extremely practical features. The possibility of passing circular resolutions in text form, without the need for feedback from every shareholder, is a key factor in choosing the FlexCo over the LLC.


While 300 FlexCos is a small number compared to the LLCs established over the same timeframe, it is an important milestone for this young company form. Happy half-year birthday!

The Polish Ministry of Digital Affairs has announced plans to implement the "Cybershield Poland" programme, a comprehensive initiative aimed at bolstering the country's cybersecurity infrastructure. This ambitious programme is a response to the evolving cyberthreat landscape and aligns with the updated National Cybersecurity System Act, which incorporates the EU's NIS2 Directive. The initiative is set to receive an investment of PLN 3bn over the next two years, sourced from both EU funds and the national budget.

Key projects under Cybershield Poland include the Cybersecure Municipality initiative, which aims to enhance cybersecurity at the local government level, and the development of Local Cybersecurity Centres (LCCs) by 2025. These centres will serve as IT Shared Services Centres, offering high-level cybersecurity services to multiple municipal institutions.

Additionally, the programme will see the establishment of the NASK Cybersecurity Centre by 2029. This centre is designed to increase the National Research Institute's capacity to detect and combat cyberthreats. Other notable projects include the expansion of the S46 System, which will facilitate information sharing on cyberthreats across the National Cybersecurity System, and initiatives to support the fight against cybercrime.

Cybershield Poland also emphasises continuous improvement of cybersecurity standards through regular security reviews, audits and certification processes. There will be a strong focus on training and capacity building, with programmes aimed at both technical and non-technical personnel. Open courses on cybersecurity and digital hygiene will be offered, alongside the development of new programmes such as PWCyber and PWSkills.

To enhance incident response, the programme introduces automatic and rapid information exchange on incidents between the Office for Personal Data Protection and CSIRTs. Regular national-level incident response exercises will also be conducted.

Overall, Cybershield Poland represents a significant step forward in fortifying Poland's cybersecurity defences, ensuring the country remains resilient in the face of increasing cyberthreats.

On 1 July, a law that implements the provisions of the DAC7 EU Directive will come into force in Poland. DAC7 is Council Directive (EU) 2021/514 of 22 March 2021, amending Directive 2011/16/EU on administrative cooperation in the field of taxation. It aims to prevent tax avoidance in the digital economy by obliging digital platforms to report data on their users and revenues and enabling the exchange of such data by EU Member States.

Pursuant to the EU directive, reporting obligations will apply to platforms where users sell goods, provide services, rent out property or provide access to vehicles. Undoubtedly, the largest platforms, such as Allegro, eBay, OLX, Booking, Airbnb, Uber and Bolt, will fall under these obligations in Poland.

The reporting will cover sellers who, in accordance with EU law, have performed at least 30 relevant activities or generated total revenue from these activities exceeding the equivalent of EUR 2,000 in a given year. However, the reporting obligation will not apply to every seller/user on a digital platform. Exempt are: (i) sellers who have not exceeded the aforementioned thresholds; (ii) governmental entities; (iii) entities whose shares are regularly traded on a recognised securities market, or affiliates of such entities; and (iv) entities that have exceeded 2,000 real estate rental transactions in a given reporting period.

Any seller on a platform that is recognised by its operator as fulfilling the above criteria will be required to provide the platform operator with all the data required by the law. The platform operator will be able to request the seller to provide the relevant data at any time. The seller will have to make the data available no later than on the day the seller exceeds the thresholds set out above. If the seller refuses to provide the data, the seller will be blocked by the platform operator.

The first reporting in Poland, due in 2025, will cover data for 2024 as well as for 2023, even though the regulations only enter into force on 1 July this year. This is because the directive required Member States to apply its provisions from January 2023, an obligation Poland has implemented with a delay.

The Austrian Supreme Court recently ruled on a landmark defamation case involving a police officer who became the target of a social media "shitstorm". The case stemmed from a video posted on Facebook depicting the officer during a protest, accompanied by a defamatory caption that incited negative engagement. The plaintiff, who was wrongly implicated in the incident, sought compensation for the immaterial damage caused by the ensuing shitstorm.

In its ruling, the Supreme Court addresses the inherent difficulty in tracing individual contributions to collective harm and allows the victim to seek full compensation from any one participant. Thus, a victim of a shitstorm does not need to identify the specific source of every defamatory statement to claim damages. It is sufficient to prove participation in the shitstorm (here: the defendant shared the original posting). Therefore, any participant in the shitstorm can be held responsible for the entire damage caused.

The Supreme Court also underscored that the difficulty of locating other liable parties and the risk of non-recovery from individual infringers must be borne by the infringers. The Supreme Court therefore awarded the plaintiff the full damages suffered (EUR 3,000) against the defendant. After all, the infringers are at least partially connected and know to whom they forwarded the post. They must therefore settle the distribution of damages among themselves through recourse.

In a landmark decision, the French competition authority, Autorité de la concurrence, has imposed a hefty fine of EUR 250m on Alphabet Inc., Google LLC, Google Ireland Ltd and Google France. This penalty is a consequence of Google's failure to adhere to commitments it had agreed to as part of a settlement regarding the use of copyrighted content in its AI service, Bard, now rebranded as Gemini. This development is a pivotal moment for copyright law and generative AI, highlighting the legal obligations of tech companies in data utilisation and underscoring the importance of robust compliance strategies and ethical guidelines as the industry evolves.

Background of the case

The roots of this decision can be traced back to the French law of 24 July 2019, which was designed to ensure fair negotiations between press agencies, publishers and digital platforms. This law was a response to the shifting landscape of the press sector, characterised by the rise of digital audiences and the decline of print circulation, alongside the concentration of advertising revenue in the hands of major digital platforms.

In April 2020, the Autorité de la concurrence issued interim measures against Google, which were followed by a EUR 500m fine in July 2021 for non-compliance. Google was then ordered to comply with the initial injunctions under penalty payment.

Commitments and breaches

In June 2022, the Autorité accepted commitments from Google to resolve competition concerns, including good-faith negotiations, transparency in remuneration for related rights, and the assurance that these negotiations would not interfere with other economic relationships. Compliance with these commitments was to be monitored by an appointed trustee, Accuracy. At the time, Google avoided a fine by pledging to enter into good-faith negotiations with news providers on compensation for the use of their content, among other issues, but it failed to follow through.

However, Google has been found in breach of these commitments, particularly in its cooperation with the monitoring trustee and in failing to comply with four out of seven key commitments. Notably, Google's Bard AI service used content from press agencies and publishers to train its model without proper notification or the option for these entities to opt out, thereby hindering their ability to negotiate remuneration. The French competition authority concluded that "Google trained its Bard AI on copyright news articles without giving publishers sufficient information about remuneration or an opportunity to opt out" and therefore fined Google for "using news content to train its Bard AI service without the permission of the publishers, and without providing access to an opt-out tool that would have let them contest the AI usage".[1]

Google's response and corrective measures

Despite describing the fine as "disproportionate", Google has chosen not to contest the facts and will pay the fine, expressing a desire to move forward. Additionally, Google has proposed corrective measures to address the breaches identified by the Autorité.

Significance of the fine

The intersection of copyright law and generative AI has been marked by numerous lawsuits, particularly in the United States, with ongoing debates about the applicability of copyright laws to AI. Nevertheless, this penalty represents the first known instance of a company in Europe being fined for using data without authorisation for AI training purposes.

Although publicised in March, the significance of the fine seems to have been overlooked, as many are only now becoming aware of its implications. This fine could potentially be a catalyst for change, prompting other nations to take similar actions, influencing ongoing copyright lawsuits, and encouraging companies to refine their AI training methodologies.

The fine's importance is underscored by its novelty; it is the first known case where training on copyrighted data has been declared illegal. The cumulative effect of potential additional fines and the role of this precedent in supporting global court cases are noteworthy. The involvement of a competition authority in issuing the fine opens the discussion on the use of other legal frameworks to regulate AI training practices.

Concerns regarding the fine's effectiveness

While the fine's impact is acknowledged, it is important to note that it does not constitute legal jurisprudence, nor is it the first instance of Google being fined by a competition authority in France. Although significant, it may not be a "game-changer" in the broader context of copyright law. The outcomes of Getty Images' lawsuits in the UK and the US are anticipated to be more influential in shaping the future of copyright in AI.

The allocation of the fine's proceeds raises questions about the benefits to copyright owners. If companies can continue to use copyrighted data in their current models after paying a regulatory fine, the interests of copyright holders may not be fully served. There are concerns about whether fines alone can prevent future infringements or rectify past violations, and whether they offer any protection to copyright owners seeking the removal of their data from existing models.

Wider implications for the digital content landscape

The imposition of a fine on a major tech company for using copyrighted material in AI training sets a significant precedent, marking the first known instance where such an action has been deemed illegal by regulatory bodies. Although a EUR 250m penalty may seem insubstantial for a corporation of Google's scale, the implications are far-reaching. This could lead to a cascade of similar fines or bolster legal actions worldwide. Notably, the fine was levied by a competition authority, suggesting that there may be alternative legal avenues, such as fair competition laws, to restrict the use of copyrighted content in AI training, beyond traditional copyright laws.

The decision underscores the ongoing tension between large digital platforms and content creators over the use and monetisation of copyrighted material. It highlights the need for transparent and fair practices in the digital economy, especially as AI technologies become more prevalent in content generation and distribution, while serving as a reminder of the importance of adhering to regulatory commitments and the potential consequences of failing to do so. It also emphasises the role of national and European laws in shaping the interactions between digital giants and the press, aiming to protect the interests of content creators and maintain a balanced digital marketplace.

As the digital landscape continues to evolve, this case will likely influence future negotiations and regulatory actions concerning the use of copyrighted content by AI services and other digital platforms.