23 December 2022
Schoenherr publication
Austria | Romania

to the point: technology & digitalisation | December 2022

Welcome to the December edition of Schoenherr's to the point: technology & digitalisation newsletter!

We are excited to present a selection of legal developments in the area of technology & digitalisation in the wider CEE region.

Insights waiting for you in this edition: 

2022 made history as one of the most exciting years for digital law enthusiasts.

The downfall of crypto exchange FTX, epic battles on platform liability and data privacy, not to mention all these new NFT projects… In the antitrust world, our attention was above all focused on the Digital Markets Act (DMA), which came into force on 1 November and aims to ensure that competitors can take part in digital markets and that relationships between so-called gatekeepers and their users are fair.

A few days ago, the European Commission (EC) sent us an early Christmas gift with the newly published draft implementing regulation, which includes procedural rules related to the designation of gatekeepers and the enforcement of the DMA (see here). In general, the draft regulation gives the impression that the EC wants to keep the gatekeeper designation procedure as quick and low-key as possible. Article 29(2) states that the EC "shall endeavour to adopt its non-compliance decision within 12 months from the opening of proceedings." Given the usual duration of antitrust and merger control proceedings before the EC, this seems extremely short and will be challenging for all parties involved.

In a first for the EC, notifying companies will be subject to a strict page limit for the notification form for designation (maximum 50 pages for each core platform service (CPS)) and the rebuttal (maximum 25 pages for each CPS). To all those who just got a bit of a throwback to the good old student days and would simply "cheat" by using a smaller font: such clever strategies will not be possible as "the text shall be in a commonly-used font (…) in at least 12 point in the body of the text and at least 10 point in the footnotes, with single line spacing, and upper, lower, left and right margins of at least 2.5 cm." Whether further explanations can be placed in appendices remains to be seen.
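For anyone curious what a compliant submission would look like in practice, here is a minimal sketch of a document setup in LaTeX that would satisfy these typographical requirements (illustrative only – the draft of course does not prescribe any particular tool):

```latex
% Minimal sketch of a setup meeting the draft's formatting requirements
\documentclass[12pt]{article}        % body text of at least 12 point
\usepackage[margin=2.5cm]{geometry}  % upper, lower, left and right margins of at least 2.5 cm
\usepackage{setspace}
\singlespacing                       % single line spacing
% In the 12pt article class, \footnotesize already renders footnotes
% at 10 point, satisfying the footnote minimum.
\begin{document}
Notification form text goes here.
\end{document}
```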

In any case, the draft clearly shows that the EC does not want to dwell on the gatekeeper designation process for long, despite the profound consequences a designation may have for the notifying company. Interested parties have until 6 January 2023 to comment on the draft. The EC then plans to adopt the new rules by the first quarter of 2023. And even though there may still be changes to the designation process for gatekeepers, one thing is already clear: you will definitely hear more about the DMA in 2023, including in this newsletter.

Thank you for following us in 2022 and please continue to do so next year. We wish you a great holiday season, a Merry Christmas and a Happy New Year! Looking forward to 2023!

On 13 December, the European Commission launched the process to adopt an adequacy decision for EU-US data transfers. The draft decision is intended to address the concerns raised by the Court of Justice of the European Union in its Schrems II decision. It follows the signature of an Executive Order by President Biden on 7 October 2022 and has now been transmitted to the European Data Protection Board (EDPB) for its opinion. Afterwards, it must be approved by a committee of representatives of the EU Member States, with the European Parliament exercising its right of scrutiny, before the European Commission can adopt the final adequacy decision. That decision would then allow data to flow freely and safely between the EU and US companies certified by the Department of Commerce under the new framework. Details here.

Where else but in the land of limitless possibilities, the home of Iron Man and his suit, should a breakthrough in nuclear fusion be achieved? To be fair, considering the significant expertise in physics now available in Austria, among others in the form of our freshly minted Nobel Prize winner, and the progress on the major international ITER project currently under construction in France and the Joint European Torus (JET) near Oxford, the answer is in fact not so cut and dried. Be that as it may, a team from the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) in California recently took a giant leap towards a green, safe and inexhaustible way of generating electricity. Or did it really?

"For the first time, more energy has been gained than consumed in the fusion of atomic nuclei," US Secretary of Energy Jennifer Granholm said. This is mostly accurate, but it lacks a central reference point. Without getting too technical, while it is true that the fusion reaction produced more energy than was put into it, the calculation does not account for the fact that the ~2 MJ of invested laser energy only comprises the energy which was directed into the system, but not the ~300 MJ of energy needed to operate the laser devices. In addition, the fusion energy resulting from the NIF experiment is thermal. Consequently, it would first have to be converted into electrical energy by a corresponding factor, which in turn would worsen its energetical balance.
 
Despite the current breakthrough, it will clearly be decades before fusion energy is widely used. This would also require the creation of a legal framework, which does not yet exist. It is therefore a fallacy to assume that the energy transition may be achieved with nuclear fusion in the near-term. Rather, the energy transition must be achieved with the means already available and by accelerating the transition to renewables. In Austria, important steps have been taken in this context with the Renewable Energy Expansion Act and the associated ordinances.
 
For more information, we invite you to consult our info corner.

As we already mentioned in our previous newsletters, political agreement on an updated Cybersecurity Directive – NIS 2 – was reached in May 2022. On 10 November 2022, the European Parliament approved the Network and Information Security 2 Directive ("NIS 2") and on 28 November the Council adopted the new legislative regime. It will replace and strengthen the existing NIS Directive and aims at achieving a high common level of cybersecurity across the EU, with a view to improving the functioning of the internal market. The approach is to achieve a unified European cybersecurity regime with increased resilience across the entire infrastructure. In essence, it applies to "essential" entities (outlined in Annex I of NIS 2) and "important" entities (outlined in Annex II of NIS 2), such as those in the financial and digital infrastructure sectors.

Who is concerned?
The aim of the original NIS was to regulate the cybersecurity of critical infrastructure, such as those in the energy, healthcare or water supply sectors. One of the strategic goals of NIS 2 is now to expand its scope of application in two ways:

First, NIS 2 provides for extraterritorial reach. Pursuant to its Art 2 (1), it applies to certain entities "which provide their services or carry out their activities within the Union." As a result, digital infrastructure or digital service providers that do not have a European establishment but offer services covered by NIS 2 in the EU will also fall within its scope. This will affect cloud computing service providers, data centre service providers and other online service providers such as marketplaces, search engines and social networks, to name a few.
Second, operators of essential services and providers of digital services in sectors deemed "critical to the economy and society" will also fall within the scope of NIS 2. With this, NIS 2 will also cover medium and large providers of public electronic communications services, digital services (again including social networking platforms and data centre services) and health services, including companies active in life sciences, particularly in pharmaceutical research and development, as well as medical device manufacturers.

Such "essential" and "important" entities must register with the EU's Cyber Security Agency (ENISA). Requirements include conducting risk analysis and security policies for information systems, incident handling protocols, business continuity plans, cybersecurity testing and auditing procedures, cybersecurity training, and implementing supply chain and network security, cryptography and encryption measures.

Under the Directive, Member States must provide for administrative fines of up to EUR 10m or 2 % of the total worldwide annual turnover, whichever is higher, for entities within the scope of NIS 2 that infringe the cybersecurity risk management obligations set out in the NIS 2 Directive. The Directive is expected to be published in the Official Journal of the European Union soon and will enter into force on the twentieth day following publication. Member States will have 21 months from the entry into force of NIS 2 to implement the Directive into local law.
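To illustrate the "whichever is higher" mechanic with a hypothetical figure: for a group with a worldwide annual turnover of EUR 2bn, the turnover-based ceiling would govern, since

```latex
\max\bigl(\mathrm{EUR}\ 10\mathrm{m},\ 0.02 \times \mathrm{EUR}\ 2\mathrm{bn}\bigr)
  = \max\bigl(\mathrm{EUR}\ 10\mathrm{m},\ \mathrm{EUR}\ 40\mathrm{m}\bigr)
  = \mathrm{EUR}\ 40\mathrm{m}.
```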

The Council of the EU adopted DORA – the Digital Operational Resilience Act – on 28 November 2022. DORA imposes uniform requirements for the security of network and information systems of companies and organisations operating in the financial sector as well as of critical third parties which provide information and communication technology ("ICT") services to them, such as cloud platforms or data analytics services. DORA applies to a wide range of financial firms, such as credit and payment institutions, investment firms, crypto-asset service providers and others.

DORA creates a regulatory framework for financial service providers that ensures the digital operational resilience of all EU financial firms. Financial firms must be able to withstand, respond to and recover from all types of ICT-related disruptions and threats. In particular, DORA is intended to improve the conduct of ICT risk management by financial institutions. It also calls for the introduction of a thorough audit of ICT systems and the raising of awareness among supervisors of cyber risks and ICT-related incidents faced by financial firms. To be able to ensure this, financial supervisory authorities will be granted appropriate powers to monitor the risks arising from financial companies' dependence on third-party ICT providers.

DORA provides for administrative penalties and remedial measures to ensure compliance with its provisions. It allows Member States to lay down rules establishing appropriate, effective and dissuasive penalties for breaches.

The adoption of the final text by the Council was the last step in the legislative process. DORA will be promptly published in the Official Journal of the EU and will enter into force on the twentieth day after its publication, but will only apply 24 months after its entry into force.

On 6 December 2022 the Council adopted its common position ("general approach") on the Artificial Intelligence Act. The draft regulation presented by the Commission in April 2021 aims to ensure that artificial intelligence (AI) systems placed on the EU market and used in the Union are safe and respect existing law on fundamental rights and Union values.

What is new in this updated version of the AI Act?

  • Narrowed scope: The wide scope of application was often criticised because it arguably also covered "classic" software. The new draft restricts the scope of application to ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from more classical software systems. The Council's text narrows down the definition to systems developed through machine learning approaches and logic- and knowledge-based approaches.
    In addition, explicit reference has been made to the exclusion of national security, defence and military purposes from the scope of the AI Act. Similarly, it has been clarified that the AI Act should not apply to AI systems and their outputs used solely for research and development, and that the obligations of persons using AI for non-professional purposes fall outside its scope, except for the transparency obligations.
  • Extended prohibited AI practices: The compromise text extends the prohibition of using AI for social scoring to private actors. The use of AI systems that exploit the vulnerabilities of a specific group of persons now also covers persons who are vulnerable due to their social or economic situation.
  • List of high-risk AI amended: In Annex III, the regulation lists the uses of AI that are considered high-risk because they may harm people or property and must therefore comply with stricter legal obligations. In the compromise text the list of high-risk AI use cases has been amended by deleting three use cases, adding two others and fine-tuning some of the remaining ones. Moreover, the classification of AI systems as high-risk now includes an additional horizontal layer on top of the high-risk classification made in Annex III. In order to be considered high-risk, the system must have a decisive weight in the decision-making process and not be "purely accessory" in respect of the relevant action or decision to be taken.
  • Amended requirements: Many of the requirements for high-risk AI systems have been clarified and adjusted in order to be more technically feasible and less burdensome for stakeholders to comply with, for example as regards the quality of data or in relation to technical documentation.
  • General-purpose AI: New provisions have been added to account for situations where AI systems can be used for many different purposes (general-purpose AI), and where general-purpose AI technology is integrated into another system which may become high-risk.
  • Conformity assessments: With the aim of simplifying the compliance framework for the AI Act, the compromise text contains several clarifications and simplifications to the provisions on the conformity assessment procedures. The provisions related to market surveillance have also been clarified to make them more effective.
  • Transparency increase and measures to support innovation: The compromise text includes several changes that will increase transparency regarding the use of high-risk AI systems. It also allows testing in so-called regulatory sandboxes to occur in real-world conditions. Under certain conditions, this real-world testing could also occur unsupervised.

Next: The adoption of the general approach allows the Council to enter negotiations with the European Parliament ("trilogues") once the latter adopts its own position with a view to reaching an agreement on the proposed regulation.

Tullia: Hi ChatGPT, can you introduce yourself to our readers?

Assistant: Hello! I am Assistant, a large language model trained by OpenAI. I am designed to assist users in generating human-like text based on the input provided to me. I am not a real person and do not have personal experiences or thoughts, but I can provide information and answers to questions within the scope of my training. I do not have access to the internet, so I am unable to browse the web or provide information that is not within my pre-existing knowledge. Is there something specific you would like to know or talk about?

Tullia: Could you explain how you learn and where your training data comes from?

Assistant: As a large language model, I am trained on a vast amount of text data to predict the next word in a sequence. This means that my training data consists of many different written works, including books, articles and other forms of written communication. This allows me to generate text that is similar in style and content to the text I was trained on. I do not have access to the internet and cannot learn or update my knowledge based on new information. My training data and knowledge are limited to what I was trained on by OpenAI.

Tullia: Can you give us examples of how you can assist us?

Assistant: As a language model, my primary function is to assist users in generating human-like text. This can be useful for a variety of tasks, such as writing, summarising or translating text. I can also provide information and answers to questions within the scope of my training. For example, if you have a question about a historical event or a scientific concept, I may be able to provide an answer based on the information I have been trained on. I am not a real person and do not have personal experiences or thoughts, so I am not able to provide advice or opinions. I am only able to provide information and generate text based on my training.

Tullia: Are your texts public domain and can they be used in any way?

Assistant: As a language model, I am not a real person and do not hold the copyright to any of the text I generate. This means that the text I generate is not subject to copyright and can be used freely without permission. However, it is important to note that the text I generate is based on the training data I was provided by OpenAI, and this training data may include copyrighted works. If you use the text I generate, it is your responsibility to ensure that you are not using any copyrighted material without permission. You should also attribute the text I generate to me, as the source, to ensure proper attribution.

Tullia: Could you explain in less than five sentences what the fundamental issues of AI regarding intellectual property rights may be?

Assistant: One of the fundamental issues of AI and intellectual property rights is determining who owns the rights to the creations of AI systems. Since AI systems are able to generate original content, such as text, images and music, there is a question of whether the AI itself or the individuals or organisations that created and trained the AI hold the rights to this content. Another issue is the potential for AI to infringe on existing copyrights, particularly when generating content based on existing works. There is also the question of how to protect the intellectual property rights of AI systems and their creators, and how to ensure that these rights are recognised and enforced.

Tullia: Is there any single definition of AI relevant from a legal perspective?

Assistant: There is no single, universally accepted definition of AI from a legal perspective. AI is a broad and rapidly evolving field, and the legal implications of AI are still being explored and debated. Different legal frameworks and jurisdictions may have different definitions and approaches to AI, depending on their specific laws and regulations. As such, the definition of AI from a legal perspective may vary depending on the context and the specific legal framework being considered. It is important for individuals and organisations working with AI to understand the relevant legal definitions and frameworks in their jurisdiction to ensure compliance with applicable laws and regulations.

Tullia: Thank you, have a nice day!

Assistant: You're welcome! Have a great day!

Despite a turbulent year for Central and Eastern Europe and the region's proximity to the Ukraine warzone, last year CEE was one of the most efficient creators of value in Europe, according to the November 2022 CEE Report published by Google & Atomico. Fintech and enterprise software ranked second and third in the top five industries by venture capital investments in the first three quarters of 2022, with the region positioning itself as a solid start-up hub for many local and international investors.
 

The Amendment to the Austrian Copyright Act has fundamentally revised copyright contract law. So far we have provided an overview of the new provisions, taken a look at the implementation of the DSM Directive (new remuneration rules and transparency obligation) and continued with the provisions adopted from the German Copyright Act, namely regarding the transfer of rights by purpose. The series now concludes with our summary of the provisions on new exploitation forms and the right of second exploitation.
