
02 March 2026
newsletter
romania

When the chatbot becomes a witness: how a simple conversation with AI can prove costly

An employee who asks a chatbot, "How can we align our prices with our main competitor without being detected?" may be handing the competition authority exactly the evidence it needs. Chat histories with artificial intelligence models can be seized during a dawn raid, and their content may be interpreted as evidence of anti-competitive intent. The consequence? Fines of up to 10 % of the company's registered turnover.

Illustrating the point, a court in the United States recently ruled that such conversations are not protected by attorney-client privilege. The mere fact that they were subsequently forwarded to lawyers does not retroactively transform them into privileged communications.

Thus, what may seem like an efficiency tool – turning to AI to generate options in situations involving potential competition law risks – can become a major vulnerability in the context of investigations conducted by the competition authority.

Can a conversation with a chatbot cost a company up to 10 % of its turnover?

The record number of dawn raids carried out by the Romanian Competition Council (RCC), as reflected in its 2025 activity report, signals an intensification of its enforcement activity. Inspection powers are not limited to traditional documents; any information stored or archived in electronic form can be checked and seized, regardless of the medium on which it is stored. An exception applies to documents protected by attorney-client privilege (legal professional privilege).

Interactions with AI models, now routine in professional activities, may become the core body of evidence in an investigation conducted by the RCC if the history of those interactions is collected during a dawn raid and analysed in conjunction with e-mails or other materials, potentially resulting in fines of up to 10 % of the undertaking's registered turnover.

Implications for companies in Romania

The experience of using a chatbot is inherently conversational, creating the impression of a private dialogue carried out away from the watchful eyes of the authorities.

From a legal standpoint, however, the discussion takes place through a third party, under specific contractual terms. In the absence of clear safeguards and well-defined internal policies, these interactions will generally not benefit from legal professional privilege.

The risk is not limited to major market players or company management. From multinational groups to companies with a small market share, any company can be exposed if AI models are used without precautions and without internal guidelines. Even seemingly simple statements such as "What are the risks if we align our prices with our main competitor?" or "How can we avoid detection of a market-sharing agreement?" may, under certain circumstances, constitute evidence in an investigation.

Such interactions may be interpreted as revealing intent, a degree of risk awareness, or even the existence of a strategy, even where the chatbot is used merely to generate documents or rephrase ideas.

What is to be done?

Essentially, the internet remains a public space. From a competition law perspective, a simple test should therefore be applied before using a chatbot: if the information is current or future-oriented, individualised and capable of directly influencing commercial conduct (prices, margins, volumes, customer lists, strategies), it should not be shared in an informal setting. Data with potential competitive impact requires careful analysis, while historical, aggregated or public information is, in principle, less problematic. At the same time, implementing a clear internal policy on data confidentiality in interactions with AI models, supported by regular training programmes for team members, can mitigate these risks.

authors: Georgiana Badescu, Teodora Burduja, Mara Nedelcu