Who is Liable when AI Makes a Mistake? The Lack of Internal Policies Can Be Costly. Checklist for Companies

December 9, 2025 / Irina Bustan / AI Act compliance

We fully agree: using AI professionally addresses one of the biggest problems entrepreneurs face, lack of time. With AI tools, the time spent drafting a price quote, writing an email or even recruiting staff can be cut in half. In other situations, AI serves as a medical second opinion or resolves an incident logged at critical level in record time. No one denies the tangible advantages.

But while AI increases efficiency, productivity and even expertise, it can also increase legal risk. And if your team uses AI tools without clear rules, you are already exposed. No, it is not enough to warn employees not to enter the personal data of collaborators, or the specifications of the company's product, into ChatGPT, nor to teach them prompting techniques so they get more accurate answers. AI literacy courses and workshops, although extremely useful, are not sufficient either.

The real discussion takes place at a more advanced level: legal liability. If employees use AI without clear limits, enter sensitive data into the tools, treat AI answers as absolute truths or fail to document how they used AI for a given task, we enter the territory of legal risk.

In practice, we are already seeing the first cases in which the involvement of a non-human factor (the AI tool) causes damage, most often to the employer, a business partner or a client/patient. The Achilles heel: there are no internal remedies in place for managing such situations.

Sometimes we encounter the "what could possibly happen?" or "the legislation protects us" mentality. Our clients, on the other hand, understand that there is no room for naivety in business, and that internal remedies are the only ones that truly protect. We explain below.

At present, the framework regulation on the use of AI in the European Union is the AI Act (Regulation (EU) 2024/1689). It imposes certain obligations, including documentation obligations, on providers, deployers (users), importers, distributors and manufacturers of AI systems.

Because the question "who is liable when AI makes a mistake?" naturally arose, and the AI Act does not specifically cover it, the European Commission attempted to regulate such scenarios. However, following the AI Action Summit in February 2025, and in particular the controversial speech of the American delegation, the Commission withdrew its draft directive (the AI Liability Directive). The subject therefore remains unregulated.

What does this mean for AI developers and users? In plain terms, they will have to make do with what is available: the rules on contractual and tort liability, and the rules of labor law, competition law and consumer protection.

The naivety we mentioned above is believing that these rules are sufficient. This is where we step in, with a simple message: AI requires control, and without it you are vulnerable. What does vulnerability mean in practice? Liability for AI-generated errors, various sanctions, financial losses and, ultimately, reputational damage.

A pragmatic and diligent approach is therefore to tick off the following checklist, which applies to any company using AI:

(1) Internal classification of AI uses: which uses are safe and which carry risk;

(2) Establishing how the results produced with AI are verified and approved;

(3) Documenting AI use, through reports and evaluation procedures;

(4) An internal AI Act compliance policy, based on the risk classification, setting the limits of use and the control mechanisms;

(5) Updating the internal regulations and, where applicable, the collective labor agreement, to define which acts constitute disciplinary violations in the context of AI use, the applicable sanctions and the accountability mechanisms;

(6) Internal training on the legal and ethical use of AI, personalized to the company;

(7) Liability clauses in B2B contracts.

Of course, for the other categories (providers, importers, distributors and manufacturers of AI systems), additional documentation measures are required.

Working with the AI consultants of developer clients, or with our own external collaborators for user clients, our team ensures compliance with the AI Act and gives companies the leverage they need to manage legal risks before problems arise: not generic templates, but solutions adapted and personalized to each individual business.
