AI Act and European Commission Ethical Guidelines - Soft Law, Hard Consequences

Violetta Matusiak, Data Protection Officer, Crowe Poland
3/3/2026
The ethical guidelines on the use of artificial intelligence published by the European Commission, although formally non-binding, are in practice becoming an important reference point when assessing whether an organization has exercised due diligence in designing and deploying AI systems. The document prepared by the High-Level Expert Group on AI sets a standard for responsible use of technology and is increasingly treated as a compliance benchmark — both by regulators and business partners.

AI Act

What do the European Commission guidelines mean in practice?


The guidelines prepared by the Commission are increasingly treated as a measure of due diligence. In practice, this means the need to implement specific procedures: risk assessments, human oversight of AI systems, algorithmic transparency, data protection, and accountability mechanisms. Organizations are more frequently expected to document that they have conducted risk analyses and implemented appropriate mitigation measures. In this context, the guidelines function as a due diligence standard similar to industry standards or information security best practices. Their application helps demonstrate that an organization acted responsibly and in line with current regulatory expectations.

Although the ethical guidelines are not a source of law, they constitute an important interpretative tool when assessing compliance with regulations. They can also help demonstrate that an organization exercised due diligence in managing technological and legal risks. Failure to refer to these standards may be regarded as a lack of due diligence, particularly in the event of an incident involving an AI system (e.g., an incorrect algorithmic decision, discrimination, or a privacy breach).


Consequences of failing to exercise due diligence


  1. Regulatory and legal consequences

     Failure to comply with ethical AI standards may increase the risk of:

     • violations of personal data protection regulations,
     • liability for damages caused by AI systems,
     • administrative sanctions under the AI Act (in the case of high-risk systems).

     In the event of a dispute or regulatory inspection, documentation confirming adherence to the guidelines may serve as key evidence of due diligence.

  2. Contractual consequences

     Commercial contracts, especially in B2B relationships and regulated sectors, increasingly include provisions on compliance with EU regulations and standards, responsible use of AI, and obligations to conduct risk assessments and audits. Failure to demonstrate due diligence in this area may lead to serious consequences, such as breach of contract claims, liability for damages, or refusal of cooperation by partners that require compliance standards. In relationships with large international entities or public institutions in particular, applying the ethical AI guidelines is often treated as a mandatory element of contractual due diligence.

  3. Reputational and business risks

     Failure to implement ethical standards may lead to a loss of trust among customers, investors, and partners. In practice, this creates risks such as:

     • loss of projects,
     • difficulties obtaining financing,
     • negative impact on brand reputation.


Summary


The European Commission’s ethical guidelines on AI constitute a due diligence standard. Organizations using artificial intelligence should not only be familiar with these guidelines but also be able to demonstrate their practical implementation through procedures, audits, and documentation.

Their significance extends beyond ethics: they may have a real impact on the assessment of legal liability, contractual relationships, and financial risk. Implementing the guidelines should therefore be viewed as an element of risk management and protection of organizational interests, rather than merely a voluntary declaration of values.

Do you need to prepare your organisation for the upcoming challenges of AI usage?

Contact us

Violetta Matusiak
Data Protection Officer
