Publications

Labour Law and Artificial Intelligence: Risks and Compliance (2025 Guide)

Categories: Insights, Publications, News | Tags: compliance, Artificial Intelligence

13 Jan 2026

Artificial intelligence and work: what it means for companies 

For companies, the application of artificial intelligence to the workplace represents a profound transformation of decision-making processes that directly affect individuals. AI does not merely automate operational tasks; it intervenes in personnel management, influencing recruitment, evaluation, work organisation and, at times, decisions with significant legal effects. This shift of decision-making power towards algorithmic tools requires a rethinking of work organisation not only from a technological perspective, but also from a legal one, taking into account the protections provided by labour law and data protection legislation. 

The definition: what AI applied to HR processes is 

AI applied to HR processes consists in the use of systems that analyse the personal data of candidates and employees in order to generate assessments, forecasts or recommendations that guide corporate decision-making. From a legal standpoint, these tools often fall within the concept of profiling and, in certain cases, automated decision-making. As a result, their use cannot be driven solely by efficiency considerations, but must be framed within a set of rules ensuring fairness, proportionality and respect for individual rights. 

Practical examples: AI in recruitment, performance evaluation and shift scheduling 

In recruitment processes, AI is used to screen and rank candidates, often already at the early stages of hiring. In performance evaluation, it may support the analysis of quantitative and behavioural indicators, influencing assessments, bonuses or career paths. In shift scheduling, AI enables the optimisation of resources and workloads. In all these areas, algorithmic outputs are not neutral: they steer choices that produce concrete effects on the employment relationship and must therefore be legally sustainable. 

Human decision-making vs automation: the principle of “human oversight” 

The principle of human oversight represents a key element in the use of AI in employment contexts. European legislation requires that decisions producing legal effects or significantly affecting individuals must not be based exclusively on automated systems. This means that human intervention must be genuine, informed and capable of influencing the final outcome. From a legal perspective, human oversight is also the mechanism through which responsibility is attributed, decisions are reasoned, and their legitimacy can be defended in the event of disputes. 

The risks of AI: beyond “job losses” 

Job losses are only one aspect of the broader debate. For companies, the most immediate risks concern exposure to legal liability and sanctions. The uncontrolled use of AI may undermine the lawfulness of decision-making processes, generate litigation and negatively affect corporate reputation. In this sense, AI is not merely a technological issue, but a legal risk factor that must be carefully managed. 

From “jobs at risk” to “legal risk”: the real threats for companies 

The real risk for companies arises when AI is used without a prior assessment of its legal impacts. Opaque, non-explainable decisions or decisions based on improperly processed data can be easily challenged. Legal risk materialises when the company is unable to demonstrate that it has adopted a diligent and proportionate approach in the use of technology. 

Risk no. 1: Algorithmic discrimination (bias in recruitment and evaluations) 

AI systems may reproduce or amplify biases present in training data, producing discriminatory effects even unintentionally. In the HR context, this risk is particularly significant as it affects access to employment, career progression and contractual conditions. From a legal standpoint, algorithmic discrimination may result in violations of anti-discrimination legislation, with significant consequences in terms of liability and litigation. 

Risk no. 2: GDPR violations and employee privacy 

AI applied to HR processes involves complex and often large-scale processing of personal data. Where purposes are not clearly defined, legal bases are inadequate or information notices are insufficient, the risk of GDPR violations is high. In the employment context, the employee’s position of subordination requires a particularly stringent level of protection, making a cautious and structured approach indispensable. 

Risk no. 3: Automated decisions and sensitive areas (sanctions, Article 9 data) 

The use of AI in decisions that significantly affect the employee’s position, such as disciplinary sanctions or decisive individual assessments, raises serious concerns. In such cases, automation is hardly compatible with the principles of proportionality and fairness that characterise labour law. Human intervention must be substantive and documented in order to ensure the lawfulness of the decision. 

How to use artificial intelligence in a compliant manner: 7 legal tips 

1. Audit and impact assessment (DPIA) with the DPO

When AI enters HR processes, the first question is not “what can it do”, but “what impact does it have on people’s rights”. Because recruitment, evaluation and work organisation directly affect the employee’s professional sphere, a DPIA often becomes the most prudent step (and, in many cases, a necessary one) to demonstrate that the company has assessed risks, mitigation measures and less intrusive alternatives ex ante. 

The involvement of the DPO is essential not only for GDPR compliance, but also because it allows the correct structuring of assumptions, legal bases, data retention and security measures, avoiding the emergence of critical issues only once disputes or inspections arise. 

2. Process mapping and documentation (AI processing records) 

Mapping processes means identifying where AI intervenes, which data it uses, which outputs it produces and, above all, how those outputs are used in HR decisions. Documentation (including through a dedicated internal register of AI processing activities) serves to demonstrate accountability and to make decisions defensible: in inspections or court proceedings, the typical question is whether the company is able to explain the logic, purposes and safeguards adopted. 
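The internal register described above can be sketched as a simple structured record. The field names below are illustrative assumptions, not a prescribed format; an actual register would follow the company’s Article 30 GDPR records and the DPO’s template.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIProcessingRecord:
    """One entry in an illustrative internal register of AI processing activities."""
    process_name: str           # e.g. "CV screening in recruitment"
    ai_tool: str                # system or vendor used (hypothetical name)
    data_categories: List[str]  # personal data the system consumes
    outputs: List[str]          # assessments, scores or recommendations produced
    decision_use: str           # how outputs feed into the final HR decision
    human_oversight: str        # who reviews the output and how
    legal_basis: str            # GDPR legal basis relied upon
    dpia_completed: bool = False

# Example entry (all values illustrative)
record = AIProcessingRecord(
    process_name="CV screening in recruitment",
    ai_tool="Example ranking tool (hypothetical)",
    data_categories=["CV data", "application form data"],
    outputs=["candidate ranking score"],
    decision_use="Shortlisting proposal reviewed by a recruiter",
    human_oversight="Recruiter reviews and can override every ranking",
    legal_basis="Art. 6(1)(b) GDPR - pre-contractual measures",
    dpia_completed=True,
)
```

Even in this minimal form, each record answers the questions typically raised in inspections: which logic, which data, which safeguards.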

3. Policies and information notices for employees and trade unions 

The use of AI in the employment context requires transparency, not only as a regulatory obligation but also as a means of reducing the risk of conflict. Internal policies clarify rules of use, limits, responsibilities and prohibitions (for example, which data must not be entered into AI tools and which decisions may not be automated). Privacy notices, in turn, must explain in a clear and understandable manner the purposes of processing, data categories, general logic and impacts on workers. Where technology affects the organisation of work, the proper management of industrial relations and information/consultation obligations also becomes a factor of stability: timely and effective communication prevents AI from being perceived as an opaque tool of control. 
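A policy rule such as “which data must not be entered into AI tools” can also be enforced technically. The sketch below assumes a simple denylist of data categories; the category names are illustrative, and a real policy would be defined by the internal committee and the DPO.

```python
# Illustrative policy check: block prohibited data categories from being
# submitted to an AI tool. The category names are assumptions for this sketch.
PROHIBITED_CATEGORIES = {"health data", "trade union membership", "disciplinary history"}

def is_submission_allowed(data_categories: set) -> bool:
    """Return False if any category prohibited by the internal policy is present."""
    return not (data_categories & PROHIBITED_CATEGORIES)

# Permitted: ordinary organisational data
allowed = is_submission_allowed({"working hours", "shift preferences"})
# Blocked: contains a special category under Article 9 GDPR
blocked = is_submission_allowed({"working hours", "health data"})
```

A check of this kind does not replace the policy itself, but it turns a written prohibition into a verifiable control.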

4. Governance: an internal committee (HR, IT, Legal, DPO) 

The most common mistake is treating AI as an IT project. In reality, within HR processes AI is a governance issue, as it combines decisions about people, data processing and work organisation. An internal committee involving HR, IT, the legal function and the DPO makes it possible to manage the entire lifecycle of the system: tool selection, risk assessment, definition of permitted use cases, periodic controls, incident management and updates. From a legal perspective, such governance is also the most effective way to allocate responsibility and demonstrate organisational diligence. 

5. Management training for “effective” human oversight 

Human oversight is not a mere sign-off. To be “effective”, decision-makers must be able to understand what they are validating: which variables the system has considered, what its known limitations are, and when the output is unreliable or potentially biased. Management training is precisely aimed at preventing the drift towards automation bias, namely the tendency to trust the algorithm because it appears “objective”. In the employment context, this is a crucial issue, since decisions on performance, shifts or recruitment must remain contextualised and proportionate, which requires at least a basic ability to critically interpret algorithmic outputs. 

6. Explainability and traceability of decisions 

In HR, it is not sufficient for a decision to be “correct”: it must also be capable of being justified and reconstructed. Explainability concerns the ability to understand, at least at a general level, the logic that led to a given output; traceability concerns the ability to document who consulted the system, which data were used, which output was generated and how it was used in the final decision. These elements are essential for handling employee requests, complaints, inspections and litigation. In practice, explainability and traceability transform the use of AI from an “act of faith” into a controllable and defensible process. 
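The traceability element described above can be sketched as an append-only audit record: who consulted the system, which data were used, which output was generated, and how it fed into the final decision. The field names and values below are illustrative assumptions, not a prescribed schema.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(user: str, system: str, inputs: list, output: str,
                    final_decision: str, overridden: bool) -> str:
    """Serialise one illustrative audit record for an AI-assisted HR decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,                      # who consulted the system
        "system": system,                  # which AI tool was used
        "input_data": inputs,              # which data categories were used
        "ai_output": output,               # what the system generated
        "final_decision": final_decision,  # the human decision actually taken
        "human_override": overridden,      # whether the reviewer departed from the output
    }
    return json.dumps(entry)

# Example record (all values hypothetical)
line = log_ai_decision(
    user="hr.manager@example.com",
    system="shift-optimiser (hypothetical)",
    inputs=["availability", "contract hours"],
    output="proposed rota v3",
    final_decision="rota v3 adopted with two manual swaps",
    overridden=True,
)
```

Storing one such line per decision makes it possible, months later, to reconstruct not only what the system suggested but also whether and how the human reviewer departed from it.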

7. Contractual alignment with AI providers 

The contract with the vendor is often the point where compliance fails, because a “turnkey” solution is accepted without requiring adequate safeguards. In an HR context, it is essential to regulate in detail aspects such as privacy roles (controller/processor), security measures, data localisation and access, system logic, support for DPIAs, audit rights, incident management and liability in the event of errors or bias. From a legal standpoint, a well-structured contract is also a risk governance tool: if the vendor does not ensure transparency and controllability, the company remains exposed, even where a solution is formally compliant on paper. 

Frequently Asked Questions (FAQ) 

Can I use ChatGPT to evaluate employees? 

Tools such as ChatGPT may be used as operational support, for example for analysis or summarisation activities, but they cannot constitute the sole basis for individual evaluations. Where the output affects the employment relationship, it is necessary to ensure effective human oversight and a decision-making structure capable of justifying and defending the choices adopted. 

Can artificial intelligence decide on a disciplinary sanction? 

A disciplinary sanction cannot be the direct result of an automated decision. Even where AI supports the collection or analysis of information, the final assessment must remain human and must take into account the context and the specific circumstances of the case, in compliance with the fundamental principles of labour law. 

What should be included in the privacy notice regarding the use of AI? 

The privacy notice must clearly indicate whether and how AI systems are used, explaining the purposes, the general logic of operation and the consequences for workers. Transparent communication helps reduce the risk of disputes and demonstrates the adoption of a correct and responsible approach. 

Who is liable if AI makes an error or discriminates? 

Liability remains with the company using the system. Even where the AI is provided by third parties, the employer is responsible for the decisions taken and their effects. This makes careful management of relationships with vendors and a clear allocation of internal responsibilities essential. 

Case law and insights

Artificial Intelligence in the workplace: Guidelines for HR and managers on the use of AI Systems in employment contexts (Econopoly of Il Sole 24 Ore, 27 August 2025 – Martina De Angeli, Alesia Hima)

Artificial Intelligence in the Workplace: Opportunities and Risks to Know

How to manage AI in business: a guide for companies’ management (Agenda Digitale – 4 April 2025, Martina De Angeli)

Artificial Intelligence and Human Resources: what challenges should an HR Manager prepare for? (AIDP, 27 March 2024 – Stefania Raviele, Martina De Angeli)

Did you know that since Friday, October 10, employers are required to inform workers about the use of artificial intelligence in employment relationships?
