Law No. 132/2025 aims to ensure transparency, fairness, and the protection of workers’ dignity while promoting the ethical and responsible use of artificial intelligence in the workplace. It establishes that public and private employers and contractors must provide written notice to employees and to workplace union representatives (RSA/RSU, i.e. company-level trade union bodies) regarding the use of A.I. systems in processes that affect the management of the employment relationship.

The obligation applies whenever A.I. is used for activities such as recruitment, task assignment, performance evaluation, or termination of employment.

When and how to comply 

  • The notice must be delivered before the start of the employment activity or, in any case, before the use of the A.I. system selected by the employer for managing, even in part, the employment relationship.
  • It must be drafted in a transparent, structured, and machine-readable format, in accordance with Legislative Decree No. 104/2022 (the so-called “Decreto Trasparenza”).
  • The notice must also be sent to company trade union representatives or, in their absence, to the local branches of the relevant trade unions.
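Neither the law nor the decree prescribes a specific file format for the "structured, machine-readable" notice. Purely for illustration, such a notice could be sketched as a simple serialisable record; every field name below is an assumption of ours, not a requirement of Law No. 132/2025 or Legislative Decree No. 104/2022:

```python
import json

# Hypothetical sketch of a machine-readable A.I. notice.
# All field names are illustrative assumptions, not prescribed by law.
notice = {
    "employer": "Example S.p.A.",
    "ai_system": "Automated CV screening tool",
    "purpose": "recruitment",  # one of the covered activities
    "data_processed": ["CVs", "aptitude tests"],
    "decision_logic": "ranking of candidates against job-profile criteria",
    "human_oversight": True,  # a person validates the system's output
    "recipients": ["employees", "RSA/RSU"],
}

# Serialising to JSON yields a transparent, structured,
# machine-readable document that can be delivered and archived.
print(json.dumps(notice, indent=2, ensure_ascii=False))
```

The same record could equally be rendered in another structured format; what matters for compliance is that the required information is complete and delivered before the system is used.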

What companies should do to ensure compliance 

  • Conduct a mapping of all A.I. systems used in the management of employment relationships. 
  • Update internal policies and privacy notices concerning data processing and personnel management. 
  • Prepare a standard information template to be provided to employees and trade union representatives. 

*****

Our professionals are at your disposal for any further information. 

On Wednesday, September 17, 2025, the Italian Senate definitively approved the bill containing “provisions and delegations to the Government on artificial intelligence,” connected to the national budget law.

For Italy, this represents the first national legislation specifically addressing artificial intelligence.

The bill, consisting of 28 articles divided into six chapters, does not directly regulate the use of A.I., but delegates to the Government the responsibility to adopt implementing decrees for different sectors.

The key points of the new law include:

  • a human-centered approach: A.I. must be a tool to support decision-making without replacing human intervention,
  • proper, transparent, and responsible use of A.I.,
  • the need to guarantee fundamental rights, non-discrimination, gender equality, safety, human accountability, personal data protection, privacy, accuracy, and sustainability.

The competent national authorities are: AgID (the Agency for Digital Italy), acting as the notification and regulatory authority, and the National Cybersecurity Agency (ACN), responsible for supervision and inspections.

The legislator identified four main sectors: healthcare; employment; public administration and justice; education and sports.

Main provisions on employment

Article 11 – Provisions on the use of artificial intelligence in employment

  • Artificial intelligence must be used to improve working conditions, safeguard the physical and mental integrity of workers, enhance the quality of performance, and increase productivity, in line with EU law.
  • The use of A.I. in the workplace must be safe, reliable, and transparent, and cannot conflict with human dignity or infringe personal data privacy. Employers or contractors are required to inform workers of the use of artificial intelligence, in the cases and according to the procedures provided by applicable law.
  • The use of A.I. in the organization and management of employment relationships must always respect the inalienable rights of workers, ensuring non-discrimination on the basis of sex, age, ethnic origin, religion, sexual orientation, political opinions, and personal, social, or economic conditions, in accordance with EU law.

Article 12 – Observatory on the adoption of artificial intelligence in the workplace

  • A ministerial Observatory will be established to define a national strategy on the use of artificial intelligence in employment, monitor its impact on the labor market, and identify the sectors most affected by its adoption.

Article 13 – Provisions on intellectual professions

  • The use of artificial intelligence in intellectual professions must be limited to instrumental and support activities, with the intellectual work of the professional remaining predominant.
  • To safeguard the fiduciary relationship between professional and client, information on the A.I. systems used must be communicated to the client in clear, simple, and comprehensive language.

Other related insights:

Artificial Intelligence (AI) is now a concrete component of business processes, increasingly applied in Human Resources management. Algorithms promise efficiency and impartiality in complex tasks such as recruiting, performance evaluation, or task allocation. However, this promise comes with significant risks. Precisely because of their potential impact on fundamental rights, AI systems used in the workplace are now considered “high-risk technologies,” subject to stringent obligations regarding transparency, human oversight, and impact assessment on workers. 
 
In this context, the “Guidelines for the use of Artificial Intelligence Systems in the workplace” recently published by the Italian Ministry of Labour offer a strategic – not legally binding – document. It represents a clear political and cultural orientation for the Italian business community, with particular attention to SMEs. The goal is twofold: to promote responsible AI adoption and ensure full protection of workers’ rights. 

The guidelines are based on a clear message: AI must not become a “black box” that makes opaque and unchallengeable decisions. Even when technology is developed by third parties, ultimate responsibility always lies with the employer. Companies must move from experimental adoption to structured management in line with compliance and sustainability principles.

The four pillars of the AI guidelines 

The Ministry’s document is structured around four key principles, which reinforce obligations already found in current legislation and case law, strengthening their preventive application. 
 
1. Mandatory Human Oversight 

Decisions affecting the legal status of workers—such as hiring, promotions, evaluations, disciplinary actions, or dismissals—must not rely solely on algorithmic judgment. A competent and authorized person must exercise effective, informed, and traceable human control, capable of understanding, validating, or overriding the algorithm’s recommendations and bearing full responsibility for the final decision. 

2. Algorithmic transparency 

Companies must clearly inform workers about the use of AI in processes that affect them. Generic communication is not enough. Workers must be told what data is processed (e.g. CVs, performance, aptitude tests), what logic and criteria the algorithm uses, and what influence it has on the outcome. The guiding principle is “intelligibility”: systems must be explainable, understandable, and contestable. 

3. Impact assessment and risk mitigation 

In line with the EU AI Act, the use of AI in HR is considered a high-risk activity. Therefore, employers are required to carry out a prior impact assessment, considering risks such as discrimination, privacy breaches, and data quality. The guidelines also encourage regular audits and systematic controls of algorithm performance. 

4. System mapping and accountability 

Organizations must know what AI systems are being used, where, for what purpose, and who is responsible. Responsibility cannot be outsourced to technology providers. Internal governance must ensure ethical and correct use of AI. Mapping systems and defining internal roles are essential to uphold the principle of accountability. 

Continue reading the full version published on Econopoly (Il Sole 24 Ore).

In recent years, driven by technological evolution and the pervasive computerisation of work, businesses around the world have faced major changes, often without a framework of rules to operate within. In this context, the advent of artificial intelligence has been a new element, bringing unexplored risks and potential that companies will have to take duly into account for the future.

The transformation of the world of work and the labour market has been underway for some time, driven by frequent technological innovation; until a few years ago, however, an acceleration as strong as that produced by the development and spread of artificial intelligence (AI) systems was unimaginable.

Artificial intelligence in the labour market

To grasp the scale of this phenomenon, consider that AI has already changed the content of all those tasks that consist of data-driven (and big-data-driven) decision-making and information processing, rapidly replacing manual activities that until recently required additional resources and were particularly costly in terms of time and effort. The fact that machines process information and data to draw conclusions or form decisions speaks for itself about how profound this revolution may be.

A recent study by the Adecco Group found that, on average, 70% of employees worldwide already use generative AI tools, such as ChatGPT and Google Bard, in the workplace. Adecco estimates that around 300 million job profiles will be transformed by the implementation of AI tools in the coming years.

AI in recruitment: opportunities and risks

But that is not all. One of the areas where AI-based systems are most widespread is personnel selection.

The use of AI in recruiting opens up great opportunities for companies seeking new talent, but also risks that should not be underestimated. AI systems are used to automatically examine and evaluate CVs, video interviews, and available candidate data; they can build a detailed profile of candidates’ professional aptitudes, making it possible to identify the profiles best suited to business needs.

Automated collection and management of data in the pre-hiring phase is faster, more accurate, and incomparably cheaper, suddenly making traditional recruitment processes obsolete and expensive. Research by the HR Research Institute revealed that 10% of the HR managers surveyed were already using artificial intelligence systems in 2019, with projections suggesting a significant and rapid increase in that percentage over the following two years.

Data protection and compliance in automated recruitment

The pitfalls, however, are just around the corner and show how essential human intervention still is. Automated recruitment processes must necessarily be governed to ensure compliance with current legislation on personal data protection and the prohibition of discrimination.

The European legislature also addressed these and many other issues with the Artificial Intelligence Act, approved on 13 March, which aims to regulate the use and development of AI-based systems in EU Member States, ensuring ethical, safe, and responsible use that protects the fundamental rights and safety of European citizens.

The Artificial Intelligence Act: a regulatory framework for AI

The AI Act applies to all sectors except the military and therefore also covers employment.

The model introduced by the European legislature is based on risk management; essentially, AI systems are grouped into different risk categories, each subject to a different degree of regulation.

AI systems used in employment, in the management of workers, and in access to self-employment are considered high-risk. These include all systems used to recruit or select personnel, publish targeted job advertisements, analyse or filter applications and evaluate candidates, make decisions on working conditions, promotion, or termination of employment, assign tasks based on individual behaviour, and monitor and evaluate people’s performance and behaviour within such employment relationships.

Applying the AI Act in the employment sector

For high-risk systems, users must comply with specific obligations. First of all, maximum transparency must be guaranteed even before such systems are placed on the market or put into service, and therefore from the design phase onwards (Recital 72; Article 50).

Continue reading the full version published on Agenda Digitale.

Other related insights:

In a context where technology is advancing rapidly, Artificial Intelligence (AI) is revolutionising the global work landscape, driving profound changes and opening up horizons that were previously unimaginable.  

The law is therefore called, once again, to regulate new scenarios that do not conform to traditional legal paradigms. The first step in this direction comes from the European Union. In fact, the European Parliament’s website states that “as part of its digital strategy, the EU wants to regulate artificial intelligence to ensure better conditions for the development and use of this innovative technology”. Thus, on 9 December 2023, the Commission, the Council and the Parliament reached a political agreement on the content of the AI Act – proposed by the Commission in 2021 – the final text of which is currently being finalised. 

The European legislature’s priority is to ensure that the AI systems used are safe, transparent, traceable, non-discriminatory and environmentally friendly.  

There is therefore a growing awareness, also at a regulatory level, that AI (i) is the engine of a change that raises ethical, social and legal questions around its use and its consequences and (ii) represents one of the most important and complex challenges facing companies.  

It is on this last aspect that organisations need to prepare themselves to overcome the profound transformation that, more or less silently, is underway in the world of work. 

Automation of repetitive tasks, reliable measurement of performance and limiting the need for personnel: is AI a talent worth interviewing? 

Taking advantage of a technology that autonomously collects information, processes it, and draws conclusions or makes decisions speeds up service delivery, improves operational efficiency, and reduces the scope for error in routine activities. It also affects a business’s personnel needs and may influence how the performance of a human resource is measured.

While this represents a great opportunity to make business processes faster, more reliable and more cost-effective, there are also several issues lurking under the surface. From an employment law point of view these include (i) bias and (ii) the risk of intensive employer control.  

The technology, although artificially intelligent, is programmed by humans and can therefore be affected by its programmers’ biases, reflecting and amplifying any errors present in the information it processes.

As we know, generative AI is programmed to learn and (self-)train itself to improve over time, partly on the basis of the information provided to it. The risk of bias replication is therefore very high.

In addition, AI collects and processes a vast amount of data and can also, directly or indirectly, enable intensive remote monitoring of employees.

That said, in the Italian legal system remote monitoring is regulated in detail and permitted only in the manner, and subject to the stringent conditions, provided for by law, including full compliance with data protection rules. This is a matter that obviously plays a fundamental role when it comes to AI.

Continue reading the full version published on AIDP.