Our Managing Partner, Vittorio De Luca, contributed to the volume “Supervisor – i professionisti dell’AI”, edited by Filippo Poletti, alongside distinguished members of the professional community who enriched the book with their perspectives and experience.
Artificial Intelligence is transforming legal practice, improving research, document analysis, and case management. Harnessing its full potential requires awareness, method, and responsibility.

In our contribution, we address the use of AI in the legal profession, focusing on its operational benefits, its legal and ethical risk profiles, and the evolving role of the practitioner. The aim is to embrace technological innovation without ever losing sight of human oversight, data protection, and compliance with the European regulatory framework, in particular the GDPR and the AI Act, as essential conditions for a truly sustainable and professional use of AI.
The use of Artificial Intelligence in the workplace is steadily growing and offers significant opportunities to enhance efficiency, decision-making, and process management. However, introducing AI solutions without a clear strategy and full awareness of the associated risks may expose the organization to legal, reputational, and operational challenges.
For this reason, it is essential to adopt a structured approach that combines innovation and compliance, harnessing the potential of technology while ensuring adherence to data protection regulations, transparency, and workers’ rights.

What is needed is a responsible, ethical, and compliant corporate approach, aligned with the GDPR, the AI Act, and best practices in governance and human oversight.
Law No. 132/2025 – aimed at ensuring transparency, fairness, and the protection of workers’ dignity while promoting the ethical and responsible use of artificial intelligence in the workplace – establishes that both public and private employers and contractors must provide written notice to employees and to workplace union representatives (RSA/RSU, i.e. company-level trade union bodies) regarding the use of AI systems in processes that affect the management of the employment relationship.
The obligation applies whenever AI is used for activities such as recruitment, task assignment, performance evaluation, or termination of employment.

What companies should do to ensure compliance
Our professionals are at your disposal for any further information.
On Wednesday, September 17, 2025, the Italian Senate definitively approved the bill containing “provisions and delegations to the Government on artificial intelligence,” connected to the national budget law.
For Italy, this represents the first national legislation specifically addressing artificial intelligence.
The bill, consisting of 28 articles divided into six chapters, does not directly regulate the use of AI, but delegates to the Government the responsibility to adopt implementing decrees for different sectors.
The key points of the new law include:
The competent national authorities are: AgID (the Agency for Digital Italy), as the notification and regulatory authority, and the National Cybersecurity Agency (ACN), responsible for supervision and inspections.
The legislator identified four main sectors: healthcare; employment; public administration and justice; education and sports.

Article 11 – Provisions on the use of artificial intelligence in employment
Article 12 – Observatory on the adoption of artificial intelligence in the workplace
Article 13 – Provisions on intellectual professions
Artificial Intelligence (AI) is now a concrete component of business processes, increasingly applied in Human Resources management. Algorithms promise efficiency and impartiality in complex tasks such as recruiting, performance evaluation, or task allocation. However, this promise comes with significant risks. Precisely because of their potential impact on fundamental rights, AI systems used in the workplace are now considered “high-risk technologies,” subject to stringent obligations regarding transparency, human oversight, and impact assessment on workers.
In this context, the “Guidelines for the use of Artificial Intelligence Systems in the workplace” recently published by the Italian Ministry of Labour offer a strategic – not legally binding – document. It represents a clear political and cultural orientation for the Italian business community, with particular attention to SMEs. The goal is twofold: to promote responsible AI adoption and ensure full protection of workers’ rights.
The guidelines are based on a clear message: AI must not become a “black box” that makes opaque and unchallengeable decisions. Even when technology is developed by third parties, ultimate responsibility always lies with the employer. Companies must move from experimental adoption to structured management in line with compliance and sustainability principles.

The four pillars of the AI guidelines
The Ministry’s document is structured around four key principles, which reinforce obligations already found in current legislation and case law, strengthening their preventive application.
1. Mandatory Human Oversight
Decisions affecting the legal status of workers – such as hiring, promotions, evaluations, disciplinary actions, or dismissals – must not rely solely on algorithmic judgment. A competent and authorized person must exercise effective, informed, and traceable human control, capable of understanding, validating, or overriding the algorithm’s recommendations and bearing full responsibility for the final decision.
2. Algorithmic transparency
Companies must clearly inform workers about the use of AI in processes that affect them. Generic communication is not enough. Workers must be told what data is processed (e.g. CVs, performance, aptitude tests), what logic and criteria the algorithm uses, and what influence it has on the outcome. The guiding principle is “intelligibility”: systems must be explainable, understandable, and contestable.
3. Impact assessment and risk mitigation
In line with the EU AI Act, the use of AI in HR is considered a high-risk activity. Therefore, employers are required to carry out a prior impact assessment, considering risks such as discrimination, privacy breaches, and data quality. The guidelines also encourage regular audits and systematic controls of algorithm performance.
4. System mapping and accountability
Organizations must know what AI systems are being used, where, for what purpose, and who is responsible. Responsibility cannot be outsourced to technology providers. Internal governance must ensure ethical and correct use of AI. Mapping systems and defining internal roles are essential to uphold the principle of accountability.
Continue reading the full version published on Econopoly (Il Sole 24 Ore).