DLP Insights

Artificial Intelligence in the workplace: Guidelines for HR and managers on the use of AI Systems in employment contexts (Econopoly of Il Sole 24 Ore, 27 August 2025 – Martina De Angeli, Alesia Hima)

Categories: Insights, Publications, News | Tags: Artificial Intelligence, AI


Artificial Intelligence (AI) is now a concrete component of business processes, increasingly applied in Human Resources management. Algorithms promise efficiency and impartiality in complex tasks such as recruiting, performance evaluation, or task allocation. However, this promise comes with significant risks. Precisely because of their potential impact on fundamental rights, AI systems used in the workplace are now considered “high-risk technologies,” subject to stringent obligations regarding transparency, human oversight, and the assessment of their impact on workers.
 
In this context, the “Guidelines for the use of Artificial Intelligence Systems in the workplace”, recently published by the Italian Ministry of Labour, constitute a strategic – though not legally binding – document. It represents a clear political and cultural orientation for the Italian business community, with particular attention to SMEs. The goal is twofold: to promote responsible AI adoption and to ensure full protection of workers’ rights.

The guidelines are based on a clear message: AI must not become a “black box” that makes opaque and unchallengeable decisions. Even when technology is developed by third parties, ultimate responsibility always lies with the employer. Companies must move from experimental adoption to structured management in line with compliance and sustainability principles.

The four pillars of the AI guidelines 

The Ministry’s document is structured around four key principles, which reinforce obligations already found in current legislation and case law, strengthening their preventive application. 
 
1. Mandatory Human Oversight 

Decisions affecting the legal status of workers—such as hiring, promotions, evaluations, disciplinary actions, or dismissals—must not rely solely on algorithmic judgment. A competent and authorized person must exercise effective, informed, and traceable human control, capable of understanding, validating, or overriding the algorithm’s recommendations and bearing full responsibility for the final decision. 

2. Algorithmic transparency 

Companies must clearly inform workers about the use of AI in processes that affect them. Generic communication is not enough. Workers must be told what data is processed (e.g. CVs, performance, aptitude tests), what logic and criteria the algorithm uses, and what influence it has on the outcome. The guiding principle is “intelligibility”: systems must be explainable, understandable, and contestable. 

3. Impact assessment and risk mitigation 

In line with the EU AI Act, the use of AI in HR is classified as a high-risk activity. Employers are therefore required to carry out a prior impact assessment, considering risks such as discrimination, privacy breaches, and poor data quality. The guidelines also encourage regular audits and systematic monitoring of algorithm performance.

4. System mapping and accountability 

Organizations must know what AI systems are being used, where, for what purpose, and who is responsible. Responsibility cannot be outsourced to technology providers. Internal governance must ensure ethical and correct use of AI. Mapping systems and defining internal roles are essential to uphold the principle of accountability. 

Continue reading the full version published on Econopoly of Il Sole 24 Ore.
