On 10 July 2023, the European Commission adopted its adequacy decision for the EU-US Data Privacy Framework, establishing that the United States of America guarantees a level of protection of personal data comparable to that of the European Union.

The adequacy decision is one of the tools provided for by Regulation (EU) 2016/679 (the ‘Regulation’) to transfer personal data from the European Union to third countries that, upon prior assessment by the European Commission, offer ‘an adequate level of protection’, i.e. a level of protection of personal data equivalent to that guaranteed within the EU.

As a consequence, personal data can be transferred securely to the United States and managed in the same way as transfers that take place within the European Union.

What does the new EU-US Data Privacy Framework entail?

The EU-US Data Privacy Framework is structured around a self-certification mechanism whereby US companies undertake to comply with a number of personal data protection obligations, including, but not limited to, compliance with the principles of purpose limitation, data minimisation and retention, as well as specific obligations regarding data security and data sharing with third parties.

The organisations’ undertakings will be renewed on an annual basis and are subject to checks and monitoring by the U.S. Department of Commerce, which will process certification applications and periodically verify compliance with the requirements by participating companies.

European citizens will benefit from several independent and impartial remedies in the event that their data is processed in a non-compliant manner, including the newly established Data Protection Review Court (DPRC).

US law will provide a number of safeguards, including limiting access to personal data by public authorities to what is necessary and proportionate to protect national security or to enforce criminal law.

In any case, the Data Privacy Framework will be subject to periodic revisions by the European Commission together with representatives of the European data protection authorities and the competent US authorities.  The first review will take place within one year of the entry into force of the adequacy decision.

The other instruments provided for by the Regulation

It is worth remembering that in addition to the adequacy decision, the Regulation also provides for other tools to ensure the correct transfer of data outside the European Union, including:

  • the adoption of Standard Contractual Clauses;
  • the adoption of Binding Corporate Rules (BCR) by large international groups following negotiations with the supervisory authorities of the countries involved;
  • adherence to specific Codes of Conduct or, in any case, to certification mechanisms, which must also be applied by the entity to which the data are transferred;
  • the consent of the data subject, who must be adequately informed as required by the Regulation itself.


As most recently pointed out in the information note of the European Data Protection Board (EDPB) of 18 July 2023, all the protections provided by the US government in the field of national security apply to all transfers of personal data made to companies in the United States, regardless of the transfer mechanisms used. Therefore, these guarantees also serve to facilitate the use of the other instruments provided for by the Regulation.


Workers must be informed of the use of fully automated decision-making or monitoring systems. In particular, they must be informed of the aspects of the relationship involved, the purposes of the systems, and how they operate.

The emergence of technologies using artificial intelligence systems and their increasing use has ushered in a new round of debate on the key ethical, social and legal issues surrounding the use of such technologies and their consequences.
At EU level, the need has emerged to ensure that new technologies develop in a manner that respects the fundamental rights and dignity of individuals and pursues goals that do not conflict with the interests of the community. To this end, the European Commission put forward a Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence, presented in Brussels on 21 April 2021 and approved by the European Parliament on 14 June 2023 (the Artificial Intelligence (AI) Act).
The work environment is not immune to such changes, if we think, for example, of the systems used for logistics management in warehouses as well as the platforms employed by riders.

The Artificial Intelligence Act and the Transparency Directive

The AI Act classifies as ‘high-risk systems’ those used ‘in employment, workers management and access to self-employment, [intended to be used] for recruitment and selection of natural persons […] for making decisions on promotion and termination […] for task allocation, and for monitoring and evaluating performance […] of persons in such relationships’.  This classification stems from the fact that ‘those systems may appreciably impact future career prospects and livelihoods of these persons’.

In relation to the rapid development of automated systems in the work environment and the associated risks, the European Union has also stressed the importance of workers being fully and promptly informed of the fundamental terms and conditions of their employment. To this end, the national legislature implemented Directive (EU) 2019/1152 on transparent and predictable working conditions. As a result, employers are required to provide workers and trade union organisations with information regarding the use of automated decision-making or monitoring systems (Article 1-bis of Italian Legislative Decree No. 152/1997, introduced by the Transparency Decree, Italian Legislative Decree No. 104/2022).

The purpose, as outlined in the recitals and Article 1 of the EU Directive, is to improve working conditions by promoting more transparent and predictable employment, while ensuring labour market adaptability to new technologies. Specific disclosure is required when the manner in which workers' services are performed is organised through the use of automated decision-making and/or monitoring systems which provide relevant information regarding the recruitment, assignment, management or termination of employment, the assignment of tasks or duties, and the monitoring, evaluation, performance and fulfilment of workers' contractual obligations.

The full version can be accessed at Norme e Tributi Plus Lavoro of Il Sole 24 Ore.  

1. DIGITAL REVOLUTION AND LAW

The emergence of technologies using artificial intelligence systems has ushered in a new round of debates on the key ethical, social and legal issues surrounding the use of such technologies and their consequences.

Modern technologies – with their increasing impact on society and customs – pose the issue of devising instruments to protect fundamental rights, security and data protection in order to ensure that technological advances are carried out in keeping with individual and collective protection needs, while at the same time ensuring a human-centred approach.

Indeed, it is clear that the development of new generation algorithms and increasingly sophisticated automated data processing techniques offer new opportunities but, at the same time, present complex challenges that affect almost every area of law.

Labour law is not immune to this profound transformation, which necessitates constant adaptation to new demands stemming from practical experience. It has been noted, in this regard, how this renders labour law ‘a necessarily dynamic law, since the basis of the employment contract is functionally connected to productive organisations and structured in such a way so that the contents of the employment relationship change in accordance with organisational and productive changes’.

One of the factors changing the organisation and performance of work is undoubtedly that particular branch of IT known as artificial intelligence (hereinafter 'A.I.').

2. ARTIFICIAL INTELLIGENCE IN MANAGING THE EMPLOYMENT RELATIONSHIP

Several definitions of A.I. have emerged over time in an effort to capture the endless variations and multiple applications of the phenomenon. The definition provided by the European Commission in its Proposal for a Regulation of the European Parliament and of the Council of April 2021 laying down harmonised rules on artificial intelligence (the A.I. Act) is particularly interesting, in view of its origin.

The Proposal for a Regulation, in Article 3, defines an 'artificial intelligence system' as 'a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, which influence the environments with which the AI system interacts'.

The specific function of the Regulation, in the terms formulated by the Proposal, is to set out the requirements for A.I. systems and the obligations to be complied with by those who place this type of product on the market, down to end users, in order to ensure that the A.I. systems marketed and used are safe and respect EU fundamental rights and values.

The relevant provisions are based on a ranking of the potential impact of the systems on the wider community, with particular attention to applications of A.I. formally qualified as 'high risk', i.e. those which have 'a significant harmful impact on the health, safety and fundamental rights of persons in the Union'.

For the purposes hereof, it is noted that the A.I. Act qualifies, inter alia, as ‘high-risk systems’ those used ‘in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships’.

This classification stems from the fact that ‘those systems may appreciably impact future career prospects and livelihoods of these persons’.

2.1 ARTIFICIAL INTELLIGENCE IN THE RECRUITING PHASE

A.I. is growing in importance even in the preliminary phase of the employment relationship: algorithmic hiring, understood as a personnel selection procedure wholly or partially entrusted to algorithms, is developing rapidly.

The widespread perception is that such automated procedures are faster, more reliable and cheaper than 'conventional' selection, enabling candidates' personal characteristics and aptitudes to be identified effectively by analysing the large amounts of data collected during virtual interviews.

While A.I. represents a great opportunity, when it is not properly controlled it can be affected by an inherent and insidious issue: human prejudice, which is inevitably reflected in the algorithms. Under the A.I. Act cited above, the following are in fact considered 'high-risk':

  • AI systems for screening candidates;
  • systems for drawing up rankings and classifications;
  • matching systems;
  • systems that support candidate assessment during interviews or tests.

With reference to the risks associated with the use of artificial intelligence in the workplace, it was in fact found that 'throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy'.

Depending on the way the software is constructed, even a company that has no discriminatory purposes could unwittingly introduce so-called biases in the processing, which, with a knock-on effect, would affect the outcomes of the process, thus resulting in discriminatory effects.

This is because software, however artificially intelligent it may be, is still programmed by human beings and is therefore affected by the judgmental dynamics of its own programmers.

In addition, data entered into the software remains stored within the programme, thus influencing future predictive analyses that will be affected by outdated data.

The well-known Amazon case is worth mentioning in this regard.

The US giant had developed an experimental automated talent-finding programme intended to assess candidates according to a ranked scoring system. With specific reference to IT roles, however, the system did not select applications in a gender-neutral manner: female candidates were automatically excluded. This was because the software had been trained on data collected over the previous 10 years, and the majority of the resources hired in the IT field during that time were, in fact, male.

The algorithms thus identified and exposed the biases of their own creators, demonstrating that training automated systems on non-neutral data leads to non-neutral future decisions.
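
To make the mechanism concrete, the following is a minimal, purely illustrative Python sketch (it is not Amazon's actual system; the data, features and figures are invented). A standard classifier is trained on historical hiring decisions that favoured men, and it then reproduces that preference for two otherwise identical candidates:

    # Illustrative only: a classifier trained on skewed historical hiring data
    # reproduces the skew in its own predictions ("bias in, bias out").
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    experience = rng.normal(5, 2, n)    # years of experience
    is_female = rng.integers(0, 2, n)   # hypothetical gender proxy found in CVs
    # Historical labels: past recruiters hired experienced candidates, but
    # almost exclusively men, matching the pattern described in the Amazon case.
    hired = ((experience > 4) & (is_female == 0)).astype(int)

    model = LogisticRegression().fit(np.column_stack([experience, is_female]), hired)

    # Two candidates with identical experience, differing only in the proxy:
    print(model.predict_proba([[6.0, 0], [6.0, 1]])[:, 1])
    # The predicted hiring probability for the second (female) candidate is far
    # lower, although nothing about her qualifications differs.

The point of the sketch is that no discriminatory rule is ever written: the model simply learns the historical pattern hidden in the training data.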

The case of Amazon is an interesting insight into the limits of Artificial Intelligence learning and the extent to which so-called human biases can be reflected in automated systems, thereby influencing their algorithms.

2.2 MANAGERIAL POWER THROUGH ALGORITHMIC MANAGEMENT

In addition to the pre-hiring phase, A.I. systems are also an important factor in organising work, e.g. systems for managing warehouse logistics as well as platforms used for managing riders.

In these sectors, decisions on how best to manage activities and human resources are increasingly delegated to algorithms, which can analyse vast amounts of data and identify the most effective management and organisational solutions: algorithms that assign tasks according to certain parameters, automated monitoring systems, and geolocation systems that issue alerts or intervene automatically in case of danger.

In this rapidly changing working environment, the European Union has emphasised the need for workers to be fully and promptly informed as to the essential conditions of their work.

In order to ensure that employees and trade unions are aware of the digital systems in individual business organisations, the legislator, by transposing Directive (EU) 2019/1152 on transparent and predictable working conditions into national law, has introduced a disclosure obligation for the employer in cases where automated decision-making or monitoring systems are used (Article 1-bis of Italian Legislative Decree No. 152/1997 introduced by the so-called Transparency Decree, Italian Legislative Decree No. 104/2022).

The purpose of the new legislation, as can be seen from the recitals and Article 1 of the EU Directive, is to 'improve working conditions by promoting more transparent and predictable employment while ensuring labour market adaptability'.

In practical terms, stripped of sometimes difficult jargon, this means that workers must be able to know whether automated techniques are used and whether the employer relies on algorithmic decisions or similar means; furthermore, workers are entitled to know how these techniques operate, their logic and their impact, including in terms of security risks to personal data.

From a combined reading of Article 1(1)(s) and Article 1-bis, para. 1 of Italian Legislative Decree No. 152/1997, it follows that such specific disclosure is required where the manner in which workers' services are performed is organised through the use of automated decision-making and/or monitoring systems designed to 'provide information relevant to the recruitment or conferral of an assignment, the management or termination of the employment relationship, the assignment of tasks or duties, as well as information affecting the monitoring, evaluation, performance and fulfilment of the contractual obligations of workers'.

The scope of the rule contained in Article 1-bis of the Transparency Decree created interpretative uncertainties and practical difficulties in identifying which systems are subject to this additional disclosure, as opposed to remote control instruments, whose disclosure obligations are governed, as is widely known, by Article 4 of Italian Law No. 300/1970, a provision expressly left untouched by the reform and which appears to retain some degree of autonomy.

With reference to the types of tools to be regarded as automated systems, Circular No. 19/2022 of the Italian Ministry of Labour and Social Policies (Ministero del Lavoro e delle Politiche Sociali) has attempted to clarify the innovations introduced by Italian Legislative Decree No. 104/2022. In particular, the Circular excluded the disclosure obligation where badges are used, i.e. automated tools for recording employees' attendance upon entry or exit, provided that such recording does not automatically trigger an employer's decision. By way of example, and without limitation, it confirmed the obligation in the case of automated systems for managing shifts or determining pay, and of tablets, GPS devices, wearables and other such devices.

Continue reading the full version published on Guida al lavoro of Il Sole 24 Ore.


In its judgment of 26 April 2023 (Case T-557/20), the Court of Justice of the European Union (the 'CJEU') ruled that pseudonymised data transmitted to a recipient who does not have the means to identify the data subjects is not personal data. This means that such information does not fall within the scope of the legislation on the protection of personal data.

Before entering into the merits of the judgment in comment, it seems appropriate to define what is meant by ‘pseudonymisation’. According to Article 4 of Regulation (EU) 2016/679 (better known by the acronym ‘GDPR’) pseudonymisation means ‘the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organisational measures to ensure that the personal data are not attributed to an identified or identifiable natural person’.
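
For illustration only, the definition can be translated into a minimal Python sketch (the names and survey answers below are invented, not taken from the case): direct identifiers are replaced with random alphanumeric codes, and the code-to-name table, the 'additional information', is kept separately from the data set to be shared.

    # Minimal pseudonymisation sketch under the Article 4(5) GDPR definition.
    import secrets

    responses = [
        {"name": "Mario Rossi", "answer": "I disagree with the valuation."},
        {"name": "Anna Bianchi", "answer": "The process lacked transparency."},
    ]

    key_table = {}      # "additional information": stored separately and protected
    pseudonymised = []  # the data set that may be shared with a third party

    for record in responses:
        code = secrets.token_hex(8)       # random alphanumeric code
        key_table[code] = record["name"]  # re-identification is possible only here
        pseudonymised.append({"id": code, "answer": record["answer"]})

    print(pseudonymised)
    # A recipient holding only `pseudonymised`, with no lawful means of obtaining
    # `key_table`, cannot attribute the answers to the data subjects.

As the judgment discussed below shows, whether such a shared data set is still 'personal data' depends on whether the recipient has the means to obtain the key table.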

The facts of the case

The facts examined by the CJEU are summarised below.

The case originates from several complaints received by the European Data Protection Supervisor (the ‘EDPS’) reporting specific conduct of the Single Resolution Board (‘SRB’).   

Specifically, the SRB, after collecting the opinions of shareholders and creditors (the 'data subjects') through an electronic form, had transferred the answers obtained to a consulting firm. Before forwarding them, however, the SRB had pseudonymised the data by replacing the names of the data subjects with alphanumeric codes. The data subjects nevertheless complained to the EDPS that the privacy notices provided by the SRB did not specify that their personal data would be shared with third parties.

The EDPS stated that, although the data thus disclosed did not allow the consulting firm to identify the authors of the responses, the data, although pseudonymised, should nevertheless be considered personal data, also because the consulting firm received the alphanumeric codes that allowed it to link the replies received.

For these reasons, the EDPS held the consulting firm (the recipient of personal data) and the SRB liable for the breach referred to in Article 15 of the GDPR – governing the right of access of the data subject – for not having provided, among other things, information about the recipients or categories of recipients to whom the personal data would be disclosed.

The decision of the Court of Justice of the European Union

The judges of the CJEU overturned the EDPS's decision, holding that its conclusion on the nature of the pseudonymised data was incorrect, since the EDPS had not verified whether the company to which the data had been disclosed was actually able to re-identify the data subjects. That verification should have been based on the means the recipient held, or did not hold, to identify the natural persons concerned.

To determine whether pseudonymised information disclosed to a recipient constitutes personal data, it is necessary to 'consider the recipient's perspective'. If the recipient does not hold additional information enabling it to identify the data subjects and has no legal means of accessing such information, the disclosed data are considered anonymous and therefore are not personal data; they fall outside the scope of the data protection rules. Conversely, the fact that the party disclosing the data has the means to identify the data subjects is irrelevant.

On these grounds, the Court of Justice annulled the EDPS’s decision and ordered it to pay the costs of the proceedings.


An order of the Court of Cassation recognises that an employer may use security camera footage for disciplinary purposes

Employers may use security camera footage for disciplinary purposes. This has been confirmed by the Court of Cassation in its Order No. 8375 of 23 March 2023.

Remote control of workers’ activities

As is now well known, Article 4 of the Italian Workers' Charter states that audio-visual equipment, or in any case instruments which may enable remote control of workers' activities (including video surveillance systems), may be used by the employer exclusively for the following purposes:

  • organisational and production needs,
  • safety at work,
  • the protection of company assets.

These instruments may be installed only subject to a collective agreement with the trade unions and, in any case, may not be installed to monitor employees' work.

Use of video footage for disciplinary purposes

If the objective of Article 4 of the Workers’ Charter is to protect the worker from remote monitoring of his or her work performance, why has the Court of Cassation held that recordings can be used as the basis for a disciplinary complaint?

Continue reading the full version published in Wired.