DLP Insights

Fears over the future of work: the impact of artificial intelligence (Guida al Lavoro of Il Sole 24 Ore, 30 June 2023 – Vittorio De Luca, Alessandra Zilla, Martina De Angeli)

Categories: DLP Insights, Publications, News | Tag: Privacy, Artificial Intelligence

12 Jul 2023

1. DIGITAL REVOLUTION AND LAW

The emergence of technologies using artificial intelligence systems has ushered in a new round of debates on the key ethical, social and legal issues surrounding the use of such technologies and their consequences.

Modern technologies – with their increasing impact on society and customs – pose the issue of devising instruments to protect fundamental rights, security and data protection in order to ensure that technological advances are carried out in keeping with individual and collective protection needs, while at the same time ensuring a human-centred approach.

Indeed, it is clear that the development of new generation algorithms and increasingly sophisticated automated data processing techniques offer new opportunities but, at the same time, present complex challenges that affect almost every area of law.

Labour law is not immune to this profound transformation, which necessitates constant adaptation to new demands stemming from practical experience. It has been noted, in this regard, how this renders labour law ‘a necessarily dynamic law, since the basis of the employment contract is functionally connected to productive organisations and structured in such a way so that the contents of the employment relationship change in accordance with organisational and productive changes’.

One of the factors changing the organisation and performance of work is undoubtedly that particular IT branch known as artificial intelligence (hereinafter referred to as A.I.).

2. ARTIFICIAL INTELLIGENCE IN MANAGING THE EMPLOYMENT RELATIONSHIP

In an effort to capture the endless variations and multiple applications of the phenomenon, several definitions of A.I. have emerged over time. The definition of Artificial Intelligence provided by the European Commission in its Proposal for a Regulation of the European Parliament and of the Council of April 2021 laying down harmonised rules on Artificial Intelligence (A.I. Act) is particularly interesting, in view of its origin.

The Proposal for a Regulation, in Article 3, defines the ‘artificial intelligence system’ as ‘a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, which influence the environments with which the AI system interacts’.

The specific function of the Regulation, in the terms formulated by the Proposal, is to set out the specific requirements for A.I. systems and the obligations to be complied with by those who place this type of product on the market, right down to the user, in order to ensure that A.I. systems which are marketed and used are safe and respect the EU fundamental rights and values.

The relevant provisions are based on a ranking of the potential level of impact of the systems on the wider community, with particular attention to applications of A.I. formally qualified as ‘high risk’ (i.e. which have ‘a significant harmful impact on the health, safety and fundamental rights of persons in the Union’).

For the purposes hereof, it is noted that the A.I. Act qualifies, inter alia, as ‘high-risk systems’ those used ‘in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships’.

This classification stems from the fact that ‘those systems may appreciably impact future career prospects and livelihoods of these persons’.

2.1 ARTIFICIAL INTELLIGENCE IN THE RECRUITING PHASE

Already in the preliminary phase of the employment relationship, A.I. is growing in importance: indeed, algorithmic hiring, understood as a personnel selection procedure wholly or partially entrusted to algorithms, is undergoing great development.

The widespread perception is that such automated procedures are faster, more reliable and cheaper than ‘conventional’ selections, thereby enabling the effective identification of candidates’ personal characteristics and aptitudes through analysing a large amount of data collected during virtual interviews.

While A.I. represents a great opportunity, when it is not properly controlled it can suffer from an insidious inherent issue, namely human prejudice, which is inevitably reflected in the algorithms. With reference to the A.I. Act cited above, the following are in fact considered ‘high-risk’:

  • AI systems for screening candidates;
  • the drafting of rankings and classifications;
  • matching systems;
  • systems that support candidate assessment during interviews or tests.

With reference to the risks associated with the use of artificial intelligence in the workplace, it was in fact found that ‘throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of these persons may also impact their rights to data protection and privacy’.

Depending on the way the software is constructed, even a company that has no discriminatory purposes could unwittingly introduce so-called biases in the processing, which, with a knock-on effect, would affect the outcomes of the process, thus resulting in discriminatory effects.

This is because software, however artificially intelligent it may be, is still programmed by human beings and is therefore affected by the judgmental dynamics of its own programmers.

In addition, data entered into the software remains stored within the programme, thus influencing future predictive analyses that will be affected by outdated data.

Interestingly, the well-known case of Amazon should be mentioned in this regard.

The renowned US giant had developed an experimental automated talent-finding programme with the aim of assessing candidates according to a ranked scoring system. However, with specific reference to IT roles, the system did not select applications in a gender-neutral manner: female candidates were automatically excluded. The reason was that the software was based on data collected over the previous 10 years, and the majority of the resources hired during that time in the IT field were, in fact, male.

The algorithms thus identified and exposed the biases of their own creators, thereby demonstrating that training automated systems on biased data leads to non-neutral future decisions.

The case of Amazon is an interesting insight into the limits of Artificial Intelligence learning and the extent to which so-called human biases can be reflected in automated systems, thereby influencing their algorithms.
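The dynamic just described, an automated system learning from historically skewed data and reproducing that skew, can be illustrated with a deliberately simplified sketch. The data and the frequency-based ‘model’ below are entirely hypothetical and invented for illustration; they bear no relation to Amazon’s actual system:

```python
from collections import defaultdict

# Hypothetical historical hiring records: each entry is
# (candidate features, whether the candidate was hired).
# Past IT hires are skewed male, mirroring the pattern described above.
history = [
    ({"gender": "M", "skill": "python"}, True),
    ({"gender": "M", "skill": "python"}, True),
    ({"gender": "M", "skill": "java"},   True),
    ({"gender": "F", "skill": "python"}, False),
    ({"gender": "F", "skill": "java"},   False),
]

def train_hire_rates(records):
    """Naive 'model': learns the per-gender hire rate from past decisions."""
    hired, total = defaultdict(int), defaultdict(int)
    for features, was_hired in records:
        g = features["gender"]
        total[g] += 1
        hired[g] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def score(model, candidate):
    # The score depends only on what the model absorbed from history.
    return model[candidate["gender"]]

model = train_hire_rates(history)

# Two candidates identical in every respect except gender
# receive different scores: the historical bias is reproduced.
score_m = score(model, {"gender": "M", "skill": "python"})  # 1.0
score_f = score(model, {"gender": "F", "skill": "python"})  # 0.0
```

No discriminatory rule is written anywhere in the code: the disparity emerges entirely from the training data, which is precisely the knock-on effect described above.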

2.2 LEADERSHIP POWER THROUGH ALGORITHMIC MANAGEMENT

In addition to the pre-hiring phase, A.I. systems are also an important factor in organising work, e.g. systems for managing warehouse logistics as well as platforms used for managing riders.

In these sectors, decisions on how best to manage activities and human resources are increasingly being delegated to algorithms, which are able to analyse an infinite amount of data and identify the most effective management and organisational solution: algorithms that determine the assignment of tasks according to certain parameters, automated monitoring systems, geolocalisation systems that provide alerts or automatic intervention in case of danger.

In this rapidly changing working environment, the European Union has emphasised the need for workers to be fully and promptly informed as to the essential conditions of their work.

In order to ensure that employees and trade unions are aware of the digital systems in individual business organisations, the legislator, by transposing Directive (EU) 2019/1152 on transparent and predictable working conditions into national law, has introduced a disclosure obligation for the employer in cases where automated decision-making or monitoring systems are used (Article 1-bis of Italian Legislative Decree No. 152/1997 introduced by the so-called Transparency Decree, Italian Legislative Decree No. 104/2022).

The purpose of the new legislation, as can be seen from the recitals and Article 1 of the EU Directive, was to ‘improve working conditions by promoting more transparent and predictable employment while ensuring labour market adaptability’.

In practical terms, stripping away the sometimes difficult jargon, the worker must be able to know whether automated techniques are used and whether the employer relies on algorithmic decisions and similar means; furthermore, the worker is entitled to know how these techniques operate, their logic and their impacts, including in terms of security risks to personal data.

From a combined reading of Article 1(1)(s) and Article 1-bis, para. 1 of Italian Legislative Decree No. 152/1997, it follows that such specific disclosure is required where the manner in which workers’ services are performed is organised through the use of automated decision-making and/or monitoring systems designed to ‘provide information relevant to the recruitment or the conferral of an assignment, the management or termination of the employment relationship, the assignment of tasks or duties, as well as information affecting the monitoring, evaluation, performance and fulfilment of the contractual obligations of workers’.

The scope of the rule contained in Article 1-bis of the Transparency Decree created interpretative uncertainties and applicative difficulties in identifying which systems fall within this additional disclosure obligation, as opposed to remote control instruments, whose disclosure obligations are conversely governed, as is widely known, by Article 4 of Italian Law No. 300/1970, a provision expressly spared from the reform and which appears to retain some degree of autonomy.

With reference to the types of tools to be regarded as automated systems, Circular No. 19/2022 of the Italian Ministry of Labour and Social Policies (Ministero del Lavoro e delle Politiche Sociali) attempted to provide some clarifications on the innovations introduced by Italian Legislative Decree No. 104/2022. In particular, the Circular excluded the disclosure obligation where badges are used, i.e. automated tools for recording the attendance of employees upon entry or exit, provided that such recording does not automatically trigger an employer’s decision, while, purely by way of example, it provided for such an obligation in the case of automated systems for managing shifts or determining pay, tablets, GPS, wearables and other devices.

Continue reading the full version published on Guida al lavoro of Il Sole 24 Ore.
