Part 3: No Time to Hire: Will Agentic AI Transform the Workplace?

DLA Piper

[co-author: Grace Heaversedge]

The advent of agentic AI is widely predicted to have a significant impact on the workforce, automating tasks currently performed by humans that were once believed to be beyond automation. Humans will increasingly work alongside so-called digital co-workers, requiring new skills to do so, and most likely facing redeployment and/or job losses. The result is effectively the creation of hybrid teams, with profound implications for the workforce and the use of labour.

In Part 1 of our series, we explored the overarching key legal considerations surrounding the deployment of agentic AI.

In Part 2, we examined the underlying liability issues which impact the use of agentic AI.

Part 3 explores how the integration of agentic AI may impact the workforce from a Human Resources (HR) perspective and considers how agentic AI may affect HR use cases. Separately, we look at the risks and liabilities associated with employees’ use of agentic AI and advise on next steps employers should take.

Redundancy and collective consultation

Evidence has shown that employers are already taking advantage of agentic AI where job roles consist of process-related and database management tasks.

In this scenario, agentic AI acts as a “super worker”, which delivers:

  1. Up-to-date responses in real time using retrieval-augmented generation;
  2. Execution of tasks at speed across a range of computer applications; and
  3. Continuous improvement through a feedback loop.

Any business looking to deploy agentic AI must first have regard to mandatory collective and individual consultation where mass dismissals may result. Currently in the UK, employers proposing 20 or more redundancies within any 90-day period at a single establishment must consult collectively for at least 30 days, rising to 45 days where 100 or more redundancies are proposed. Similar approaches exist in other jurisdictions, often alongside binding obligations within collective bargaining agreements. Failure to comply risks significant financial penalties, operational disruption and, in some countries, criminal liability. Failure to follow due process can also lead to individual claims on the subsequent dismissal and, where there is a disproportionate impact on those with a protected characteristic or a failure to accommodate disability, potentially uncapped discrimination claims.

Job variation

It may be more positive to reframe agentic AI not as a “job taker” but as a “job maker”. Where agentic AI can complete repetitive, mundane and low-development tasks, workers can provide creative and strategic insight at high-value points.

Scope for worker development also opens up once workers are no longer required to perform so-called “busywork”, such as responding to emails and updating spreadsheets. Workers may upskill, learning languages, developing EQ, and adopting strategic mindsets which were previously not afforded enough time and attention. All of this is well and good, but it raises the questions of whether redeployment, retraining or reskilling is possible, whether it requires consensual contract variation, and what it means for reward and succession strategies. And in the meantime, there remains the ongoing concern of the so-called “hollowing out” of mid-level roles.

Although the technology is in its infancy, it is possible that employers which engage in a diverse range of variable projects may have a reduced need for a permanent workforce where agentic AI can autonomously handle everyday low-development tasks. This of itself could impact the flexibility of operating models, the spread of “gig economy” style platforms, and issues of status for the workers concerned.

Employee wellbeing

There is a risk that agentic AI may make us more isolated in the workplace. Day-to-day engagement amongst co-workers is already in decline, and loneliness is higher in the remote workforce: Gallup’s State of the Global Workplace report found that 25% of remote employees experience loneliness, compared to 16% of fully onsite employees. Agentic AI in the workplace risks reinforcing that sense of isolation as daily interactions shift towards virtual entities. Thought needs to be given to wellbeing, the psychological contract, and the impact of managing the performance of human and digital co-workers in parallel.

Bias

As we now well know, all AI, including agentic AI, brings with it the risk of bias. In turn, any bias risks systemic discrimination against those with protected characteristics.

Examples of bias include statistical dependence in the decision-making; bias already present in the data used to train the AI model; and insufficient data for the model to draw confident conclusions about some segments of the population.

Employers will need to be keenly aware of bias, take steps to mitigate the risk of it arising, and monitor for it, with appropriate remediation if bias does occur. In particular, employers must be aware of the risk of bias, and subsequent discrimination, in HR use cases such as hiring and promotion decisions.

Vicarious liability

Under the doctrine of vicarious liability, businesses are held responsible for the acts of their employees in the course of employment, and, generally speaking, “course of employment” is likely to be given a very wide interpretation. We can expect the same approach to the acts of agentic AI. The risk of vicarious liability can of course arise in situations which extend well beyond bias. Businesses will need to implement appropriate risk management processes, such as human oversight, proportionate to the level of autonomy and risk. In addition, agentic AI guardrails, including governance and risk policies, will help safeguard businesses against vicarious liability by encouraging the right behaviours.

Safeguards will allow businesses to show that they are taking appropriate steps to mitigate against agentic AI error. The level of human oversight will need to be kept under review and adjusted as agentic AI develops and improves technically.

Privacy

Additional consideration should be given to the processing of employee data by agentic AI under relevant privacy legislation. Employers should be aware of data privacy requirements and how they can ensure that they are compliant. In HR use case terms this can begin with the hiring process, where employers may be processing sensitive data about job applicants. Employers must ensure that they use only permissible data, and that the data is categorised and retained appropriately. Penalties for a failure to comply can be significant.

Automated Decision Making

In the landmark Schufa case, the Court of Justice of the European Union held that, within the EU, individuals have a right under the General Data Protection Regulation (GDPR) not to be subjected to significant decisions made about them by automated systems without human intervention. Although Schufa concerned a credit reference agency, the judgment extends to significant decisions made by employers about employees without human intervention.

This would cover hiring decisions, promotion decisions, redeployment and redundancy decisions, and other such HR use cases where AI may be deployed. What is challenging about the Schufa case is the suggested need for a “human in the loop” at each stage of the AI process, rather than simply a review of the ultimate output, for example for bias. That likely challenges the whole aim of achieving cost savings.

Legislative landscape

As discussed in Part 2 of this series, the EU AI Act will directly impact UK businesses due to the extra-territorial nature of its reach. It is worth noting that, under the Act, employment and worker management systems are “high-risk”, such as where they are used for recruitment purposes or for making decisions affecting the terms of promotion or termination based on the evaluation of individual behaviour and personal traits in the workplace.

High-risk systems under the Act are subject to substantial obligations to ensure the system’s trustworthiness. Examples include: (i) data quality requirements and conformity with data privacy legislation; (ii) the provision of detailed technical documentation; (iii) the completion of a conformity assessment; and (iv) implementation of human oversight measures. Employers must ensure that they appropriately monitor “high-risk” systems and consider logging agentic AI use cases within the business.

The EU AI Act is arguably the bellwether of AI regulation. However, emerging AI regulations may appear as guidelines, or recommendations in some jurisdictions. The legislative approach is not fixed, and global employers should be aware of the varied regulatory landscape in this area and navigate accordingly.

In many countries existing legislation already gives significant rights to employee representatives on the introduction of new technology, which will need to be addressed before deployment. There will be commercially sensitive issues on timing alongside compliance here.

Mitigation

Considering the vicarious liability implications arising from bias, and the consequences of non-compliance with data privacy legislation and the EU AI Act, employers will need to demonstrate that they have taken positive steps to mitigate these risks wherever possible. Use cases will engage a range of duties owed to a wider group of stakeholders, not just employees and their representatives.

Employers should address the human oversight measures envisaged in the EU AI Act. The Act provides that effective human oversight is where those who monitor the system can properly understand the system’s capacities, risks and limitations in operation and output. This includes the ability to safely stop the system, via use of a “stop” button or countermand. This requirement extends to ensuring that those who monitor the system have the appropriate level of AI literacy and AI-specific skills.

What should businesses be thinking about now?

Here are some suggested questions for businesses to ask when looking to deploy agentic AI:

  1. Have all necessary impact assessments been undertaken?
  2. Has the procurement process assessed the impact and allowed for any necessary employee, employee representative and stakeholder consultation?
  3. Have appropriate privacy risk assessments been conducted for the use and deployment of agentic AI?
  4. How will the risk of bias be monitored and if discovered how will it be remediated?
  5. How will human oversight work, practically speaking?
  6. Is any training or upskilling required for those owning, using or monitoring agentic AI?
  7. What guardrails need to be put in place around the scope of permitted autonomy and/or employee development of agentic AI?
  8. How will agentic AI performance be monitored alongside human performance?
  9. How will agentic AI be monitored as it matures?
  10. Are contingency plans in place, in the event of agentic AI failure or the need to countermand?

As we all know, AI, and agentic AI in particular, is developing fast. Employers must understand what is required to maximise the potential gains and minimise the risks involved in the deployment of agentic AI.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© DLA Piper
