AI and Evolving Legislation in the US and Abroad

Here’s how business leaders can prepare to use AI while aligning with emerging legislation globally.

Helena Almeida, Vice President, Managing Counsel, ADP

August 30, 2024

5 Min Read

As AI continues to dominate the headlines, policymakers and legislators around the world are considering how and when to regulate the technology to protect employees, consumers, and citizens at large. As business leaders, it’s important to think strategically and deliberately about how we use AI so that it stays in sync with current and emerging legislation, whether that legislation affects businesses in the United States or globally. 

Key Principles Underlying AI Legislation 

Broadly speaking, these efforts to regulate AI aim to establish protections and guardrails for the safe use of AI. Common themes run through much of the legislation passed and the proposals in process in the US and abroad. The principles underlying the recently adopted Artificial Intelligence Act (a.k.a. the EU AI Act), for example, include protecting privacy, mitigating bias, providing transparency and explainability, ensuring human oversight, and monitoring results after the product is in use. 

Scrutiny is especially high on the use of AI in the HR space, because AI-assisted decisions about hiring, promotion, compensation, and termination can have serious consequences for individuals.  

New and existing legislation in both the EU and North America continues to evolve as part of the compliance landscape. 


EU AI Act

The European Parliament approved the final text of the European Union’s AI Act earlier this year, and the law entered into force in August. As with the GDPR (General Data Protection Regulation) privacy legislation, the EU has taken the lead and is likely to strongly influence future AI legislation across the globe. 

The Act takes a horizontal approach, regulating AI whether it’s a standalone offering or embedded in hardware. It also takes a life-cycle approach, from initial development to usage to post-market monitoring, with the majority of obligations falling on developers or providers of high-risk systems, as well as on some deployers (users of high-risk AI systems). 

The Act focuses on risk management, addressing three key risk areas: unacceptable risks, high-risk use cases, and foundation models such as large language models (LLMs) and general-purpose AI (GPAI). 

Canada’s Artificial Intelligence and Data Act (AIDA) 

Canada is considering a proposal that would regulate AI systems and protect against potential harms, including harms caused by systemic bias. As in the EU, the focus is on high-impact systems, including those used to screen candidates for employment, and the law would require human oversight, bias testing, and monitoring, among other measures. AIDA is still moving through the legislative process, which will define the final text. 


New and Existing US Laws

Last October, President Biden issued an Executive Order concerning AI to establish guidance for AI safety and security, privacy, equity and civil rights, consumer and worker protection, and innovation and competition. Many states are considering legislation on how to regulate AI as well.  

In July 2023, a New York City law went into effect that impacts employers using automated employment decision tools (AEDT), which include AI, to screen applicants and employees. The law applies if an employer uses an AEDT to “substantially assist” or replace a human’s discretionary decision. 

In California, the California Privacy Protection Agency (CPPA) just approved draft regulations on automated decision-making tools. Again, the focus is on tools that substantially facilitate human decision-making for significant decisions, such as hiring, compensation, promotion, and termination.  

In Illinois, the Artificial Intelligence Video Interview Act requires employers who use AI tech and analysis to screen applicants for positions to be transparent, obtain consent or allow an opt out, and conduct bias testing. 

At the federal level, the Equal Employment Opportunity Commission (EEOC) is weighing in on the use of AI in employment. The EEOC helps enforce nondiscrimination laws such as Title VII of the Civil Rights Act of 1964, which generally prohibits employment discrimination based on race, color, religion, sex, or national origin.  


To help assess whether an employment decision is discriminatory, the EEOC relies on the Uniform Guidelines on Employee Selection Procedures, adopted in 1978. These guidelines apply when an employer uses a selection procedure to make employment decisions such as hiring, promotion, and termination. They provide guidance on how employers should conduct bias assessments of these selection procedures to ensure there is no unjustified adverse impact on a particular group.  
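
To make the adverse-impact concept concrete, here is a minimal Python sketch of the selection-rate comparison often described as the four-fifths rule. The group names and numbers are hypothetical, and a real bias assessment under the guidelines involves much more than this single ratio.

# Illustrative sketch of a four-fifths-rule style comparison.
# Group names and numbers are hypothetical, not real data.

def adverse_impact_ratios(outcomes):
    """outcomes maps group name -> (number selected, number of applicants).
    Returns each group's selection rate divided by the highest group's rate;
    a ratio below 0.8 is commonly treated as evidence of adverse impact."""
    rates = {group: selected / applicants
             for group, (selected, applicants) in outcomes.items()}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

# Hypothetical screening results from an AI-assisted selection tool
outcomes = {"Group A": (48, 80), "Group B": (24, 60)}
for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "review for adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")

In this made-up example, Group B's selection rate is only two-thirds of Group A's, which would prompt a closer look at how the tool is screening candidates.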

In May 2023, the EEOC published a technical assistance document directly concerning AI. The EEOC made clear that the Uniform Guidelines apply to AI -- specifically, to algorithmic decision-making tools when they are used to make or inform decisions about whether to hire, promote or terminate.   

How Employers Can Prepare

The best way to prepare for regulatory changes is to get your house in order. Most crucial is having an AI and data governance structure. This should be part of the overall product development lifecycle so that you’re thinking about how data and AI are being used from the very beginning. Some best practices for governance include:  

  • Forming a cross-functional committee to evaluate the strategic use of data and AI products 

  • Ensuring you have experts from different domains working together to design algorithms that produce output that is relevant, useful and compliant 

  • Implementing a risk assessment program to determine which risks are at issue for each use case (see the illustrative sketch after this list) 

  • Executing an internal and external communication plan to inform stakeholders about how AI is being used in your company and the safeguards you have in place 
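
As one illustration of the risk-assessment step above, here is a minimal Python sketch of a per-use-case risk register. The fields, tier labels, and example use cases are assumptions for illustration only and are not drawn from any particular statute.

# Illustrative-only sketch of a per-use-case AI risk register; the fields,
# tiers, and example use cases are hypothetical, not taken from any law.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    affects_employment_decisions: bool  # e.g., hiring, promotion, pay, termination
    human_reviews_output: bool

def risk_tier(use_case: AIUseCase) -> str:
    """Map a use case to an internal tier so the governance committee
    knows which safeguards (bias testing, notices, oversight) to require."""
    if use_case.affects_employment_decisions and not use_case.human_reviews_output:
        return "high: require bias testing, transparency notices, and human oversight"
    if use_case.affects_employment_decisions:
        return "elevated: require bias testing and ongoing monitoring"
    return "standard: routine review"

register = [
    AIUseCase("resume screening assistant", True, False),
    AIUseCase("internal HR policy chatbot", False, True),
]
for uc in register:
    print(f"{uc.name}: {risk_tier(uc)}")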

AI has become a significant competitive factor in product development. As businesses develop their AI programs, they should continue to abide by responsible and ethical guidelines to help them stay compliant with current and emerging legislation. Companies that follow best practices for the responsible use of AI will be well-positioned to navigate current rules and adapt as regulations evolve. 

About the Author

Helena Almeida

Vice President, Managing Counsel, ADP

Helena Almeida is Vice President, Managing Counsel at ADP, working to ensure ADP’s HR and Benefits products and services enable ADP’s clients to achieve their goals with compliance in mind.  
