What Changing AI Rules Mean for Hiring

Laws and regulations around the use of artificial intelligence in hiring are finally being introduced, yet they still fall short in terms of equity and fairness. Here’s why.

Rena Nigam, Founder & CEO, Meytier

April 6, 2022

4 Min Read

Today, the job application process has moved almost entirely online, and the number of automated tools employers leverage has exploded. The pandemic forced many of the last in-person holdouts of the job search online, with companies around the globe trading in-person tests and interviews for Teams and Zoom calls. It is now routine to start with a video interview and make an offer without ever meeting the candidate in person. But is this process adequately regulated? Are there any laws to protect candidates?

The answer is yes, but it's far from enough. In December 2021, New York City passed a law restricting the use of artificial intelligence in hiring. The new law prohibits the use of AI and algorithm-driven technologies in hiring unless the tools have undergone an audit for racial and gender bias within a year of their use, and the audit's results must be published on the employer's website. Employers must also inform candidates which tools they'll be using and what those tools assess. While the rules don't take effect until 2023 and only apply to employers hiring in NYC, the crackdown on AI and technology in hiring is just beginning.

While this law is a step in the right direction, it does not go far enough. Until now, much of the technology used in recruiting has run without proper transparency or guardrails. We have to be prepared to take this further, because the hiring process will keep evolving as new technologies become available.

What Are the Current Issues With Hiring and AI?

Anyone who has ever posted a job online understands the need for these tools. Half the applications are fake, and the real ones are often irrelevant, ranging from resumes that lack critical skills to ones that aren't even in the right industry. A job posting can easily draw hundreds of applications in an hour.

These technology-based tools range from simple resume skill matching to intelligent programs that make decisions based on previous successful applicants and learn as they go. Yet all of them are capable of bias. Programs that match job descriptions to resumes tend to filter out more women, because women typically claim fewer skills on their resumes than their male peers. More sophisticated AI-driven programs have taught themselves to downgrade resumes that include the word "women's" or name traditionally women's colleges. Video interview tools that claim to analyze voice tonality and facial expressions have raised concerns because voice recognition has a history of misunderstanding women and racial minorities.
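To make the resume-matching mechanism concrete, here is a minimal sketch of the kind of naive keyword screen described above. Everything in it is hypothetical: the required-skill list, the resumes, and the 0.6 cutoff are invented for illustration, not taken from any real product.

```python
# Illustrative sketch only: a naive keyword-based resume screen.
# The skill list, resumes, and threshold below are all hypothetical.

REQUIRED_SKILLS = {"python", "sql", "machine learning", "aws", "docker"}

def skill_match_score(resume_text: str) -> float:
    """Fraction of required skills that appear verbatim in the resume."""
    text = resume_text.lower()
    hits = sum(1 for skill in REQUIRED_SKILLS if skill in text)
    return hits / len(REQUIRED_SKILLS)

# Two comparably qualified candidates; one simply lists fewer skills verbatim.
resume_a = "Python, SQL, machine learning, AWS, Docker, Kubernetes"
resume_b = "Built ML pipelines in Python; led a data team of six"

for name, resume in [("A", resume_a), ("B", resume_b)]:
    score = skill_match_score(resume)
    verdict = "advance" if score >= 0.6 else "reject"
    print(f"Candidate {name}: score={score:.2f} -> {verdict}")
```

Candidate B never says "SQL" or "machine learning" word for word, so the screen rejects them despite equivalent experience. If one group systematically lists fewer keywords, as the research on women's resumes suggests, the filter's results skew before a human ever looks at them.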

Can the Law Ensure Fairness?

There are currently no industry-standard accountability practices to ensure that such hiring tools are not biased. A mandatory audit for bias is undoubtedly a step in the right direction, yet it falls short of truly guaranteeing equity and fairness. For starters, it only requires an audit for two specific kinds of bias: gender and race. Yet bias arises from far more aspects of identity, and often compounds for people with intersecting identities. For example, women between 55 and 65 account for nearly half of the long-term unemployed.
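The law's text doesn't prescribe an audit methodology, but one widely used yardstick is the selection-rate "impact ratio" behind the EEOC's four-fifths rule of thumb. The sketch below shows how simple such a check is, and why its scope matters: it only surfaces bias for the groups you think to measure. The outcome data here is hypothetical.

```python
# Illustrative sketch of one common audit metric: per-group selection rates
# and the impact ratio (EEOC four-fifths rule of thumb). Data is hypothetical.

from collections import Counter

# (group, advanced_past_screen) for each applicant
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
advanced = Counter(group for group, ok in outcomes if ok)

rates = {g: advanced[g] / applied[g] for g in applied}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "" if ratio >= 0.8 else "  <- below four-fifths threshold"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```

An audit like this flags group_b's 0.33 impact ratio immediately, but only because group_b's label was recorded. Run the same check with only gender and race columns, and bias against, say, older applicants or applicants with disabilities passes through unmeasured.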

Age, sexuality, veteran status, disability, and other identities are equally important to consider in hiring. AI is only as unbiased as the data it is trained on. As companies work to make sure their audits reveal no bias based on gender or race, we run the risk of skewing algorithms against older employees or employees with disabilities, who aren't well represented in the data sets AI is trained on. Research has even shown that AI doesn't simply replicate the bias present in data sets; it often amplifies it.

There is no doubt that this new law is a massive step forward in accountability. Candidates have the right to know which tools are being used to evaluate them and what those tools measure. Yet it is imperative that this law be expanded. An annual audit for racial and gender bias is simply not broad enough to keep up with the rapid pace of innovation. Tools should be audited for bias against all aspects and intersections of diversity. The hiring tools in question have a massive impact on society and prosperity, and this new law begins an important process toward equity. Candidates should have a reasonable expectation of a fair shot when they apply for a job.

About the Author

Rena Nigam

Founder & CEO, Meytier

Rena Nigam is the Founder & CEO of Meytier, which she started with the mission of improving diversity at scale through a technology-based approach. She is an entrepreneur focused on building and scaling firms that reimagine businesses through technology. Through 2018, Rena was President and a Board Member at Incedo. Previously, she co-founded Aspark (sold to LiquidHub, subsequently acquired by Capgemini) and was on the executive team of Mphasis until 2011. Follow Rena on LinkedIn.
