October 20, 2020
Due to the continuing effects of COVID-19, many businesses have been forced to move their recruiting and interview processes entirely online. This has accelerated the adoption of Artificial Intelligence (A.I.) and other data-driven tools in the hiring process, tools that a growing number of employers were already using before the pandemic. Advocates argue that A.I. speeds up the hiring process and eliminates human bias and subjectivity. Without proper vetting and analysis, however, these tools can actually introduce bias into the process and expose employers to liability. This article explores the ways in which A.I. has been used during the hiring process, describes the expanding legal framework in which these tools must operate, and identifies potential pitfalls for employers.
Increased Use of A.I. in Hiring
The use of computer processing power in the screening and hiring process is not new, but the simple text searches of yesteryear have yielded to more complex algorithms that touch every stage of recruiting. Tools such as LinkedIn Recruiter use A.I. to search the social media profiles of millions of individuals and determine the “best” audience for a job posting, in an attempt to screen out, before the posting is even seen, applicants who lack the required qualifications.
After a job is posted, employers can turn to programs that parse and compare the experience described in resumes to determine which candidates’ work histories most closely match the requirements of an open position, and can then use chatbots to reach out to applicants to determine, for example, whether the person is available to start on the employer’s preferred timeline or is open to commuting.
Some companies have applicants play neuroscience-based computer games, the results of which are then analyzed to predict candidates’ cognitive and personality traits. One tech company, HireVue, uses facial and voice recognition software to analyze a candidate’s body language, tone, and other factors during recorded interviews to determine whether the candidate exhibits preferred traits.
Proponents of this technology tout its ability to help recruiters and HR departments quickly sift through mountains of applications and more efficiently identify qualified candidates. A.I. systems can ensure that every resume is at least screened, and they can save time by analyzing publicly available data such as social media profiles and posted resumes. Advocates also argue that A.I. systems can be fairer and more thorough than human recruiters, applying the same analysis to every applicant, whether it is the first resume reviewed for a position or the thousandth. In theory, A.I. can also be used to avoid the unconscious preferences and biases of human recruiters by stripping out information relating to, among other things, name, age, and gender.
Those who are wary about the use of A.I. in recruiting, however, point out that the systems are only as good as those who “feed the machine.” If an A.I. tool is trained on the resumes of people the company has previously hired, the past biases and preferences of the company’s hiring professionals can be inherited by the tool. In fact, a similar concern, that decisions anchored to historical data perpetuate historical disparities, is one of the rationales behind the groundswell of state and municipal laws banning salary history inquiries.
Amazon reportedly scrapped an internally developed recruiting tool after it discovered that the algorithm was disfavoring resumes that included the word “women’s” (for example, a resume describing the applicant’s participation on a college’s women’s ice hockey team) and candidates who graduated from two all-women’s colleges. (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G). This occurred because the algorithm had been trained on resumes from applicants Amazon had previously hired, and those hires were overwhelmingly male.
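To make that mechanism concrete, the following is a minimal sketch, using entirely synthetic data and an ordinary logistic regression (it does not reflect Amazon’s actual system or any commercial tool), of how a screening model trained on past hiring decisions can learn to penalize a feature, such as a resume’s mention of the word “women’s,” that says nothing about an applicant’s qualifications.

```python
# Hypothetical sketch with synthetic data: a screening model trained on past
# hiring decisions inherits the bias embedded in those decisions.
# Feature names and numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A genuinely job-related signal: years of relevant experience.
experience = rng.normal(5, 2, n)
# A signal that is not job-related: whether the resume mentions "women's".
mentions_womens = rng.integers(0, 2, n)

# Historical "hired" labels reflect past human decisions that penalized
# resumes mentioning "women's", independent of experience.
hired = (experience - 1.5 * mentions_womens + rng.normal(0, 1, n)) > 5

X = np.column_stack([experience, mentions_womens])
model = LogisticRegression().fit(X, hired)

# The trained model reproduces the historical penalty: the coefficient on the
# "women's" indicator is strongly negative even though that feature says
# nothing about the applicant's qualifications.
print(dict(zip(["experience", "mentions_womens"], model.coef_[0].round(2))))
```

Because the historical labels already embed the penalty, retraining on them reproduces rather than removes it, which is why deleting explicit gender fields does not by itself guarantee a neutral outcome when correlated proxies remain in the data.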
Unintentional discrimination could also seep into A.I. systems in less direct ways. An algorithm trained to prefer employees within a certain commuting distance might result in applicants from poorer areas being disadvantaged. Even as recently as 2019, top facial recognition systems were shown to misidentify female black faces ten times more frequently than female white faces. (https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/). This suggests that A.I. programs might have issues analyzing the facial expressions of black applicants.
Differences in speech patterns and vocabulary that correlate with race or ethnicity could complicate automated voice analysis. These are not biases that are being intentionally programmed into A.I. software, but they could nonetheless result in certain groups of applicants being unfairly disadvantaged, opening up employers to potential claims under various anti-discrimination laws.
Potential Risks under Employment Laws
Like any other recruiting or hiring practice, the use of A.I. systems to screen and interview candidates implicates the New York State and New York City Human Rights Laws. Both statutes prohibit discrimination based on disparate treatment and/or disparate impact. While a claim of disparate treatment—i.e., intentional discrimination—might seem odd when talking about use of a computer program, as discussed above, unconscious bias can manifest in an A.I. system because of its programming and training.
Thus, a court could find that an employer faces the same liability for a program exhibiting the unconscious bias of its programmer as it would if the programmer had made the hiring decision directly. Alternatively, an employer could face a disparate impact claim if use of a particular A.I.-driven program or algorithm adversely impacts members of a protected class, such as the female applicants disfavored by Amazon’s recruiting tool, or disabled applicants with significant concentration or communication issues who might be disfavored by facial and voice recognition tools. Similar concerns arise under federal laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, and the Americans with Disabilities Act.
In addition to laws focusing on discrimination, the use of certain A.I. recruiting tools could implicate New York’s data protection laws, which were amended in 2019 by the New York SHIELD (“Stop Hacks and Improve Electronic Data Security”) Act to include biometric data in the definition of protected personal information. To the extent that New York employers use facial or voice recognition software to analyze applicants’ video interviews, they may have to develop policies to ensure that their storage and use of that data complies with the modified statute.
Furthermore, the nature of an online application process means that employers may inadvertently collect biometric data from individuals who reside outside of the states in which the company normally operates, which could expose the employer to additional legal requirements of which it might not be aware.
In addition to existing statutes, the New York City Council has introduced legislation intended to limit the discriminatory use of A.I. technology. If enacted, the new law would prohibit the sale of “automated employment decision tools” unless the tools’ developers first conducted anti-bias audits to assess the tools’ predicted compliance with Section 8-107 of the New York City Administrative Code, which sets forth the city’s employment discrimination laws and prohibits, among other things, employment practices that disparately impact protected applicants or workers. (https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9).
This comes on the heels of a 2019 New York State bill that created a temporary commission to study and investigate how to regulate artificial intelligence, robotics, and automation. (https://www.nysenate.gov/legislation/bills/2019/s3971). These efforts demonstrate that state and local officials are focused on the use of A.I., and that employers should be aware of the rapidly shifting legal framework.
What Employers Should Know
COVID-19 has accelerated the nationwide movement toward work-from-home arrangements, in turn accelerating the adoption of A.I. tools in the hiring process. To the extent that employers are considering using such tools, either in-house or through a recruiting company, there are certain issues of which they should be cognizant:
- In much the same way that employers carefully develop and identify non-discriminatory factors that are important to their traditional hiring decisions, they must develop and modify (where appropriate) the inputs that are fed into recruiting and hiring programs and algorithms. This will give employers the opportunity to assess whether the factors are, in fact, job-related, which is a lynchpin criterion under many employment laws.
- One of the main selling points for machine learning tools is that they can adapt on their own to feedback from the person making employment decisions. The downside of this constant adaptation is that employers cannot rely on an initial analysis of whether the program is returning results that may disadvantage one group or another. Employers should consider regularly auditing the results produced by these tools to ensure that the programs are not inadvertently “learning” the wrong lessons (a brief sketch of one such audit appears after this list).
- Many employers contract with outside vendors to handle parts of the recruiting process, particularly the initial vetting of applicants and/or the advertising to specific potential candidates. Such arrangements do not exempt the employer from liability if the vendor is using tools that discriminate against protected groups. As such, employers—through appropriate contract language—should require their vendors to comply with all existing employment laws in connection with the screening and hiring of job applicants.
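As one illustration of the periodic auditing suggested above, the sketch below applies the EEOC’s “four-fifths” rule of thumb to hypothetical screening results: it computes each group’s selection rate, compares it to the highest group’s rate, and flags impact ratios below 0.8 for review. The data, group labels, and threshold handling are illustrative assumptions only; a check like this is a screening signal, not a substitute for a validated statistical analysis or legal advice.

```python
# Hypothetical periodic audit: compare selection rates by group using the
# EEOC "four-fifths" rule of thumb. All data below is made up for illustration.
from collections import Counter

# (group, advanced_by_tool) pairs, e.g. exported from a screening tool's logs.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

screened = Counter(group for group, _ in outcomes)
advanced = Counter(group for group, passed in outcomes if passed)

# Selection rate = share of screened applicants in each group that advanced.
rates = {group: advanced[group] / screened[group] for group in screened}
highest_rate = max(rates.values())

for group, rate in sorted(rates.items()):
    impact_ratio = rate / highest_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} [{flag}]")
```

Run against a tool’s actual output on a recurring schedule, a flagged ratio would be a prompt to investigate the tool’s inputs and training data, ideally with counsel involved.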
Reprinted with permission from the October 20, 2020 edition of the NEW YORK LAW JOURNAL © 2020 ALM Media Properties, LLC. All rights reserved. Further duplication without permission is prohibited. ALMReprints.com – +1 877-257-3382 - reprints@alm.com.