Employment Discrimination Via Artificial Intelligence
Thought leaders opine that advanced artificial intelligence (AI) is the most transformative event in human history. Experts extol the benefits of combining human intelligence with machine learning to increase productivity in every facet of society. Businesses are rapidly harnessing the power of AI in the workplace to increase efficiency, productivity, customer service, quality control, and employee safety, and to decrease costs of human labor.
Eighty-five percent of large employers reportedly use AI for employment-related tasks and decisions, from recruiting and hiring to performance measurement and promotion. AI can help create job descriptions, predict the likelihood of applicants’ success, recommend jobs and roles for applicants, identify potential candidates to recruit (by analyzing resumes, applications, and social media), select among applicants for hiring, conduct initial interviews through chatbots, onboard and train new employees, measure employee productivity and performance, select among employees for promotion, and improve communication between employees and management.
But experts also warn about the challenges and risks of AI. One of the biggest challenges is bias, which lurks in the data humans choose to input into proprietary algorithms, in the publicly available information we allow generative AI to curate when harvesting data, and in outcomes that may be predicated on wrong assumptions.
One reason employers use AI in employment decisions is to eliminate conscious and unconscious bias and, in turn, increase diversity and inclusion. But employee advocates and policymakers caution employers to beware: while AI can in theory reduce or eliminate inherent human bias, decision making based on algorithms may actually perpetuate it.
After all, AI is only as good as the data it relies on to make inferences. If selection criteria inputted into AI models favor candidates who are not in protected classes (or, consistent with the U.S. Supreme Court’s rationale in its recent decision in the litigation by Students for Fair Admissions against Harvard and UNC, if selection criteria favor candidates in protected classes), then AI can become a discriminatory screening tool. AI chatbots used in hiring might also make biased inferences about candidates from publicly available information on the internet and social media.
Because AI can preserve and multiply bias in employment decision making, employers should understand how their AI tools work and implement policies and procedures to avoid running afoul of federal, state, and local anti-discrimination laws that prohibit disparate treatment and disparate impact in the workplace.
Enforcement in the News: The EEOC’s First Lawsuit Over Discrimination Via AI
In August 2023, the Equal Employment Opportunity Commission (EEOC) settled its first lawsuit alleging employment discrimination through the use of AI. The lawsuit alleged that iTutorGroup, a company providing tutoring services, violated the Age Discrimination in Employment Act (ADEA) by programming software used in the employment application process to automatically reject male applicants over 60 years old and female applicants over 55. The lawsuit alleged that iTutorGroup failed to hire more than 200 qualified applicants over age 55 because of their age. The charging party alleged that she applied using her real birthdate, was immediately rejected, applied the next day using a more recent birthdate, and was offered an interview. The consent decree:
- requires iTutorGroup to pay $365,000 to applicants allegedly rejected because of their age;
- enjoins iTutorGroup from requesting applicants’ birthdates before job offers are made, from rejecting or screening out any applicants over 40 because of age or sex, and from retaliating against employees;
- requires iTutorGroup to adopt antidiscrimination policies for screening and hiring applicants and supervising tutors, to notify all employees involved in these HR tasks of the federal antidiscrimination laws, to provide four-hour training programs by EEOC-approved third parties for all employees and contractors involved in these HR tasks, to provide relevant training to new employees followed by annual training, and to implement complaint procedures and provide written notice to the EEOC of complaints; and
- requires iTutorGroup to contact all applicants who were allegedly rejected because of age, invite them to reapply, offer interviews for all renewed applications, and explain to the EEOC in writing the outcome of each application and interview, including why any offer was not extended.
Lawsuit Follows in Lockstep with Recent EEOC Guidance
The EEOC has recently warned employers about the risk of unlawful discrimination through the use of AI. In May 2023, it released guidance explaining that AI may cause disparate impact discrimination against applicants and employees under Title VII of the Civil Rights Act of 1964 (Title VII); the guidance recommends that employers verify that their AI selection tools do not produce substantially lower selection rates for individuals with protected characteristics. In May 2022, it released guidance explaining that AI may violate the Americans with Disabilities Act (ADA), for example when an employer evaluates video interviews recorded by applicants with speech disorders but fails to give them an opportunity to request accommodations. And in January 2023, the EEOC issued its draft strategic enforcement plan for 2023 through 2027, which demonstrates a clear focus on discriminatory use of AI throughout the employment life cycle, from recruitment through performance management. The guidance makes clear that employers’ AI tools, even those outsourced to third-party vendors, must comply with federal antidiscrimination laws. Some states and localities have also passed laws limiting the use of AI in human resources tasks, including Illinois (video interviews) and New York City (automated employment decision tools).
Tips for Employers on “Intelligent” Use of AI in the Workplace
- Don’t rely solely on AI software vendors’ representations about their employment AI tools; understand how your AI tools actually operate.
- Ensure that your employment AI tools comply with antidiscrimination laws and regularly audit outcomes for compliance.
- Train all employees on proper use of employment AI tools and how to avoid adding bias through their interactions with them.
- Don’t enter protected characteristics or proxies for protected characteristics as selection criteria in AI algorithms.
- Conduct audits for disparate impact, looking for correlations between input data and protected characteristics, and implement measures to eliminate any problematic selection criteria.
- Be transparent with applicants and employees about how you are using AI, and identify the specific information you are measuring.
- Give applicants opportunities to request accommodations.
- Good mantra: Involve a human in every employment decision.
- Best mantra: A human should always make the ultimate employment decision.
- Document how your decision making is performed by humans and AI, so you can explain it if necessary in litigation.
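To make the disparate-impact audit tip above concrete: one common screen compares selection rates across demographic groups using the EEOC’s “four-fifths” rule of thumb, under which a group’s selection rate below 80% of the highest group’s rate may indicate adverse impact warranting closer statistical review. The sketch below uses hypothetical data and function names and is illustrative only, not legal advice; the four-fifths rule is a rough screen, not a legal threshold.

```python
def selection_rates(outcomes):
    """Compute selection rate (selected / applied) per group.

    outcomes maps group name -> (number selected, number applied).
    """
    return {g: selected / applied for g, (selected, applied) in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag groups whose selection rate is below 4/5 of the top group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top < 0.8 for g, rate in rates.items()}

# Hypothetical audit data: group -> (selected, applied)
outcomes = {
    "group_a": (48, 100),  # 48% selection rate
    "group_b": (30, 100),  # 30% selection rate
}
flags = four_fifths_check(outcomes)
# group_b: 0.30 / 0.48 = 0.625, which is below 0.8, so it is flagged
```

A flagged group does not by itself establish unlawful disparate impact; it signals that the selection criteria should be examined (and, per the tips above, problematic criteria removed) with the help of counsel and, where appropriate, a statistician.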
Considerations and Tips for Other Uses of AI in the Workplace
Monitoring employees through electronic surveillance of internet activity, email, chat, social media, and wearable devices may be useful in analyzing employee data, managing schedules, and increasing productivity. However, employers should carefully balance these benefits against employees’ privacy interests. Employers should also avoid chilling employees’ rights to engage in protected concerted activity, such as discussing wages and working conditions, under the National Labor Relations Act (NLRA). AI can also be useful in identifying behavioral and other workforce trends, predicting behavior, and improving employee retention and satisfaction. It is generally best to be transparent with employees about what data you use and why.
Employers should verify that when they collect and process applicant and employee data, they comply with any applicable privacy laws, such as the EU’s General Data Protection Regulation (GDPR).
Employers should consider implementing policies that limit employee use of generative AI. Consider whether permitting employees to use generative AI could lead to violations of confidentiality policies covering your proprietary information or confidential customer or vendor information. Also consider how your company might risk disseminating or publishing inaccurate information created by generative AI, or violating intellectual property laws. To mitigate these risks, consider implementing policies to verify the accuracy of content created by AI chatbots, to require that AI-generated content be identified as such, and to limit generative AI to internal uses only, rather than allowing the content to be posted on the company’s website or otherwise distributed or published.