EEOC Publishes Guidance on Using AI in Employment Decisions

Jun 13, 2023

The Equal Employment Opportunity Commission has issued new guidance on how employers can properly use software, algorithms and artificial intelligence-driven decision-making tools when screening job applicants and selecting candidates. 

The EEOC has grown concerned about the possible adverse impacts of these technologies, which can assist employers with a wide range of employment matters, including hiring, recruitment, retention, performance monitoring, and determining pay, promotions, demotions, dismissals and referrals.

The guidance follows the EEOC’s recent announcement that it would pursue enforcement of violations of Title VII of the Civil Rights Act of 1964 and other statutes under its jurisdiction arising from use of AI in employment decisions.

Employers are increasingly using algorithmic decision-making tools in various stages of the employment process. Examples include:

  • Resume scanners that prioritize applications using certain keywords,
  • Monitoring software that rates employees on the basis of their keystrokes or other factors, and
  • Testing software that provides “job fit” scores for applicants or employees.

The new guidance includes a series of questions and answers to help employers prevent the use of AI and other technologies from leading to discrimination on the basis of race, color, religion, sex or national origin, in violation of Title VII.

Here are the main points of the guidance:

Responsibility: Employers are ultimately responsible for discriminatory decisions rendered by algorithmic decision-making tools, even if they are administered by another entity, such as a software vendor.

Assessment: Employers should assess whether their use of technology has an adverse impact on a particular protected group by checking whether use of the procedure causes a selection rate for individuals in the group that is “substantially” less than the selection rate for individuals in another group.

The selection rate for a group of applicants or candidates is calculated by dividing the number of persons hired, promoted or otherwise selected from the group by the total number of candidates in that group.

For example: Consider a situation where 100 white individuals and 50 Black individuals take a personality test that is scored by an algorithm as part of a job application, and 60 of the white applicants and 15 of the Black applicants advance to the next round of the selection process. Based on these results, the selection rate for white applicants is 60/100 (60%) and the selection rate for Black applicants is 15/50 (30%). Because the Black applicants' selection rate is half that of the white applicants, the test may have an adverse impact.
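To make the arithmetic concrete, here is a minimal Python sketch of the selection-rate comparison using the numbers from the example above. The helper functions are illustrative, and the 0.8 threshold reflects the "four-fifths rule," which the EEOC guidance treats as a general rule of thumb for when one group's selection rate is "substantially" lower, not a definitive legal test.

```python
# Minimal sketch of the adverse-impact check described above.
# The function names and the 0.8 (four-fifths) threshold are
# illustrative; the four-fifths rule is a rule of thumb, not a
# bright-line legal standard.

def selection_rate(selected: int, total: int) -> float:
    """Selection rate = number selected from a group / total candidates in that group."""
    return selected / total

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower group's selection rate to the higher group's rate."""
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Numbers from the example: 60 of 100 white applicants and
# 15 of 50 Black applicants advanced to the next round.
white_rate = selection_rate(60, 100)   # 0.60
black_rate = selection_rate(15, 50)    # 0.30

ratio = impact_ratio(white_rate, black_rate)  # 0.30 / 0.60 = 0.50

# A ratio below 0.8 suggests one group's selection rate is
# "substantially" lower and may indicate adverse impact.
if ratio < 0.8:
    print(f"Impact ratio {ratio:.2f} is below 0.8 - possible adverse impact.")
```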

If an employer is in the process of implementing a selection tool and discovers that using it would have an adverse impact on individuals of a protected class, it can take steps to reduce the impact or select a different tool, per the guidance.

According to the EEOC, an employer that considered a less discriminatory algorithm during the development process but failed to adopt it could face liability on that basis.

The takeaway

Employers using algorithmic decision-making tools need to take the same care as they do when making employment decisions without the assistance of technology.

Businesses should not implement these technologies blindly, without considering whether they could have an adverse impact that violates the law and prompts litigation or enforcement action by the EEOC.

Experts advise moving forward carefully and working with your vendor to ensure the technology doesn't land your organization in legal trouble.
