AI is transforming hiring, making it easier for companies to find the right candidates for open positions. In theory, it also helps job seekers find the opportunities they are best qualified for.
Companies now rely on algorithms to screen resumes, assess video interviews, and rank candidates. But while these tools promise efficiency, they may also be unfairly filtering out qualified workers, raising growing concerns about hidden bias and discrimination. In practice, AI tools may actually be making it harder for workers and businesses to find each other.
How AI Hiring Tools Work
Many companies use automated resume screeners that scan applications for keywords tied to a job listing. Resumes that contain those keywords, or match a certain percentage of them, are passed along to a human screener.
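In its simplest form, this kind of keyword screening can be sketched in a few lines. The keyword list, threshold, and sample resume below are hypothetical; commercial screeners are far more elaborate, but the basic filtering logic looks something like this:

```python
import re

# Hypothetical keywords for a data-engineering listing (illustrative only).
REQUIRED_KEYWORDS = {"python", "sql", "etl", "airflow", "aws"}
MATCH_THRESHOLD = 0.6  # resumes matching 60% of keywords advance

def keyword_match_score(resume_text: str) -> float:
    """Fraction of required keywords that appear in the resume."""
    words = set(re.findall(r"[a-z0-9]+", resume_text.lower()))
    hits = REQUIRED_KEYWORDS & words
    return len(hits) / len(REQUIRED_KEYWORDS)

def passes_screen(resume_text: str) -> bool:
    """True if the resume would be shown to a human screener."""
    return keyword_match_score(resume_text) >= MATCH_THRESHOLD

resume = "Built ETL pipelines in Python and SQL on AWS."
print(keyword_match_score(resume))  # 0.8
print(passes_screen(resume))        # True
```

Note what a filter like this misses: a highly qualified candidate who describes the same experience in different words scores zero on those keywords and never reaches a human.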
AI tools are also increasingly common in interviews and pre-screening sessions as the candidate pool is narrowed down. Video analysis tools evaluate facial expressions or tone of voice, letting companies screen out candidates with attributes deemed undesirable for the position.
Typically, these AI systems rely on machine learning models trained on data from past hires. The algorithms then predict a candidate's fit, often with little human oversight.
Where Discrimination Happens
Bias can creep into AI screening tools through skewed training data. If a company's past hires lacked diversity, the AI may learn to prefer certain races, genders, or age groups.
For instance, Amazon scrapped a recruiting tool that penalized resumes with the word “women’s.” Facial analysis software has also misread emotions more frequently in people of color. Studies show these tools can replicate and amplify existing discrimination, especially when the algorithm’s decision-making process is opaque or flawed.
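A toy example makes the mechanism concrete. This is not any vendor's real system; the past hiring decisions below are invented so that hires happen to skew away from resumes mentioning "women's" activities. A naive scorer "trained" on that history inherits the bias:

```python
from collections import defaultdict

# Hypothetical past decisions: (resume keywords, was_hired).
# The hires in this invented data skew away from "women's".
past = [
    ({"chess", "captain"}, True),
    ({"debate", "captain"}, True),
    ({"women's", "chess", "captain"}, False),
    ({"women's", "debate"}, False),
    ({"robotics"}, True),
]

# "Training": observed hire rate for each keyword in past resumes.
seen = defaultdict(lambda: [0, 0])  # word -> [times hired, times seen]
for words, hired in past:
    for w in words:
        seen[w][0] += int(hired)
        seen[w][1] += 1

def score(words):
    """Average learned hire rate across a resume's known keywords."""
    rates = [seen[w][0] / seen[w][1] for w in words if w in seen]
    return sum(rates) / len(rates) if rates else 0.0

# Two resumes with identical qualifications, differing by one word:
print(score({"chess", "captain"}))             # higher score
print(score({"women's", "chess", "captain"}))  # lower, purely from skewed history
```

Nothing in the code mentions gender, yet the model penalizes the word "women's" because past decisions did. This is how opaque systems replicate and amplify existing discrimination.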
Legal and Ethical Concerns
As more companies adopt AI tools at every stage of their hiring process, regulators and advocates are raising alarms about potential discrimination and violations of workers' rights. Discriminatory AI hiring practices may violate equal employment opportunity laws.
The Equal Employment Opportunity Commission (EEOC) and the Department of Justice (DOJ) have warned companies about the legal risks of relying on biased technology. Employers that use AI tools in hiring have a legal and ethical duty to ensure those tools don't screen out protected groups, even unintentionally.
What Workers Can Do
If you think you have been discriminated against because a company used AI tools in its hiring process, you need to know your rights, and how to file a claim with the EEOC if those rights were violated.
If you believe an AI tool was used unfairly in a hiring decision, speak with an employment lawyer who can explain what employment law says about discrimination and AI hiring tools, and who can help you file an EEOC complaint. Staying informed is key to fighting back against algorithmic bias.
AI Tools Should Be Used Carefully
AI can streamline hiring, but it can also reinforce bias. Transparency and accountability are critical to ensure fair treatment for all candidates, not just those who fit an algorithm’s mold.