EEOC Backs AI Bias Claims Against Workday's Hiring Software

(This could be a game-changer for HR and AI ethics)

The EEOC is challenging HR software firm Workday over claims that its AI-driven hiring tools discriminate against job applicants.

In a significant legal move, the EEOC filed an amicus brief urging a federal judge to allow a class action lawsuit against Workday to proceed.

The lawsuit, initiated by Derek Mobley, alleges that Workday's AI-powered screening software rejected applicants based on race, age, and disability. Mobley, a Black man over 40 who has anxiety and depression, says he was turned down for more than 100 jobs he applied to through Workday's platform.

The EEOC contends that Workday functions much like a traditional employment agency, which would make it subject to Title VII of the Civil Rights Act of 1964. By using Workday's platform, the agency argues, employers are effectively outsourcing part of their hiring decisions, so biased algorithms can translate directly into biased outcomes.

This case highlights the growing concerns around AI in hiring. While AI can streamline recruitment, it's crucial to ensure these tools do not perpetuate existing biases. With approximately 80% of U.S. employers using AI in hiring, this lawsuit underscores the need for vigilance and fairness in tech-driven processes.

See the link to the full article: https://www.reuters.com/legal/transactional/eeoc-says-workday-covered-by-anti-bias-laws-ai-discrimination-case-2024-04-11/

♻️ Repost to spread awareness!

#bias #hiring #AI #informedecisions

The Untapped Value of Job Interviews

In recent research, Vikram R. Bhargava and Pooria Assadi of Cambridge University examine the enduring relevance of job interviews in the age of AI.

Why do companies still rely on job interviews despite their high cost? The rationale seems straightforward: interviews are meant to predict a candidate's performance and fit within the company. Yet evidence suggests we are generally poor at predicting performance and assessing fit through interviews, due to biases, overconfidence in our own judgment, and the fact that we rarely learn how rejected candidates would have performed.

While algorithms are often more consistent than humans and in many cases are better at predicting job performance, the role of interviews extends beyond mere assessments. Interviews imbue the hiring process with a human element that algorithms cannot replicate. They provide a platform for candidates to express their unique qualities and for employers to convey their organizational culture, fostering a mutual assessment that goes beyond data points.

The unacknowledged value of interviews lies in the personal interaction they bring to the hiring process, offering a sense of involvement and choice that enriches both parties' understanding. Interviews can be both predictive and inclusive if they are structured effectively: when standardized, skills-based, and data-driven, they reduce bias and improve predictive accuracy.

Interviews are here to stay, retaining significant value even in an AI-dominated era where the human touch is increasingly cherished. They possess the potential to evolve into a predictive powerhouse that supports a fair and inclusive hiring environment.

If you're looking to transform your interview process and hiring teams into a more effective, predictive tool, feel free to reach out. Let's discuss how you can leverage both human insight and data-driven methods to refine your recruitment strategy.

#interviews #AI #informedecisions

Unveiling the AI in Hiring Black Box #1

In her book "The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now," Hilke Schellmann makes a compelling case for the ethical use of AI in hiring.

One story she tells is that of Lizzie, a young makeup artist from the United Kingdom. Lizzie's experience began when her employer, needing to reduce headcount during the pandemic, turned to a well-known AI-driven one-way video interview tool as part of its decision-making process.

Despite a commendable three-year tenure at a MAC Cosmetics counter and consistently high performance scores, Lizzie was unexpectedly let go.

Perplexed by this decision, Lizzie inquired about the rationale behind her layoff and was informed by her manager that she had scored a zero on her one-way video interview—a score that seemed incongruous with her known capabilities and work ethic.

Refusing to accept this outcome passively, Lizzie, along with two other laid-off makeup artists, sought the assistance of pro bono employment lawyers to challenge their unfair dismissals. It was during this process that a critical revelation came to light: the video interview that purportedly led to her layoff had never actually been scored.

Lizzie's case underscores the potential fallout of unchecked reliance on AI in making pivotal decisions about employment. Her initial zero score from an unassessed interview starkly highlights the risks of opaque AI systems in determining individuals' professional fates.

This narrative isn't merely a caution against AI but a call for its ethical use. It emphasizes the importance of integrating transparency, accountability, and human oversight into AI-driven hiring tools. AI has immense potential to make hiring more efficient and equitable, but only if implemented with care and consideration for its impact on real lives.

As we move forward, let's advocate for the demystification of AI in hiring. We need to ensure that these technologies are used responsibly, with clear avenues for review and rectification of errors. By doing so, we can harness the power of AI to improve the hiring process, while safeguarding against its misuse.

#ethicalAI #AIinhiring #worktech #informedecisions

Bytes of Bias: Rite Aid's AI Misstep Leads to FTC-Enforced Ban

Rite Aid has been banned from using facial recognition technology for five years following a Federal Trade Commission (FTC) ruling. The decision comes after the FTC found that Rite Aid's AI system, used between 2012 and 2020, incorrectly identified consumers, especially women and people of color, as shoplifters. This led to various forms of harassment, including wrongful accusations, police confrontations, and public humiliation.

The FTC's complaint highlights Rite Aid's failure to implement safeguards against these harms. The technology produced numerous false positives, misidentifying individuals based on poor-quality images and inaccurate matches. The system was more prone to errors in stores located in predominantly Black and Asian communities.

As part of the settlement, Rite Aid must now establish comprehensive procedures to mitigate risks when using biometric data and discontinue any such technology if it cannot ensure consumer safety. The company is also required to enhance its information security program and comply with a 2010 Commission data security order, which it previously violated.

See the link to the full article in the first comment.

#AI #bias #informedecisions

AI GENDER BIAS - EXAMPLE

Here is an example of ChatGPT gender bias.
Many thanks to @Keith McNulty for pointing this out and sharing some sample prompts.

#AI #bias #informedecisions

3 WAYS AI CAN ENHANCE YOUR RECRUITMENT

With the rapidly advancing field of artificial intelligence, there are now powerful tools available to help organizations optimize their hiring processes. Here are three ways that AI can be used to enhance your hiring practices and give your company a competitive edge.

1. Use AI to continuously refine your hiring practices - AI that compares hiring scores to later performance and retention can identify which components of the process are more or less predictive and adjust their weighting accordingly to improve overall predictive validity. AI can also suggest additional interview questions similar to the most predictive ones and remove non-predictive questions from the process (see the sketch after this list).

2. Use AI to identify your best interviewers - AI can evaluate each interviewer's predictive capabilities by analyzing the percentage of candidates they have hired who are considered top performers and have remained with the organization long-term, versus those who have left or been fired.

3. Use AI to track and mitigate bias in your hiring processes - AI can identify systematic biases at the individual or group level, such as first impressions, overemphasizing credentials versus skills, favoring or disfavoring particular genders or ethnicities, and much more.
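To make points 1 and 3 concrete, here is a minimal Python sketch using pandas and numpy. The data, column names, and cutoff are all hypothetical and purely illustrative; this is not any vendor's implementation. It estimates each interview component's predictive validity as its correlation with later performance ratings, derives simple weights from those correlations, and then checks selection rates by group against the four-fifths rule. Interviewer-level hit rates (point 2) could be computed the same way with a groupby on the interviewer.

```python
# Illustrative sketch only: hypothetical data and column names.
import numpy as np
import pandas as pd

# Hypothetical hiring data: per-hire interview component scores,
# a later performance rating, and a self-reported demographic group.
df = pd.DataFrame({
    "structured_interview": [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.0, 3.6],
    "work_sample":          [3.8, 4.4, 2.6, 4.7, 3.5, 2.9, 4.2, 3.1],
    "unstructured_chat":    [4.6, 3.0, 4.1, 3.8, 2.7, 4.3, 3.2, 4.0],
    "performance_rating":   [3.5, 4.3, 2.7, 4.6, 3.8, 2.6, 4.1, 3.3],
    "group":                ["A", "B", "A", "B", "A", "B", "A", "B"],
})

components = ["structured_interview", "work_sample", "unstructured_chat"]

# Point 1: estimate each component's predictive validity as its
# correlation with later performance, then derive normalized weights
# from the positive correlations (a deliberately simple scheme).
validity = df[components].corrwith(df["performance_rating"])
weights = validity.clip(lower=0)
weights = weights / weights.sum()
print("Predictive validity per component:\n", validity.round(2))
print("Suggested weights:\n", weights.round(2))

# Point 3: track potential adverse impact with the four-fifths rule by
# comparing each group's selection rate (weighted score above a
# hypothetical cutoff) against the highest group's rate.
df["weighted_score"] = df[components].mul(weights).sum(axis=1)
cutoff = 3.5  # hypothetical passing score
selected = df.assign(passed=df["weighted_score"] >= cutoff)
rates = selected.groupby("group")["passed"].mean()
impact_ratios = rates / rates.max()
print("Selection rates by group:\n", rates.round(2))
print("Impact ratios (flag if below 0.8):\n", impact_ratios.round(2))
```

In practice, you would run this on real outcome data, use proper validation rather than raw correlations, and treat any impact ratio below 0.8 as a prompt for review rather than an automatic verdict.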

All of these capabilities already exist, allowing talent leaders and recruiters to become more data-driven and focus on high-impact activities. Companies that are late to adopt them will fall behind the curve.