The term "Pray to get the job" just got a whole new meaning…
#bias #informedecisions
🧠 Test Your Perception 🧠
Score the following statements on a scale of 1-5 based on how profound you think they are (1 - not at all, 5 - very profound):
A. Today, science tells us that the essence of nature is grace.
B. Life is the driver of potential. We live, we dream, we are reborn.
.
.
Surprise! These statements were actually generated by an AI “New Age Bullshit Generator.”
The tendency to see such statements as profound is known as “Bullshit Receptivity.”
In their book “Noise,” the late Daniel Kahneman and his colleagues reveal that some people are more susceptible to being impressed by seemingly profound statements that are actually shallow or meaningless.
If you fell for this, don’t worry—it might just mean you’re in a good mood! Research shows that people in a good mood are more receptive to bullshit and less likely to spot fraud or misleading information.
The key takeaway? Don’t come grumpy to an interview, but be aware that some candidates excel at storytelling and speaking in slogans.
As interviewers, our job is to break down high-level statements like “my mission is to inspire and deliver” into concrete, real-life examples.
How do you spot and handle bullshit in interviews? Share your strategies in the comments.
#bias #interviews #informedecisions
After earning my license as an organizational psychologist, I eagerly accepted my first job as a Manager of Impact Evaluation for an NGO helping disadvantaged teens earn their GED.
Fueled by a deep connection to the organization's mission, I traveled across the country to administer surveys in less-than-friendly areas.
As my workload grew, I requested to hire a research assistant. After a meticulous selection process, I found the ideal candidate and presented her to the CEO.
But just five minutes into their conversation, he pulled me aside and declared that I couldn’t hire her because she was not "one of ours."
It took me a moment to comprehend what he meant. When I looked around, I noticed that all the employees were of Middle-Eastern descent, while the candidate was an Ashkenazi Jew—a Jew of Central and Eastern European descent.
This revelation hit me like a ton of bricks—until that moment, I had never experienced hiring discrimination firsthand.
I suddenly recalled the CEO's subtle racist jokes about Ashkenazi Jews during team meetings, which I had previously dismissed as quirks.
And the irony, oh the bitter irony, of an NGO CEO, committed to aiding disadvantaged populations, perpetuating the same biases he claimed to fight, just in reverse.
I was too inexperienced and too stunned to challenge him.
The candidate wasn't hired, and I couldn't continue working in such an environment. I resigned, but the experience haunted me.
In retrospect, this pivotal moment shaped my future career choices. It ignited a passion for promoting fairness and eliminating bias in hiring processes.
This experience also serves as a cautionary tale against fighting bias with reversed bias—one of the key reasons why DEI initiatives face backlash today.
P.S. What discrimination stories have you experienced as TA professionals or candidates?
Repost this to raise awareness about hiring discrimination.
♻️ Thank you!
#bias #hiring #informedecisions
(This could be a game-changer for HR and AI ethics)
The EEOC is challenging HR software firm Workday over claims of discriminatory AI in hiring processes.
In a significant legal move, the EEOC filed an amicus brief urging a federal judge to allow a class action lawsuit against Workday to proceed.
The lawsuit, initiated by Derek Mobley, alleges that Workday's AI-powered software screened out job applicants based on race, age, and health conditions. This led to over 100 rejections for Mobley, a Black man over 40 with anxiety and depression.
The EEOC contends that Workday's algorithms operate similarly to traditional employment agencies, making them subject to Title VII of the Civil Rights Act of 1964. They argue that by using Workday's platform, employers are effectively outsourcing their hiring decisions, which could lead to biased outcomes.
This case highlights the growing concerns around AI in hiring. While AI can streamline recruitment, it's crucial to ensure these tools do not perpetuate existing biases. With approximately 80% of U.S. employers using AI in hiring, this lawsuit underscores the need for vigilance and fairness in tech-driven processes.
See the link to the full article: https://www.reuters.com/legal/transactional/eeoc-says-workday-covered-by-anti-bias-laws-ai-discrimination-case-2024-04-11/
♻️ Repost to spread awareness!
#bias #hiring #AI #informedecisions
Hiring diverse individuals just to check a box is not only discriminatory but also undermines the very groups it aims to support, making them seem less qualified.
True diversity is not a trade-off; there are plenty of qualified, diverse candidates. Organizations must invest in bringing them into the hiring pipeline—and then make sure they don't fall through the bias cracks of the hiring process.
#dei #bias #informedecisions
Would love your thoughts: what do you think it's aiming to assess, and does it do so successfully?
#recruitment #bias #informedecisions
A cautionary tale on why we shouldn't blindly rely on AI when it comes to people decisions...
#AI #bias #worktech #informedecisions
Rite Aid has been banned from using facial recognition technology for five years following a Federal Trade Commission (FTC) ruling. The decision comes after the FTC found that Rite Aid's AI system, used between 2012 and 2020, incorrectly identified consumers, especially women and people of color, as shoplifters. This led to various forms of harassment, including wrongful accusations, police confrontations, and public humiliation.
The FTC's complaint highlights Rite Aid's failure to implement safeguards against these harms. The technology produced numerous false positives, misidentifying individuals based on poor-quality images and inaccurate matches. The system was more prone to errors in stores located in predominantly Black and Asian communities.
As part of the settlement, Rite Aid must now establish comprehensive procedures to mitigate risks when using biometric data and discontinue any such technology if it cannot ensure consumer safety. The company is also required to enhance its information security program and comply with a 2010 Commission data security order, which it previously violated.
See the link to the full article in the first comment.
#AI #bias #informedecisions
Here is an example of ChatGPT gender bias.
Many thanks to @Keith McNulty for pointing this out and sharing some sample prompts.
#AI #bias #informedecisions
One of my managers decided to freestyle an interview and asked a candidate, “If I invite you to a potluck, what are you bringing?” The candidate said, “Mashed potatoes.”
Well, wasn’t that insightful into how they would perform as a Digital Marketing Specialist?
Source: the web
#interviews #bias #informedecisions