
Woman’s experience scheduling an EEG highlights unconscious bias against textured hair

Though her twists left her scalp exposed for the procedure, the facility still insisted she take them out, which would actually make her scalp harder to reach.


Getting a medical procedure done can be scary, or at the very least nerve-wracking, no matter how many times you've had it done. It's something that's outside of your normal routine and you're essentially at the mercy of the medical facility and providers. Most of the time, the pre-procedure instructions make sense, and if something catches you by surprise, it's usually easily explained.

Sadé Naima recently had an experience while attempting to get an EEG that wasn't easily explained away. In fact, the entire situation didn't make sense to the TikTok creator, who experiences migraines. Naima uploaded a video to the platform explaining what happened after her doctor referred her for an MRI and an EEG.

An MRI uses a magnetic field to generate images of the brain, while an EEG uses electrodes attached to the scalp to record the brain's electrical activity as brain waves.


Since Naima was having consistent migraines, these tests seemed medically necessary to make sure nothing more serious was going on. So imagine her surprise when the pre-procedure paperwork for the EEG said her hair had to be loose, a requirement that at first glance may seem harmless and inclusive of everyone. But kinky textured hair does not behave the same way as straight hair when it's "loose."

"I received a document saying to prepare for the EEG—I can't have weave, braids, no hair oil, no conditioner, like nothing in your hair," Naima explains. "And how as a Black woman that is so exclusionary for coarse and thick hair. To literally have no product in your hair and show up with it loose, you're not even reaching my scalp with that."

@sadenaima

update: someone else from the medical center called me & suggested i show my hair to/talk with the technician so tbd i guess ... It's 2023, this makes no sense that the technology isn't inclusive or that the practioners aren't educated / prepared for diverse experiences. Not too much on the appearance 😂 #migraine #eeg #eegblackhair #blackhair #nyc #neurology #racialbiasinmedicine #racialbias #fyp #foryoupage

When kinky textured hair is "loose" without product, it's generally in an afro, which makes the scalp extremely difficult to reach without a tool to part the hair and hold it out of the way. Naima called the facility for clarification and explained that her hair was in twists with her scalp exposed. She assured the woman on the other end that her hair would be clean and free of product, but that it would be easier for everyone involved if it stayed in twists.

Naima went so far as to send an email with multiple pictures showing that her scalp was indeed easily accessible with her protective style in place. But the unnamed woman told her the EEG couldn't be completed if her hair was in twists. That prompted the question, "What about people with locs?" to which the woman replied that they wouldn't be able to get the procedure either.

@sadenaima

I sent out an email to the center & their HR and will see where that takes me to start! Thank you everyone! 🧡 i love that this has been able to help & inspire people. #racialbias #medicalracialbias #minitwists #migraine #eegblackhair #eegnaturalhair #neurology #locs #eeg #fyp

The frustrated patient searched the internet for the best way to have an EEG with kinky textured hair and came across information written by a Black doctor who was also trying to find an answer. Currently, there doesn't seem to be much guidance on how to properly perform an EEG on a patient with textured hair, even though many protective styles provide the direct access to the scalp the procedure needs.

So while policies like these aren't meant to be discriminatory, it's clear they can cause unintended problems. In the end, Naima's EEG was rescheduled, but after speaking with the technician who performs the procedure, she was assured her hair would not be an issue. Hopefully, the results of her EEG are favorable and she has a much more pleasant experience preparing for it.

"Computer computer, on my screen — what's the fairest face you've ever seen?"

Presumably, that's what the folks at Youth Laboratories were thinking when they launched Beauty.AI, the world's first international beauty contest judged entirely by an advanced artificial intelligence system.

More than 600,000 people from across the world entered the contest, which was open to anyone willing to submit a selfie taken in neutral lighting without any makeup.


According to the scientists, their system would use algorithms based on facial symmetry, wrinkles, and perceived age to define "objective beauty" — whatever that means.

This murderous robot understands my feelings. GIF via CNBC/YouTube.

It's a pretty cool idea, right?

Removing all the personal taste and prejudice from physical judgment and allowing an algorithm to become the sole arbiter and beholder of beauty would be awesome.

What could possibly go wrong?

"Did I do that?" — These researchers, probably. GIF from "Family Matters."

Of the 44 "winners" the computer selected, seven were Asian and one was Black. The rest were white.

This is obviously proof that white people are the most objectively attractive race, right? Hahaha. NO.

Instead, it proves (once again) that human beings have unconscious biases, and that it's possible to pass those same biases on to machines.

Basically, if your algorithm is trained mostly on white faces and 75% of the people who enter your contest are white Europeans, the white faces are going to win on sheer probability, even if the computer is told to ignore skin tone.

Plus, most cameras are literally optimized for light skin, which probably didn't help either. In fact, the AI actually discarded some entries it deemed "too dim."
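To make that training-skew point concrete, here's a toy sketch in Python. It is my illustration, not anything Youth Laboratories has published: a scorer defines "beauty" as closeness to the average face in its training pool, never sees a group label, and still hands nearly all of its 44 winning slots to the majority group. The feature dimensions and group averages are made up for the example.

```python
# Toy sketch (not Beauty.AI's actual model): a scorer trained on a skewed pool
# favors the majority group even though group labels never enter the features.
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)

def sample_faces(n, group_mean):
    # Hypothetical 5-dimensional "facial feature" vectors for one group.
    return rng.normal(loc=group_mean, scale=1.0, size=(n, 5))

# Training pool: 75% group A, 25% group B, mirroring the contest's entrant skew.
group_a_mean, group_b_mean = np.zeros(5), np.full(5, 2.0)
training_pool = np.vstack([sample_faces(750, group_a_mean),
                           sample_faces(250, group_b_mean)])

# "Objective beauty" here is simply closeness to the average training face.
prototype = training_pool.mean(axis=0)

# Judge a perfectly balanced entrant pool, to isolate the model's own bias.
entrants = np.vstack([sample_faces(500, group_a_mean),
                      sample_faces(500, group_b_mean)])
labels = ["A"] * 500 + ["B"] * 500

scores = -np.linalg.norm(entrants - prototype, axis=1)  # higher = "prettier"
winners = [labels[i] for i in np.argsort(scores)[-44:]]  # top 44, like Beauty.AI

print("Winners by group:", Counter(winners))
# Typically prints almost all "A" winners, despite the 50/50 entrant pool.
```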

So, because of shoddy recruitment, a non-diverse team, internal biases, and a whole slew of other reasons, these results were ... more than a little skewed.

Thankfully, Youth Laboratories acknowledged this oversight in a press release. They're delaying the next stage in their robotic beauty pageant until they iron out the kinks in the system.

Ironically, Alex Zhavoronkov, their chief science officer, told The Guardian, "The algorithm ... chose people who I may not have selected myself."

Basically, their accidentally racist and not-actually-objective robot also had lousy taste. Whoops.

Ooooh baby, racist robots! Yeah! GIF from Ruptly TV/YouTube.

This raises an important question: As cool as it would be to create an "objective" robot or algorithm, is it really even possible?

The short answer is: probably not. But that's because people aren't actually working on it yet — at least, not in the way they claim to be.

As cool and revelatory as these cold computer calculations could potentially be, getting people to acknowledge and compensate for their unconscious biases when they build the machines could be the biggest hurdle. Because what you put in determines what you get out.

"While many AI safety activists are concerned about machines wiping us out, there are very few initiatives focused on ensuring diversity, balance, and equal opportunity for humans in the eyes of AI," said Youth Laboratories Chief Technology Officer Konstantin Kiselev.

Of course you like that one. GIF from "Ex Machina."

This is the same issue we've seen with predictive policing, too.

If you tell a computer that Black and Hispanic people are more likely to be criminals, for example, it's going to hand you an excuse for profiling that appears, on the surface, to be objective.

But in actuality, it just perpetuates the same racist system that already exists, except now the police can blame the computer instead of taking responsibility themselves.
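Here's an equally simple sketch of that feedback loop, with made-up numbers rather than any real department's data: two neighborhoods have identical underlying crime rates, but the arrest records the system learns from start out skewed toward one of them.

```python
# Toy feedback loop (purely illustrative numbers, no real data): patrols are
# allocated from past recorded arrests, and more patrols produce more records.
import random

random.seed(1)
true_crime_rate = {"A": 0.05, "B": 0.05}   # identical by construction
recorded_arrests = {"A": 50, "B": 100}     # the historical data starts out biased

for month in range(24):
    total = sum(recorded_arrests.values())
    # The "objective" model: send 100 patrols in proportion to past arrests.
    patrols = {n: round(100 * recorded_arrests[n] / total) for n in ("A", "B")}
    for n in ("A", "B"):
        # More patrols means more crime gets observed, not that more crime exists.
        for _ in range(patrols[n]):
            if random.random() < true_crime_rate[n]:
                recorded_arrests[n] += 1

share_b = recorded_arrests["B"] / sum(recorded_arrests.values())
print(f"Share of recorded arrests in B after two years: {share_b:.0%}")
# The initial skew never washes out; the data keeps "confirming" itself.
```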

"There is no justice. There is ... just us." GIF from "Justice League."

Of course, even if the Beauty.AI programmers did find a way to compensate for their unconscious biases, they'd still have to deal with the fact that, well, there's just no clear definition for "beauty."

People have been trying to unlock that "ultimate secret key" to attractiveness since the beginning of time, and all kinds of theories abound: Is attractiveness all about the baby-makin', or is it some other evolutionary advantage? Is it, as Youth Laboratories suggests, that "healthy people look more attractive despite their age and nationality"?

Also, how much of beauty is strictly physical, as opposed to psychological? Is it all just some icky and inescapable Freudian impulse? How much is our taste influenced by what we're told is attractive, as opposed to our own unbiased feelings?

Simply put: Attractiveness serves as many different purposes as there are factors that define it. Even if this algorithm had somehow managed to unlock every possible component of beauty, the project was flawed from the start: humans can't even agree on a single attractive quality that matters most to all of us.

GIF from "Gilligan's Island."

The takeaway here? Even our technology starts with our humanity.

Rather than creating algorithms to justify our prejudices or preferences, we should focus our energies on making institutional changes that bring in more diverse voices to help make decisions. Embracing more perspectives gives us a wider range of beauty — and that's better for everyone.

If your research team or boardroom or city council actually looks like the world it's supposed to represent, chances are it's going to produce results that look the same way, too.

You meet all the qualifications. Your résumé is perfect. But you don't get a callback.

Maybe there was a wealth of qualified candidates. Or maybe the reviewer saw your name and just "had a feeling." Too often, it's the latter: unconscious bias plays a role in hiring decisions, and it disproportionately affects women and people of color.


Photo by WOCinTech Chat/Flickr (cropped).

That's why the state government of Victoria, Australia, recently launched an 18-month experiment with blind applications.

Yeah, it's basically the human-resources equivalent of "The Voice."

GIF via "The Voice."


The trial will evaluate which pieces of personal information, such as age, gender, name, or location, should be shown to or hidden from reviewers during the application process. Government departments and agencies, as well as a few private companies in the region, will take part (the latter will receive financial benefits for participating).
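In practical terms, a blind application is just a redaction step that runs before a reviewer sees anything. Here's a minimal sketch of the idea; the field names and the hidden-field list are placeholders, not the actual design of Victoria's trial.

```python
# Minimal sketch of blind screening: strip identifying fields before review.
# The fields chosen here are hypothetical, not Victoria's actual criteria.
from dataclasses import dataclass, asdict

@dataclass
class Application:
    name: str
    age: int
    gender: str
    suburb: str
    experience: str
    skills: list

# Personal details hidden from reviewers during the first screening pass.
HIDDEN_FIELDS = {"name", "age", "gender", "suburb"}

def blind_copy(app: Application) -> dict:
    """Return only the fields a reviewer is allowed to see."""
    return {k: v for k, v in asdict(app).items() if k not in HIDDEN_FIELDS}

candidate = Application(
    name="Sam Example", age=42, gender="F", suburb="Footscray",
    experience="8 years in civil engineering",
    skills=["AutoCAD", "project management"],
)
print(blind_copy(candidate))
# {'experience': '8 years in civil engineering', 'skills': ['AutoCAD', 'project management']}
```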

But of course, Australia isn't the only nation with this problem. Not even close.

Here's why hiring managers and recruiters everywhere need to consider blind applications.

1. Because just using your given name can make your job hunt dramatically longer.

One study found that candidates with African-American-sounding names were 50% less likely to move forward to the interview stage than candidates with white-sounding names, even with identical résumés. And research from the Australian National University revealed that to get as many interviews as an applicant with an Anglo-sounding name, a person with a Middle Eastern name would have to submit 64% more applications. A person with a Chinese-sounding name? 68% more.

Photo by iStock.

2. Because even though we've shown them time and time again, some people still don't think women can get the job done.

In the 1970s and '80s, orchestras around the world began using blind auditions to fill their ranks. Musicians perform behind a screen to disguise themselves, even removing their shoes if the sound could be a giveaway.

Photo by Miguel Medina/AFP/Getty Images.

When orchestras use the screen — even just for preliminary auditions — women are 50% more likely to make it to the final round of judging. Many attribute the increase in women in performance ensembles over the years to these blind auditions.

A recent (non-peer-reviewed) study of women coders got similar results. When those judging their work didn't know they were women, the reviewers were more likely to accept their suggestions.

Photo by WOCinTech Chat/Flickr (cropped). 

3. Because even your neighborhood can make people think twice about hiring you.

As if name and gender weren't enough, individuals making hiring decisions sometimes judge applicants on their address too. Regardless of race, people living in more affluent, better-educated neighborhoods receive more callbacks for interviews.

Photo by iStock.

Our biases continue into the interview process. Luckily, there are ways to overcome them there too.

Some argue that interviews themselves are simply exercises in confirmation bias and aren't the best way to hire people. As Ori Brafman, behavioral expert and co-author of the book "Sway," told The New York Times: "Time and again, the research shows that interviews are poor predictors of job performance because we tend to hire people we think are similar to us rather than those who are objectively going to do a good job."

His suggestion? Ditch the "first-date" model often used in interviews and stick to the facts: examples of past performance.

Photo by iStock.

Unconscious bias is just that: unconscious. But there are steps we can take to give qualified applicants a fair shot.

All of us, regardless of gender, race, or age, have biases. It's up to us to acknowledge them, identify them, and take active steps to keep them out of the decision-making process.

It's easier said than done, but like this trial run in Victoria, we have to start somewhere.

Photo by iStock.