Scientists launched a robot-judged beauty contest. What could go wrong? A lot.

"Computer computer, on my screen — what's the fairest face you've ever seen?"

Presumably, that's what the folks at Youth Laboratories were thinking when they launched Beauty.AI, the world's first international beauty contest judged entirely by an advanced artificial intelligence system.

More than 600,000 people from across the world entered the contest, which was open to anyone willing to submit a selfie taken in neutral lighting without any makeup.


According to the scientists, their system would use algorithms based on facial symmetry, wrinkles, and perceived age to define "objective beauty" — whatever that means.

This murderous robot understands my feelings. GIF via CNBC/YouTube.

It's a pretty cool idea, right?

Removing all the personal taste and prejudice from physical judgment and allowing an algorithm to become the sole arbiter and beholder of beauty would be awesome.

What could possibly go wrong?

"Did I do that?" — These researchers, probably. GIF from "Family Matters."

Of the 44 "winners" the computer selected, seven of them were Asian, and one was black. The rest were white.

This is obviously proof that white people are the most objectively attractive race, right? Hahaha. NO.

Instead, it proves (once again) that human beings have unconscious biases, and that it's possible to pass those same biases on to machines.

Basically, if your algorithm is trained mostly on white faces and 75% of the people who enter your contest are white Europeans, the white faces are going to win by sheer probability, even if the computer is told to ignore skin tone.

Plus, most cameras are literally optimized for light skin, so that probably didn't help matters, either. In fact, the AI actually discarded some entries that it deemed to be "too dim."
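To make that concrete, here's a tiny, purely hypothetical sketch in Python (the real Beauty.AI code isn't public, and every feature name and number below is invented) of how a model trained mostly on one group's photos can end up docking everyone else, even though "race" and "skin tone" never appear as inputs:

```python
# A made-up illustration, not Beauty.AI's actual system: score "faces" on
# symmetry and smoothness, but let image brightness sneak into the human
# ratings the model learns from.
import numpy as np

rng = np.random.default_rng(0)

def make_faces(n, brightness_mean):
    # Fake features: two things the judges supposedly care about, plus
    # image brightness, which tracks skin tone and camera exposure.
    symmetry = rng.normal(0.0, 1.0, n)
    smoothness = rng.normal(0.0, 1.0, n)
    brightness = rng.normal(brightness_mean, 0.3, n)
    return np.column_stack([symmetry, smoothness, brightness])

# 75% of the training photos come from the brighter-photo group.
faces = np.vstack([make_faces(750, brightness_mean=1.0),
                   make_faces(250, brightness_mean=-1.0)])

# The human ratings the model learns from happen to correlate with
# brightness -- that's the unconscious bias getting baked in.
quality = faces[:, 0] + faces[:, 1]
ratings = quality + 0.8 * faces[:, 2] + rng.normal(0.0, 0.5, len(faces))

# Fit a plain least-squares scorer: score = weights . features.
weights, *_ = np.linalg.lstsq(faces, ratings, rcond=None)

# Two contestants with identical symmetry and smoothness, different brightness.
bright_photo = np.array([1.0, 1.0, 1.0])
dark_photo = np.array([1.0, 1.0, -1.0])
print("bright photo:", bright_photo @ weights)  # scores noticeably higher
print("dark photo:  ", dark_photo @ weights)    # docked for brightness alone
```

Even with skin tone nowhere in the feature list, the model learns to reward image brightness, because the people who produced the training ratings did.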

So, because of shoddy recruitment, a non-diverse team, internal biases, and a whole slew of other reasons, these results were ... more than a little skewed.

Thankfully, Youth Laboratories acknowledged this oversight in a press release. They're delaying the next stage in their robotic beauty pageant until they iron out the kinks in the system.

Ironically, Alex Zhavoronkov, their chief science officer, told The Guardian, "The algorithm ... chose people who I may not have selected myself."

Basically, their accidentally racist and not-actually-objective robot also had lousy taste. Whoops.

Ooooh baby, racist robots! Yeah! GIF from Ruptly TV/YouTube.

This raises an important question: As cool as it would be to create an "objective" robot or algorithm, is it really even possible?

The short answer is: probably not. But that's largely because no one is actually working on the bias problem yet, at least not in the way they claim to be.

As cool and revelatory as these cold computer calculations could potentially be, getting people to acknowledge and compensate for their unconscious biases when they build the machines could be the biggest hurdle. Because what you put in determines what you get out.

"While many AI safety activists are concerned about machines wiping us out, there are very few initiatives focused on ensuring diversity, balance, and equal opportunity for humans in the eyes of AI," said Youth Laboratories Chief Technology Officer Konstantin Kiselev.

Of course you like that one. GIF from "Ex Machina."

This is the same issue we've seen with predictive policing.

If you tell a computer that black and Hispanic people are more likely to be criminals, for example, it's going to hand you an excuse for profiling that looks objective on the surface.

But in actuality, it just perpetuates the same racist system that already exists, except now the police can blame the computer instead of taking responsibility themselves.
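Here's an equally hypothetical back-of-the-envelope sketch of that loop (every number is invented; this is not any real department's system): two neighborhoods with identical actual crime rates, a biased starting prediction, and crime that only gets recorded where officers are sent to look for it.

```python
# Invented numbers, purely to show the feedback loop -- not a real system.
true_crime_rate = {"A": 0.05, "B": 0.05}   # identical on the ground
predicted_risk = {"A": 0.60, "B": 0.40}    # biased starting belief
recorded_crimes = {"A": 0.0, "B": 0.0}

total_patrols = 100
for year in range(5):
    for hood in ("A", "B"):
        # Patrols get allocated according to the "objective" prediction...
        patrols = total_patrols * predicted_risk[hood]
        # ...and crime only gets recorded where someone is looking for it.
        recorded_crimes[hood] += patrols * true_crime_rate[hood]
    # Next year's prediction is re-fit on that lopsided record.
    total = sum(recorded_crimes.values())
    predicted_risk = {h: recorded_crimes[h] / total for h in recorded_crimes}
    print(f"year {year}: predicted risk = {predicted_risk}")

# The 60/40 split never corrects itself, even though the neighborhoods
# are identical -- the data just keeps "confirming" the original bias.
```

The prediction never corrects itself, because the only data it ever sees is the data its own patrol assignments generated.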

"There is no justice. There is ... just us." GIF from "Justice League."

Of course, even if the Beauty.AI programmers did find a way to compensate for their unconscious biases, they'd still have to deal with the fact that, well, there's just no clear definition for "beauty."

People have been trying to unlock that "ultimate secret key" to attractiveness since the beginning of time, and theories abound: Is attractiveness all about the baby-makin', or is it some other evolutionary advantage? Is it, as Youth Laboratories suggests, that "healthy people look more attractive despite their age and nationality"?

Also, how much of beauty is strictly physical, as opposed to psychological? Is it all just some icky, inescapable Freudian impulse? How much is our taste influenced by what we're told is attractive, as opposed to our own unbiased feelings?

Simply put: Attractiveness serves as many different purposes as there are factors that define it. Humans can't even unanimously pick a single attractive quality that matters most to all of us, so even if this algorithm had somehow managed to unlock every possible component of beauty, the project was flawed from the start.

GIF from "Gilligan's Island."

The takeaway here? Even our technology starts with our humanity.

Rather than creating algorithms to justify our prejudices or preferences, we should focus our energies on making institutional changes that bring in more diverse voices to help make decisions. Embracing more perspectives gives us a wider range of beauty — and that's better for everyone.

If your research team or boardroom or city council actually looks like the world it's supposed to represent, chances are it's going to produce results that look the same way, too.
