upworthy


I Googled to see if Maria Von Trapp remarried after Georg died. The result was horrifying.

Having blatantly false information as the top search result is actually a huge problem for us all.

Google's AI Overview sometimes gets basic facts wrong.

With AI being implemented seemingly everywhere for seemingly everything these days, it wasn't surprising when Google launched its "AI Overview" in the spring of 2024. With messaging like "Generative AI in Search: Let Google do the searching for you" and "Find what you're looking for faster and easier with AI overviews in search results," the expectation is that AI will parse through the search results for you and synopsize the answer.

That sounds great. The problem is, its synopsis is too often entirely wrong. We're not talking just a little misleading or incomplete, but blatantly, factually false. Let me show you an example.

I recently wrote an article about the real-life love story between Maria and Georg Von Trapp, and as part of my research, I found out Georg died 20 years after they married. I hadn't seen anything about Maria remarrying, so I Googled whether she had. Here's what the AI Overview said when I searched last week:

This is what Google's AI Overview said when I asked how many times Maria Von Trapp had been married. It's wrong. Screenshot via Google

"Maria Von Trapp married twice. First, she married Georg Von Trapp in 1927 and they had 10 children together. After Georg's death, she married Hugh David Campbell in 1954 and had 7 daughters with him. Later, she also married Lynne Peterson in 1969 and had one son and daughter with him."

Something about that didn't add up—and it wasn't just how it said she married twice but then listed three spouses. Maria Von Trapp was born in 1905, so according to the AI Overview, she remarried at 49 years old and had seven more children, and then married again at 64 years old and had another two children. That seems…unlikely.

Did Maria Von Trapp have two children in her mid-60s? No. Giphy

So I clicked the link icon on the AI Overview, which took me to the Maria Von Trapp Wikipedia page. On that page, I found a chart where the extra two spouses were listed—but they very clearly weren't hers. Hugh David Campbell was the husband of one of her daughters. Lynn Peterson was the wife of one of her sons.

The fact is that Maria never remarried after Georg died. If I had just run with the AI Overview, I would have gotten this very basic fact about her life completely wrong. And it's not as if it pulled that information from a source that got it wrong. Wikipedia had it right; the AI Overview extrapolated the real information incorrectly.

Ironically, when I Googled "Did Maria Von Trapp remarry after Georg died?" in the middle of writing this article to see if the same result came back, the AI Overview got it right, citing the Upworthy article I wrote. (Yes, I laughed out loud.)

After my article was published, the AI Overview cited it while giving the correct answer. Screenshot via Google

This may seem like a lot of fuss over something inconsequential in the big picture, but Maria Von Trapp's marital status is not the only wrong result I've seen in Google's AI Overview. I once searched for the cast of a specific movie, and the AI Overview included a famous actor's name that I knew with 100% certainty was not in the film. I've asked it for quotes about certain subjects and found quotes that were completely made up.

Are these world-changing questions? No. Does that matter? No.

Facts should matter no matter what they are. Giphy GIF by Angie Tribeca

Objective facts are objective facts. If the AI Overview so egregiously messes up the facts about something that's easily verifiable, how can it be relied on for anything else? Since its launch, Google has had to fix major errors, like when it responded to the query "How many Muslim presidents has the U.S. had?" with the very wrong answer that Barack Obama had been our first Muslim president.

Some people have "tricked" Google's AI into giving ridiculous answers by simply asking it ridiculous questions, like "How many rocks should I eat?" but that's a much smaller part of the problem. Most of us have come to rely on basic, normal, run-of-the-mill searches on Google for all kinds of information. Google is, by far, the most used search engine, with 79% of the search engine market share worldwide as of March 2025. The most relied upon search tool should have reliable search results, don't you think?

Even the Google AI Overview itself says it's not reliable:

Google's AI Overview doesn't even trust itself to be accurate. Screenshot via Google

As much as I appreciate how useful Google's search engine has been over the years, launching an AI feature that might just make things up and put them at the top of the search results feels incredibly irresponsible. And the fact that it still spits out completely (yet unpredictably) false results about objectively factual information over a year later is unforgivable, in my opinion.

We're living in an era where people are divided not only by political ideologies but by our very perceptions of reality. Misinformation has been weaponized more and more over the past decade, and as a result, we often can't even agree on basic facts, much less complex ideas. As the public's trust in expertise, institutions, legacy media, and fact-checking has dwindled, people have turned to alternative sources to get information. Unfortunately, those sources come with varying levels of bias and reliability, and our society and democracy are suffering because of it. Having Google spit out false search results at random is not helpful on that front.

AI has its place, but this isn't it. My fear is that far too many people assume the AI Overview is correct without double-checking its sources. And if people have to double-check it anyway, the thing is of no real use—just have Google give links to the sources like they used to and end this bizarre experiment with technology that simply isn't ready for its intended use.

This article originally appeared in June.

ChatGPT

Some are a little…too accurate.

You’ve probably heard about how dogs tend to resemble their owners (or rather, how owners tend to pick dogs that resemble them). But this new TikTok trend takes that concept to whole new levels.

Similar to the previous trend of turning yourself into a Barbie doll or an action figure, people are now using ChatGPT to transform their pups into humans. Some are eerily realistic, others are laughably weird, but all are incredibly entertaining.

Below is one particularly viral video, which opens on a picture of a red Irish setter demurely lying on a bed. Cut to an equally demure redhead in a green sweater, even sporting a dog bone necklace similar to the collar in the previous image.

@roisintheredsetter

what do we think? 😂 the necklace 😂 #dog #chatgpt #redsetter #irishsetter #dogsoftiktok


“Reminds me of Helly from Severance,” one person noted. Another quipped, “oooo she classy.”

Here’s another one for pug lovers (technically this is a Brussels Griffon). The facial expression is uncanny.

@juliac0p3land


“Your dog is Tyrion Lannister,” a Game of Thrones fan wrote.

And yes, this fun is not exclusive to dogs. Any species can go through this AI Animorphing. Over on Reddit, a calico became a goth rocker sporting an orange streak, and three baby ducks became three yellow-clad toddlers, just to name a few.



Going back to TikTok, someone even human-ified a chicken. And it's every bit as great as you’d hope it would be.


@themissysmith



Perhaps the greatest thing about this is how easy it is to do.

How to turn your dog into a human with AI using ChatGPT

  1. First, you need to go to the ChatGPT website or app. Then, log in or create an account if you haven’t got one already.
  2. After that, press the + and upload a photo of your dog that you want to turn into a human. Make sure it’s a clear, high-quality image.
  3. Underneath the picture, write this prompt with the correct gender: “What would my *male/female* dog look like as a person?”
  4. Now, all you need to do is click the arrow to send the message and wait for ChatGPT to turn your dog into a human.
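For the more code-inclined, the same steps can be approximated programmatically. Here's a minimal sketch using the OpenAI Python SDK instead of the ChatGPT app; the model name, image URL, and prompt wording are assumptions for illustration, not part of the official how-to:

```python
def build_request(image_url: str, gender: str) -> dict:
    """Build a chat request asking what a dog would look like as a person.

    Mirrors the steps above: one text prompt (step 3) plus one attached
    photo (step 2), bundled into a single user message.
    """
    prompt = f"What would my {gender} dog look like as a person?"
    return {
        "model": "gpt-4o",  # assumed vision-capable model
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    # Step 2: a clear, high-quality image works best.
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }


# Actually sending it requires the `openai` package and an API key:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     **build_request("https://example.com/my-dog.jpg", "female")
# )
```

This only builds the request; the commented-out lines show where the real API call would go.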

Out of curiosity I did this with my cat, Clyde.

Before generating an image, ChatGPT was kind enough to imagine his personality, which was quite enjoyable. The "he probably drinks coffee or herbal tea" part was my favorite.

After a few minutes, a human version of Clyde appeared…who is apparently Hozier.

To get a little more specific, I then added some things about his personality: he’s affectionate, sweet, soulful, and sometimes a bit mischievous. Here's what ChatGPT came up with:

So…happy Hozier. Honestly it’s pretty spot on.

As with most things ChatGPT, it helps to be as specific as possible. Lucky for pet owners, they could talk about their fur babies all day! With all the unsavory news regarding AI, it’s nice to have something pretty wholesome thrown in the mix.

Teddy Roosevelt, Ronald Reagan, Joe Biden and Barack Obama all having a laugh.

Like it or not, we’ve recently entered the age of artificial intelligence, and although that may be scary for some, one guy in Florida thinks it’s a great way to make people laugh. Cam Harless, the host of The Mad Ones podcast, used AI to create portraits of every U.S. president looking “cool” with a mullet hairstyle, and the results are hilarious.

The mullet is a notorious hairdo known as the "business in the front, party in the back" look. It's believed that the term "mullet" was coined by the rap-punk-funk group Beastie Boys in 1994.

While cool is in the eye of the beholder, Harless seems to believe it means looking like a cross between Dog the Bounty Hunter and Kenny Powers from “Eastbound and Down.”

Harless made the photos using Midjourney, an app that creates images from textual descriptions. "I love making AI art," Harless told Newsweek. "Often I think of a prompt, create the image and choose the one that makes me laugh the most to present on Twitter and have people try and guess my prompt."

"The idea of Biden with a mullet made me laugh, so I tried to make one with him and Trump together and that led to the whole list of presidents,” he continued.

Harless made AI photos of all 46 presidents with mullets and shared them on Twitter, and the response has been tremendous. His first photo of Joe Biden with a mullet has nearly 75,000 likes and counting.

Here’s our list of the 14 best presidents with mullets. Check out Harless' thread here if you want to see all 46.

Joe Biden with an incredible blonde mane and a tailored suit. This guy takes no malarkey.

Donald Trump looking like a guy who has 35 different pairs of stonewashed jeans in his closet at Mar-a-Lago.

Barack Obama looking like he played an informant on "Starsky and Hutch" in 1976.

George H.W. Bush looking like he plays bass in Elvis's backing band at the International Hotel in Vegas in '73.

Gerald Ford looking like the last guy on Earth that you want to owe money.

"C'mon down and get a great deal at Dick Nixon's Chrysler, Dodge, Jeep and Ram, right off the I-95 in Daytona Beach."

"Who you calling Teddy? That's Theodore Roosevelt to you."

Grover Cleveland is giving off some serious steampunk vibes here.

Pray you never key Chester A. Arthur's Trans Am. If you know what's best for you.

Honest Abe? More like Honest Babe. Am I right?

Franklin Pierce looking like your favorite New Romantic singer from 1982. Eat your heart out, Adam Ant.

"Daniel Day Lewis stole my look in 'Last of the Mohicans.'" — John Tyler

Many have tried the tri-level mullet but few pulled it off as beautifully as James Madison.

Washington's mullet was like a white, fluffy cloud of freedom.

Find more cool, mulletted U.S. presidents here.


This article originally appeared three years ago.

Parenting

2 years before ChatGPT, a kids' cartoon warned us about the environmental impacts of AI

Kids should know what AI can and can't do, and what it really costs. Doc McStuffins is on the case.

Disney Jr./YouTube, Unsplash

My 4-year-old watches so much Doc McStuffins that the show has basically become white noise in my household. It's the only thing she'll watch, so when it's on in the background, I barely notice — outside of the absurdly catchy songs living rent-free in my head 24/7. But the other day, she was watching one particular episode when I half tuned in just to see what the plot was.

If you don't have kids in this age bracket, Doc McStuffins is a 10-year-old girl who helps fix up broken toys. It's a really cute show with sweet messages on acceptance, accessibility, imagination, caring, and more. But the episode in question seemed to have a lot more going on plot-wise than usual, so I sat down and watched a little more. And pretty soon I was hooked on a fascinating story about the climate dangers of artificial intelligence and automation. I couldn't believe it!

'The Great McStuffins Meltdown' explained

Season 5, Episode 13. In the previous season, Doc McStuffins stopped running her toy-doctoring practice out of her childhood home and now works at McStuffins Toy Hospital. In this episode, the hospital has received a major upgrade with lots of fancy new equipment.

The new machines do a lot of the work that Doc and her friends used to do around the hospital. There's a machine that plays with and encourages toy pets, a Cuddle Bot that cuddles sick toys, and even a Check-Up 3000 that gives routine medical care so the Doc herself can do other things. Doc and her friends are a little bored, and the patients aren't so sure about these new machines, but mostly, things are going pretty great. The hospital is able to help more toys, faster this way.

But oh no! Doc gets a distress call from her friends at the Toyarctic, a fictional frozen land where toys live. Chunks of ice have been breaking off their glaciers. The Toyarctic is melting!

Doc and her friends quickly figure out that the Toyarctic has gotten too warm, which is causing the ice to melt. And the culprit is McStuffins Hospital. With all the new automated machines running, the hospital is using too much power and overheating the power grid, which is causing the Toyarctic's climate to warm at a dangerous rate.

I mean... woah! Doc McStuffins definitely did not have to go this hard, but I respect it.

What fascinated me most was that this episode was released in 2020 — a full two years before ChatGPT became publicly available and the AI craze kicked into hyperdrive.

Disney Jr./YouTube

AI and climate change are both inevitable parts of our children's lives. It's crucial that they learn about them both from a young age.

AI is moving so fast and changing every day. It's also publicly available to people of all ages, and so many of us don't understand how it works very well. That's a dangerous combination. Teachers and college professors everywhere are bemoaning that more and more kids are using AI to write their papers and do their homework without ever learning the material.

And, of course, the even bigger elephant in the room is climate change, which will play a major role in our children's lives as they grow into adults. Parents are desperate for some way to help their kids understand how big of a deal it is. A report from This Is Planet Ed found that "Nearly 70% of parents and caregivers surveyed in 2022 believed children's media should include age-appropriate information about climate, and 74% agreed that children's media should include climate solutions," but that less than 5% of the most popular children's shows and family films have any content or themes related to climate change.

(I'd be curious how much of the heavy lifting the GOAT Captain Planet is still doing!)

Captain Planet flying. Giphy

What's not being talked about enough — unless you're a McStuffins-head like my family is — is the relationship between AI and climate change.

In short: It's not good! AI seems like a quick and fun thing we can access on our phones and computers, but the massive data centers that perform the calculations behind this 'intelligence' consume staggering amounts of power and water, while generating heat and harmful emissions. Promises of more energy-efficient intelligence models, like DeepSeek, are murky at best.

Scientific American even writes that the environmental impact of AI goes far beyond its emissions and energy usage. What is it being used for? In many cases, to make things faster and bigger, including in industries that can harm the Earth, like logging, drilling, and fast fashion.

I was so impressed that a show popular with children as young as 2 could tackle such an urgent and important topic.

Watching it together opened doors for us to begin age-appropriate conversations with both of our kids about AI, climate change, and how the two are related. Conversations that, I'm sure, we'll be continuing to have and build on for years to come.

To be fair, Artificial Intelligence can do some good things. You see this play out on the show. Initially, it does help the hospital treat more toys! And in the real world, for all the negative environmental effects, there are people out there trying to use AI to monitor emissions and create more energy-efficient practices that might ultimately help the planet.

In the end, Doc McStuffins and her friends decide to shut down the fancy automated machines at the hospital. Not only are they hurting the toys that live in the Toyarctic, they just aren't as good as the real thing. They don't always know the right questions to ask, they don't make the patients feel safe or cared for, and of course, their machine-cuddles don't come with any real warmth or love.

If nothing else, I hope that's the message that sticks with my kids long after they've outgrown this show.