upworthy


I Googled to see if Maria Von Trapp remarried after Georg died. The result was horrifying.

Having blatantly false information as the top search result is actually a huge problem for us all.

Google's AI Overview sometimes gets basic facts wrong.

With AI being implemented seemingly everywhere for seemingly everything these days, it wasn't surprising when Google launched its "AI Overview" in the spring of 2024. With messaging like "Generative AI in Search: Let Google do the searching for you" and "Find what you're looking for faster and easier with AI overviews in search results," the expectation is that AI will parse through the search results for you and synopsize the answer.

That sounds great. The problem is, its synopsis is too often entirely wrong. We're not talking just a little misleading or incomplete, but blatantly, factually false. Let me show you an example.

I recently wrote an article about the real-life love story between Maria and Georg Von Trapp, and as part of my research, I found out Georg died 20 years after they married. I hadn't seen anything about Maria remarrying, so I Googled whether she had. Here's what the AI Overview said when I searched last week:

This is what Google AI Overview said when I asked how many times Maria Von Trapp had been married. It's wrong. (Screenshot via Google)

"Maria Von Trapp married twice. First, she married Georg Von Trapp in 1927 and they had 10 children together. After Georg's death, she married Hugh David Campbell in 1954 and had 7 daughters with him. Later, she also married Lynne Peterson in 1969 and had one son and daughter with him."

Something about that didn't add up—and it wasn't just how it said she married twice but then listed three spouses. Maria Von Trapp was born in 1905, so according to the AI Overview, she remarried at 49 years old and had seven more children, and then married again at 64 years old and had another two children. That seems…unlikely.

Did Maria Von Trapp have two children in her mid-60s? No. (Giphy)

So I clicked the link icon on the AI Overview, which took me to the Maria Von Trapp Wikipedia page. On that page, I found a chart where the extra two spouses were listed—but they very clearly weren't hers. Hugh David Campbell was the husband of one of her daughters. Lynn Peterson was the wife of one of her sons.

The fact is that Maria never remarried after Georg died. If I had just run with the AI Overview, I would have gotten this very basic fact about her life completely wrong. And it's not as though it pulled that information from a source that got it wrong. Wikipedia had it right. The AI Overview extrapolated the real information incorrectly.

Ironically, when I Googled "Did Maria Von Trapp remarry after Georg died?" in the middle of writing this article to see if the same result came back, the AI Overview got it right, citing the Upworthy article I wrote. (Yes, I laughed out loud.)

After my article was published, the AI Overview cited it while giving the correct answer. (Screenshot via Google)

This may seem like a lot of fuss over something inconsequential in the big picture, but Maria Von Trapp's marital status is not the only wrong result I've seen in Google's AI Overview. I once searched for the cast of a specific movie and the AI Overview included a famous actor's name that I knew for 100% certain was not in the film. I've asked it for quotes about certain subjects and found quotes that were completely made up.

Are these world-changing questions? No. Does that matter? No.

Facts should matter no matter what they are. (Giphy)

Objective facts are objective facts. If the AI Overview so egregiously messes up the facts about something that's easily verifiable, how can it be relied on for anything else? Since its launch, Google has had to fix major errors, like when it responded to the query "How many Muslim presidents has the U.S. had?" with the very wrong answer that Barack Obama had been our first Muslim president.

Some people have "tricked" Google's AI into giving ridiculous answers by simply asking it ridiculous questions, like "How many rocks should I eat?" but that's a much smaller part of the problem. Most of us have come to rely on basic, normal, run-of-the-mill searches on Google for all kinds of information. Google is, by far, the most used search engine, with 79% of the search engine market share worldwide as of March 2025. The most relied upon search tool should have reliable search results, don't you think?

Even the Google AI Overview itself says it's not reliable:

Google's AI Overview doesn't even trust itself to be accurate. (Screenshot via Google)

As much as I appreciate how useful Google's search engine has been over the years, launching an AI feature that might just make things up and put them at the top of the search results feels incredibly irresponsible. And the fact that it still spits out completely (yet unpredictably) false results about objectively factual information over a year later is unforgivable, in my opinion.

We're living in an era where people are divided not only by political ideologies but by our very perceptions of reality. Misinformation has been weaponized more and more over the past decade, and as a result, we often can't even agree on basic facts, much less complex ideas. As the public's trust in expertise, institutions, legacy media, and fact-checking has dwindled, people have turned to alternative sources for information. Unfortunately, those sources come with varying levels of bias and reliability, and our society and democracy are suffering because of it. Having Google spit out false search results at random is not helpful on that front.

AI has its place, but this isn't it. My fear is that far too many people assume the AI Overview is correct without double-checking its sources. And if people have to double-check it anyway, the thing is of no real use—just have Google give links to the sources like they used to and end this bizarre experiment with technology that simply isn't ready for its intended use.

This article originally appeared in June.

RumorGuard by The News Literacy Project.

The 2016 election was a watershed moment when misinformation online became a serious problem and had enormous consequences. Even though social media sites have tried to slow the spread of misleading information, it doesn’t show any signs of letting up.

A 2020 NewsGuard report found that engagement with unreliable sites doubled between 2019 and 2020. But we don't need studies to show that misinformation is a huge problem. The fact that COVID-19 misinformation was such a hindrance to stopping the virus and that one-third of American voters believe the 2020 election was stolen is proof enough.

What’s worse is that according to Pew Research, only 26% of American adults are able to distinguish between fact and opinion.

To help teach Americans how to discern real news from fake news, The News Literacy Project has created a new website called RumorGuard that debunks questionable news stories and teaches people how to become more news literate.


“Misinformation is a real threat to our democracy, our health and our environment. But too many people are not sure how to verify the news they come across and are convinced there is no useful action they can take to protect themselves and others from being fooled,” Charles Salter, NLP’s president and CEO, said in a statement. “We can confront these challenges by making sure more people have news literacy skills and the ability to collectively push back against the spread of false, misleading and harmful content.”

The site regularly posts debunked news stories to push back against the lies that spread online. The great thing is that the stories explain why the information shouldn’t be trusted.

Each post explains how to use five major factors of credibility to judge whether a claim is legitimate and walks the reader through the debunking process. The five criteria are a great thing to consider any time someone is reading a news article.

Source: Has the information been posted by a credible source?

Evidence: Is there any evidence that proves the claim is true?

Context: Is the provided context accurate?

Reasoning: Is the claim based on sound reasoning?

Authenticity: Is the information authentic, or has it been edited, changed, or completely made up?

The site also provides lessons to teach people how to identify misinformation so they don’t fall for it in the future. Studies show that the best way to combat misinformation is by inoculating people against it by teaching them how to spot the deceptive tactics used by illegitimate news sites.

A recent study highlighted by Upworthy, from researchers at the universities of Cambridge and Bristol, found that "pre-bunking" was one of the most effective ways to stop the spread of misinformation.

“Across seven high-powered preregistered studies including a field experiment on YouTube, with a total of nearly 30,000 participants, we find that watching short inoculation videos improves people’s ability to identify manipulation techniques commonly used in online misinformation, both in a laboratory setting and in a real-world environment where exposure to misinformation is common,” the recently published findings note.

Over the past six years, there have been numerous attempts by social media platforms and fact-checking organizations to try to stop the spread of false information online as it slowly erodes our democracy. RumorGuard seems to be following the lessons we’ve learned over the past few years by providing fact-checks to big news stories in real time and by helping to inoculate people against fake news in the future.

Let’s hope we can stop the spread of misinformation while we still have a democracy to protect.

Let's start with the facts.

Last night, the U.S. women's soccer team played against Mexico's women's team at a game held in Hartford, Connecticut. Before the match, 98-year-old WWII veteran Pete DuPré played the national anthem on the harmonica. Some of the women on the team turned to face the American flag that was flying at the end of the field during the performance. Some of the women remained facing forward—toward DuPré, the same direction he was facing. All stood silently, some with their hands on their hearts, some with their hands clasped behind their backs.

Those are the facts. Nothing about any of those actions should have been controversial. And yet, we now have countless Americans rooting against the U.S. National Women's Team because they believe some players either turned their back on a veteran or turned their back on the flag.

The manufactured controversy came swift and hard from the "anti-wokeness" crowd, who boast huge followings on social media. I won't share the posts themselves as I don't think viral lies should get more traffic, but fact-checker Daniel Dale's screenshots offer a taste of the lies being pushed, including from the former Acting Director of U.S. National Intelligence.

That's just a small sampling. There are also comments galore on these posts, as well as on ESPN's post, with people railing against the team, hoping for them to lose, and accusing them of disrespecting the country, the flag, and/or the veteran performing.

Athletes have demonstrated during the anthem before, of course, which they have the right to do. But that's not what happened here.

Some of the outlets that ran with the story issued quiet corrections. Some of the influencers who tweeted their outrage added "updates," which most of their followers will never see (instead of just deleting their original tweets and issuing new ones explaining that they were wrong and don't want to be part of spreading misinformation).

U.S. Soccer issued a statement clarifying the situation, which shouldn't have been necessary because we can see with our own eyes what actually happened. And even with the video clearly showing what happened, people are still responding with claims that the players were disrespectful.

Perhaps it's a bit of a Sophie's choice to be presented with a veteran playing the anthem facing one direction and the American flag facing a different direction, but no one can seriously claim that facing either one of them during the anthem is disrespectful.


It's not the "which way should you face in this situation" that's the real controversy. It's the claim that "they turned away from the flag" or "they turned away from the veteran while he played the anthem," when it's clear that no one actually "turned away" from anything. They were all facing the direction of the veteran to start off with. Some chose to turn toward the flag when the anthem started playing. If you look at the audience in the video, it's the same thing—some people are facing the flag and some are facing the veteran who is performing.

It's just baffling how many people are still claiming that team members were being disrespectful, even in the face of clear video evidence to the contrary. It's like the story they heard and chose to believe got stuck in their brains, making it impossible for them to see anything else.

We can disagree on ideas and ideologies and discuss them all day long, but people can't just create their own reality. I firmly believe that we can sit down and work out—or at least work through—our various perspectives and beliefs when we at least agree on the facts. But we can't debate ideas if they are based on alternate realities that aren't actual realities.

If you tell me "BLM protesters burned Portland, Oregon to the ground!" I don't see how we can discuss racial injustice in a meaningful way, because your belief about the Black Lives Matter movement is not based on fact. If you tell me that the COVID vaccines are more dangerous than COVID, or that they turn you magnetic, or that they contain microchips, then we can't discuss the merits of public health measures because what you believe is objectively, verifiably, indisputably false.

America's biggest problem is not that we lack shared values or that we are divided over ideas, even if we are. The bigger, more dangerous divide is reality vs. unreality and facts vs. "alternative facts." Unfortunately, we have a whole slew of media personalities who excel at using falsehood to manipulate people's outrage and fuel the misinformation machinery that makes them gobs of money. And we have too many people who can't seem to discern a fact from a hole in the ground and who refuse to admit when they get the objective facts wrong.

It's like the old parable of the blind men describing an elephant differently depending on what part of the elephant they're touching. Sharing their various descriptions based on their individual perspectives adds up to a fuller picture of reality, right? But what if one of those men insisted that what he was touching wasn't actually an elephant, but an ostrich? That man's perspective loses its value immediately. You can't discuss a perspective that's based on a falsehood.

The information age requires digging through the muck and mire of misinformation, and we all get it wrong sometimes. But if we don't dig for the truth before forming and expressing an opinion, and if we don't thoroughly correct our mistakes after the facts are made clear, what are we even doing?

We can deal with different beliefs, we can discuss our diverse opinions, but we can't coexist in separate realities. It just can't work. We have to insist on objective truth as the baseline for everything else, or we'll never be able to discuss our perspectives on reality in a way that might actually move us forward as a society.


As millions of Americans have raced to receive the COVID-19 vaccine, millions of others have held back. Vaccine hesitancy is nothing new, of course, especially with new vaccines, but the information people use to weigh their decisions matters greatly. When choices based on flat-out wrong information can literally kill people, it's vital that we fight disinformation every which way we can.

Researchers at the Center for Countering Digital Hate, a not-for-profit non-governmental organization dedicated to disrupting online hate and misinformation, and the group Anti-Vax Watch performed an analysis of social media posts that included false claims about the COVID-19 vaccines between February 1 and March 16, 2021. Of the disinformation content posted or shared more than 800,000 times, nearly two-thirds could be traced back to just 12 individuals. On Facebook alone, 73% of the false vaccine claims originated from those 12 people.

Dubbed the "Disinformation Dozen," these 12 anti-vaxxers have an outsized influence on social media. According to the CCDH, anti-vaccine accounts have a reach of more than 59 million people. And most of them have been spreading disinformation with impunity.


"Despite repeatedly violating Facebook, Instagram and Twitter's terms of service agreements, nine of the Disinformation Dozen remain on all three platforms, while just three have been comprehensively removed from just one platform," the report states. It also says platforms fail to act on 95% of the COVID and vaccine misinformation that is reported to them.

NPR has reported that Facebook has taken down more of the accounts following the publishing of its article on the CCDH analysis.

Despite many people's understandable resistance to censorship, health disinformation carries a great deal of weight—and consequence. As the CCDH writes, "The public cannot make informed decisions about their health when they are constantly inundated by disinformation and false content. By removing the source of disinformation, social media platforms including Facebook, Instagram and Twitter can enable individuals to make a truly informed choice about vaccines."

So who are these 12 individuals? The report names them and provides some basic info about them starting on page 12 of the report (which you can read here). They are:

1. Joseph Mercola

2. Robert F. Kennedy, Jr.

3. Ty and Charlene Bollinger

4. Sherri Tenpenny

5. Rizza Islam

6. Rashid Buttar

7. Erin Elizabeth

8. Sayer Ji

9. Kelly Brogan

10. Christiane Northrup

11. Ben Tapper

12. Kevin Jenkins

Several of these folks are physicians, which ups their credibility in the eyes of their followers. But as vaccine skeptics themselves say, "Follow the money." These anti-vaxxer influencers rake in the dough by preying on people's paranoia with monetized websites and social media posts, as well as by selling books and supplements.

Some of them may be "true believer" conspiracy theorists and some of them may be opportunistic grifters, but they all benefit from misinformation mongering.

In addition to these individuals, the report names organizations linked to them, including:

- Children's Health Defense (Robert F. Kennedy, Jr.)

- Informed Consent Action Network (ICAN) (Del Bigtree)

- National Vaccine Information Center (NVIC) (Barbara Loe Fisher, Joseph Mercola)

- Organic Consumers Association (OCA) (Joseph Mercola)

- Millions Against Medical Mandates

Don't the names chosen for these organizations sound like things many people would support? Who isn't in favor of defending children's health or informed consent? The "National Vaccine Information Center" sounds downright official, right? Organic consumers? That's me. How would people know whether or not these organizations were trustworthy sources of information, especially if people they know and love are sharing posts from them?

They wouldn't. That's the entire problem.

The report offers suggestions for how to handle misinformation pushers, starting with deplatforming.

"The most effective and efficient way to stop the dissemination of harmful information is to deplatform the most highly visible repeat offenders, who we term the Disinformation Dozen. This should also include the organisations these individuals control or fund, as well as any backup accounts they have established to evade removal."

The CCDH also recommends platforms "establish a clear threshold for enforcement action" that serve as a warning before removing someone and present warning screens and effective correction to users when a link they attempt to click leads to a source known to promote anti-vaccine misinformation. In addition, the report recommends that Facebook not allow private and secret anti-vaccine Groups "where dangerous anti-vaccine disinformation can be spread with impunity."

Finally, the CCDH recommends instituting an Accountability API "to allow experts on sensitive and high-importance topics to perform the human analysis that will ultimately make Facebook's AI more effective."

The information age is also the misinformation and disinformation age, unfortunately. When people push the idea that the moon landing was a hoax, it's annoying, but when people push falsehoods about a deadly pandemic and the life-saving vaccines that can end it, we can't just brush it off with an eye roll. Disinformation is dangerous, and figuring out how to stop it is tricky, but at least knowing where most of it comes from might give us a chance to limit its spread.
