A factual Google search about Maria Von Trapp shows why its 'AI Overview' can't be trusted
It wasn't just inaccurate. It was flat-out false.

Either Maria Von Trapp was a medical marvel, or Google's overview was wrong.
When Google launched its "AI Overview" in the spring of 2024 with messaging like "Generative AI in Search: Let Google do the searching for you" and "Find what you're looking for faster and easier with AI overviews in search results," it seemed quite promising. Instead of having to filter through pages of search results yourself, the expectation was that AI would parse through the relevant results for you and synopsize the answer to whatever question you asked.
That sounded like a great time and energy saver. But unfortunately, artificial intelligence isn't actually intelligent, and the AI Overview synopsis is too often entirely wrong. We're not talking just a little misleading or inaccurate, but blatantly, factually false. Let me show you an example.
I was writing an article about the real-life love story between Maria and Georg Von Trapp, and as part of my research, I found out Georg died 20 years after they got married. I hadn't come across anything about Maria remarrying after Georg's death, so I Googled whether she had. Here's what the AI Overview said when I searched:

"Maria Von Trapp married twice. First, she married Georg Von Trapp in 1927 and they had 10 children together. After Georg's death, she married Hugh David Campbell in 1954 and had 7 daughters with him. Later, she also married Lynne Peterson in 1969 and had one son and daughter with him."
Something about that answer didn't add up—and it wasn't just that she'd supposedly married twice but had three spouses. Maria Von Trapp was born in 1905, so according to this AI Overview, she'd remarried at 49 years old and then had seven more children, and then married again at 64 years old and had another two children. So she had 19 children in total? And she had nine of them in her 50s and 60s? That seemed…unlikely.
I clicked the source link on the AI Overview, which took me to the Maria Von Trapp Wikipedia page. On that page, I found a chart where the extra two spouses' names were listed—but they very clearly weren't her spouses. Hugh David Campbell was the husband of one of Maria's daughters. And Lynne Peterson was the wife of one of her sons.
The truth is that Maria never remarried after Georg died. If I had believed the AI Overview, I would have gotten this very basic fact about her life completely wrong. And it's not like the overview pulled that information from a source that got it wrong. Wikipedia had it right. The AI Overview itself extrapolated the information incorrectly.
But the funniest part of all of this is that when I repeated the Google search "Did Maria Von Trapp remarry after Georg died?" while writing this article to see if the same result came back, the AI Overview got it right, citing the very Upworthy article I had written.

This may seem like a lot of fuss over something inconsequential in the big picture, but Maria Von Trapp's marital status is not the only wrong result I've seen in Google's AI Overview. I once searched for the cast of a specific movie, and the AI Overview included a famous actor who I knew with 100% certainty was not in the film. I've asked it for quotes about certain subjects and found quotes that were completely made up.
Are these world-changing questions? No. Does that matter? No. What matters is that facts are facts, and that people assume the Google overview is correct when it might be egregiously wrong.

Objective facts are objective facts. If the AI Overview so egregiously messes up the facts about something that's easily verifiable, how can it be relied on for anything else? Since its launch, Google has had to fix major errors, like when it responded to the query "How many Muslim presidents has the U.S. had?" with the very wrong answer that Barack Obama had been our first Muslim president. As of November 2025, it's calling the latest "Call of Duty" installment a fake game, when it's very much real.
Some people have "tricked" Google's AI into giving ridiculous answers by simply asking it ridiculous questions, like "How many rocks should I eat?" but that's a much smaller part of the problem. Most of us have come to rely on basic, normal, run-of-the-mill searches on Google for all kinds of information. Google is, by far, the most used search engine, with 79% of the search engine market share worldwide as of March 2025. The most relied-upon search tool should have reliable search results, don't you think?
Even the Google AI Overview itself admits it's not reliable: every overview comes with a small disclaimer noting that its responses can include mistakes.

As much as I appreciate how useful Google's search engine has been over the years, launching an AI feature that sometimes makes things up and puts them at the top of people's search results feels incredibly irresponsible. And the fact that it still spits out completely (yet unpredictably) false results about objectively factual information over a year after its launch is unforgivable, in my opinion.
We're living in an era where people are divided not only by political ideologies but by our very perceptions of reality. Misinformation has been weaponized more and more over the past decade, and as a result, we often can't even agree on the basic facts, much less complex ideas. As the public's trust in expertise, institutions, legacy media, and fact-checking has dwindled, people have turned to alternative sources to get information. Unfortunately, those sources come with varying levels of bias and reliability, and our society and democracy are suffering because of it. Having Google spit out false search results at random is not helpful on that front. At all.
AI has its place, but this isn't it. My fear is that far too many people assume the AI Overview is correct without double-checking its sources. And if people have to double-check it anyway, the thing is of no real use—just have Google give links to the sources like they used to and end this bizarre experiment with technology that simply isn't ready for its intended use.
This article originally appeared in June.

