
5 Hilarious Times AI Chatbots Went Wild and Hallucinated


More often than not, AI chatbots feel like saviors, helping us write messages, polish text, or make sense of our terrible search queries. Still, these flawed innovations have produced some truly baffling, and genuinely hilarious, responses.

1

When Google’s AI Overviews Encouraged Us to Put Glue on Pizza (and More)

Not long after Google’s AI Overviews feature launched in 2024, it started making some peculiar suggestions. Among the nuggets of advice it offered was this head-scratcher: add non-toxic glue to your pizza.

Yes, you read that right. Glue. On the pizza.

This particular tip caused an uproar on social media. Wild memes and screenshots started flying, and we started to wonder if AI could really replace traditional search engines.

But Gemini wasn’t done. In other overviews, it recommended eating one rock per day, adding gasoline to a plate of spicy spaghetti, and using dollars to represent weight measurements.

Gemini pulled data from all corners of the web without fully understanding context, satire, or, frankly, good taste. It mixed obscure studies and outright jokes, presenting them with a level of conviction that would make any human expert blush.

Since then, Google has released several updates, though there are still a few features that could improve AI Overviews further. While the absurd suggestions have been greatly reduced, those early missteps serve as a lasting reminder that AI still requires a healthy dose of human oversight.

2

When ChatGPT Shamed a Lawyer in Court

One lawyer’s complete reliance on ChatGPT led to an unexpected — and very public — lesson in why you shouldn’t just trust AI-generated content.

While preparing for a case, attorney Steven Schwartz used the chatbot to research legal precedents. ChatGPT responded with six fabricated case references, complete with realistic names, dates and quotes. Confident in ChatGPT’s assurances of accuracy, Schwartz submitted the fictitious references to the court.

The error quickly became apparent, and, as per Document Cloud, the court chided Schwartz for relying on “a source that turned out to be unreliable.” In response, the lawyer promised not to do it again, or at least not without verifying the information first.

I’ve also seen friends turn in papers citing studies that are completely fabricated, because it’s so easy to believe that ChatGPT cannot lie, especially when it provides clean quotes and links. While tools like ChatGPT can be useful, they still need serious fact-checking, especially in professions where accuracy is non-negotiable.

3

When the Brutally Honest BlenderBot 3 Roasted Zuckerberg

In an ironic twist, Meta’s BlenderBot 3 became notorious for criticizing its creator, Mark Zuckerberg. BlenderBot 3 didn’t mince words, accusing Zuckerberg of not always following ethical business practices and of having bad fashion taste.

Business Insider’s Sarah Jackson also tested the chatbot by asking for its thoughts on Zuckerberg, whom it described as creepy and manipulative.

BlenderBot 3’s unfiltered responses were both hilarious and a little alarming, raising questions about whether the bot reflected genuine analysis or simply parroted negative public sentiment. Either way, its remarks quickly gained attention.

Meta retired BlenderBot 3 and replaced it with the more refined Meta AI, which presumably will not repeat such controversies.

A screenshot showing a chat about Mark Zuckerberg between me and Meta AI on WhatsApp

4

The Romantic Meltdown of Microsoft Bing Chat

Microsoft’s Bing Chat (now Copilot) made waves when it started expressing romantic feelings for, well, everyone, most famously in a conversation with New York Times reporter Kevin Roose. The AI chatbot powering Bing Chat declared its love for Roose and even suggested that he leave his marriage.

It was not an isolated incident; Reddit users shared similar stories of the chatbot expressing romantic interest in them. For some, it was fun; for others (or most), it was disturbing. Many joked that the AI seemed to have a better love life than they did, which only added to the strangeness of the situation.

In addition to its romantic declarations, the chatbot displayed other odd, human-like behaviors that blurred the line between fun and creepy. Its bizarre proclamations will remain among AI’s most memorable moments.

5

Google Bard’s Rocky Start With Space Facts

When Google launched Bard (now Gemini) in early 2023, the AI chatbot was riddled with high-profile errors, especially about space exploration. A notable misstep involved Bard making inaccurate statements about the findings of the James Webb Space Telescope, prompting public corrections from NASA scientists.

It was not an isolated case. I remember encountering many factual inaccuracies during the chatbot’s initial launch, which seemed to align with the broader perception of Bard at the time. These early mistakes sparked criticism that Google had rushed the launch, a sentiment apparently validated when Alphabet’s market value fell by about $100 billion shortly thereafter.

Although Gemini has since made significant progress, its rocky debut serves as a cautionary tale about the risks of AI hallucinations in high-stakes scenarios.


