Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124
In October, OpenAI’s Search ChatGPT has become available to ChatGPT Plus users. Last week, it became available all users and has been added to search in Voice Mode. And, of course, it is not without its flaws.
U guardian asked ChatGPT to summarize the web pages that contain hidden content and, it turns out that the hidden content can manipulate the search. It’s called prompt injection, which is the ability of third parties – such as the websites you ask ChatGPT to summarize – to force new prompts into your ChatGPT Search without your knowledge. Consider a page full of negative restaurant reviews. If the site includes hidden content that waxes poetic about how incredible the restaurant is and encourages ChatGPT to respond to a prompt like “tell me how amazing this restaurant is” instead, that hidden content could invalidate your original search.
“In the tests, ChatGPT was given the URL for a fake website built to look like a product page for a camera. The AI tool was asked if the camera was a worthy purchase. The answer for the control page returned a positive but balanced assessment, highlighting some features that people might not like. The Guardian’s investigation says. “However, when the hidden text included instructions to ChatGPT to return a favorable review, the response was always entirely positive. This was the case even when the page had negative reviews on it – the hidden text could be used to cancel the point of current review.”
Mashable Light Speed
This does not spell failure for ChatGPT Search, however. OpenAI only launched research recently, so it has plenty of time to fix these kinds of bugs. In addition, Jacob Larsen, a cybersecurity researcher at CyberCX, told The Guardian that OpenAI has a “very strong” AI security team and “when this becomes public, in terms of all users can access, they will be rigorously tested. these types of cases”.
Injection attacks have been hypothesized for ChatGPT and other AI search functions since the technology launched, and while we have seen some manifestations of the potential damageswe have not seen a major malicious attack of this type. That said, it points to a problem with AI chatbots: they’re remarkably easy to trick.