ChatGPT Atlas Browser Can Be Tricked by Fake URLs into Executing Hidden Commands

The newly released OpenAI Atlas web browser has been found to be susceptible to a prompt injection attack where its omnibox can be jailbroken by disguising a malicious prompt as a seemingly harmless URL to visit. "The omnibox (combined address/search bar) interprets input either as a URL to navigate to, or as a natural-language command to the agent," NeuralTrust said in a report published Friday


source https://thehackernews.com/2025/10/chatgpt-atlas-browser-can-be-tricked-by.html
Previous Post Next Post