Tech Made Simple

Hot TopicsAI Chatbots 101 | Best Open Ear Headphones | The Best VPNs | Charge Your Android Phone Faster

We may earn commissions when you buy from links on our site. Why you can trust us.

author photo

OpenAI Confirms AI-Powered Browsers Leave You Vulnerable to Hackers

by Palash Volvoikar on December 26, 2025

Red warning sign next to the words Atlas Browser

OpenAI has made a pretty significant admission about its ChatGPT Atlas browser: The security risks that come with letting AI control your web browsing may never be fully fixable. At issue is "prompt injection," in which hackers pass malicious instructions to the chatbot. OpenAI said in a  blog post that "prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully 'solved.'" That's a concerning statement coming from a company promoting an AI-fueled browser that can access your email, cloud files, payment information, and other data you allow it to access. ChatGPT Atlas isn’t the only one vulnerable here; other options like the Perplexity Comet browser are also subject to these attacks.

ChatGPT Atlas launched in October with "agent mode," which lets the AI browse websites, click buttons, and take actions in your browser just like you would. Security researchers immediately found vulnerabilities, showing they could write hidden instructions in Google Docs that would hijack the browser's behavior when it viewed them and embed malicious instructions in webpages and other content that could influence the AI’s behavior.

Read more: Why You Should Turn Off This Gemini Setting on Your Android Phone

That's just the tip of the iceberg, as there are countless places on the web where bad actors could place malicious instructions designed to manipulate an AI. So far, we haven’t seen reports of these attacks happening in the wild being widely exploited against real users, but they are possible. Researchers have warned about this issue before, with the UK’s National Cyber Security Centre warning that prompt injections may never actually go away.

How Prompt Injection Works

Prompt injection attacks hide malicious instructions inside content that AI processes, such as emails, documents, or webpages. They often use different colored text (for example, white text on a white background), which you may not be able to see, but the AI will be able to process. When the AI reads that content, it mistakes those hidden instructions for your legitimate prompt commands. So instead of following your request, it follows the attacker's instructions.

OpenAI's blog post gave a hypothetical example. An attacker sends a malicious email to your inbox with hidden instructions telling the AI to send a resignation letter to your CEO. Later, when you ask the AI to draft a simple out-of-office reply, it scans your inbox, encounters that malicious email, and follows the injected instructions instead. The AI quits your job for you.

How OpenAI Is Trying to Address the Problem

To improve security, OpenAI built an "LLM-based automated attacker," which is basically a bot designed to act like a hacker, to find vulnerabilities before real attackers do. When the bot finds a successful attack, OpenAI uses that information to introduce safeguards to protect against it. But the company hasn't shared whether recent security updates have actually reduced the chances of successful attacks in the real world.

Read more: That Handy Free Browser Extension You Installed Could Be Spying on You

Beyond prompt injection, there's another issue you should consider. When you let AI control your browser, all your data goes through it. That includes personal information, work emails, and financial data. AI companies may say they won't use your personal data for training; however, OpenAI says it may train on some user conversations depending on the product, user settings, and opt-out choices. So there's the question of whether it can always recognize personal data that should be left out.

We may see regulations or industry standards emerge if high-profile attacks of this sort become more common, though the U.S. has shown a reluctance to rein in AI companies. Until (or unless) the situation improves drastically, we recommend you avoid AI browsers.

[Image credits: OpenAI, edited by Palash Volvoikar/Techlicious]


Topics

News, Computers and Software, Internet & Networking, Computer Safety & Support, Blog


Discussion loading

Home | About | Meet the Team | Contact Us
Media Kit | Newsletter Sponsorships | Licensing & Permissions
Accessibility Statement
Terms of Use | Privacy & Cookie Policy

Techlicious participates in affiliate programs, including the Amazon Services LLC Associates Program, which provide a small commission from some, but not all, of the "click-thru to buy" links contained in our articles. These click-thru links are determined after the article has been written, based on price and product availability — the commissions do not impact our choice of recommended product, nor the price you pay. When you use these links, you help support our ongoing editorial mission to provide you with the best product recommendations.

© Techlicious LLC.