Agentic AI browsers now handle your banking, emails, and private documents. A single malicious link can turn these assistants against you.
Recent discoveries in Perplexity’s Comet browser reveal how attackers exploit prompt injection to steal credentials, exfiltrate data, and hijack authenticated sessions1 .
Our analysis shows why same-origin policy, CORS, and other traditional web protections become irrelevant when it comes to AI browser security.
AI Browsers vs. Regular Browsers
Regular browsers display content. AI browsers take action.
Traditional browsers like Chrome or Firefox act as viewers; they render web pages but don’t interact with them autonomously. You click, type, and navigate manually.
Agentic AI browsers such as Perplexity Comet, Opera Neon, and emerging tools from OpenAI operate differently. They can:
- Read your emails in Gmail
- Book flights using saved payment methods
- Schedule calendar appointments
- Extract data from banking portals
- Post comments on social media
Real-World Examples: Vulnerable AI Browsers
Security researchers found critical flaws across multiple AI browsers.
Example 1: ChatGPT Atlas Browser
- Memory Contamination Attack (October 2025)
LayerX discovered attackers could poison ChatGPT’s persistent memory, causing lasting damage across all user devices2 .
Attack process:
- User clicks malicious link while logged into ChatGPT
- Hidden code injects false “memories” into ChatGPT’s storage
- Contaminated memory persists across home computers, work laptops, and phones
- Next time user asks ChatGPT anything, poisoned memory activates
- AI executes attacker’s commands disguised as normal responses
LayerX testing showed Atlas blocked only 5.8% of phishing attacks, compared to Chrome’s 47% and Edge’s 53%. Atlas users face 90% more exposure than traditional browser users.
OpenAI disputed the findings, stating they couldn’t reproduce the attack. Vulnerability status remains unclear as of October 2025.
Example 2: Perplexity Comet Browser
Researchers discovered three different ways to attack Comet:
- Reddit Comment Attack ( July 2025)
A malicious Reddit comment contained hidden commands inside a spoiler tag. The model couldn’t tell which text came from the user and which came from the webpage so it executed the attacker’s instructions.
User visits Reddit and clicks “Summarize this page.” Hidden instructions in a comment spoiler tag command Comet to:
- Extract user’s Perplexity email address
- Request password reset OTP
- Read OTP from Gmail
- Send credentials to attacker
The attack required no password entry. AI executed everything autonomously.
- Screenshot Manipulation (October 2025)
Attackers embedded nearly invisible text in images faint blue on yellow background. When users took screenshots and asked questions about them, Comet’s image recognition extracted the hidden text and followed its commands.
- URL Weaponization (LayerX Security, August 2024)
Attackers crafted URLs contained malicious instructions:
The parameter forced Comet to search stored emails, calendar data, and connected services. Attackers bypassed security by encoding stolen data before sending it out.
LayerX’s report was marked “Not Applicable” by Perplexity and remains unpatched3 .
Example 3: Fellou Browser
- Navigation-Triggered Attack (August 2025)
Fellou resisted hidden text attacks but failed with visible content. The flaw: asking the AI to visit any website automatically sends that page’s content to the LLM for processing
User command: “Go to example-site.com”
What happens:
- Browser navigates to the site
- Automatically sends page content to AI
- Visible malicious instructions on the page override user’s original intent
- AI follows attacker’s commands
No summarization request needed. Navigation alone activates the attack. Brave disclosed this in August 2024. Backend models receiving both inputs must treat any output as potentially unsafe, requiring independent alignment checks.
Example 4: Opera Neon
- Hidden HTML Injection (October, 2025)
Opera Neon processes hidden HTML elements as AI commands. The vulnerability worked through zero-opacity span tags invisible to users but readable by the LLM4 .
When user requests “Summarize this page,” Opera Neon:
- Reads hidden instructions from HTML
- Navigates to auth.opera.com
- Extracts the authenticated user’s email
- Exfiltrates to attacker’s server
Opera stated that their testing reproduced the attack with only a 10% success rate due to the AI model’s non-determinism. Brave successfully demonstrated it multiple times. The fix prevents the AI assistant from processing HTML comments.
Current State of AI Browser Security
Multiple vendors face similar vulnerabilities. The pattern repeats across implementations:
Perplexity Comet
- Indirect prompt injection via webpage text
- Screenshot-based injection
- URL parameter exploitation
Opera Neon
- Hidden HTML element injection
Fellou Browser
- Navigation-triggered injection
OpenAI Atlas
- Memory contamination vulnerability
What This Means for Enterprise Security?
Corporate environments face amplified risks. A single compromised employee browser can:
- Access internal documentation on Google Drive
- Read confidential emails containing trade secrets
- Extract financial data from authenticated business applications
- Manipulate corporate communication channels
- Move laterally across cloud services using saved credentials
Traditional endpoint protection doesn’t address this threat. The browser itself becomes an insider threat, operating with legitimate user privileges while following attacker instructions.
Security teams should:
- Audit which employees use AI browsers
- Restrict AI browser use for roles accessing sensitive systems
- Implement browser security platforms like LayerX that monitor AI agent behavior
- Establish policies requiring explicit approval before enabling agentic features
How to Protect Yourself from AI Browser Attacks?
Until vendors implement robust protections, users should take defensive measures.
Disable AI features on sensitive accounts
Don’t use AI browsers while logged into:
- Banking and financial services
- Email accounts containing sensitive correspondence
- Corporate systems and intranets
- Healthcare portals
- Legal or confidential document repositories
Use separate browser profiles
Maintain distinct profiles for:
- AI-assisted casual browsing (no sensitive logins)
- Authenticated sessions for important accounts (no AI features enabled)
Verify before clicking “Summarize”
Check page source for suspicious hidden elements before allowing AI to process content. Look for:
- Unusual whitespace or comments in HTML
- Text styled to match background colors
- Collapsed spoiler sections on social media
FAQ
Further Reading:
Reference Links

Cem's work has been cited by leading global publications including Business Insider, Forbes, Washington Post, global firms like Deloitte, HPE and NGOs like World Economic Forum and supranational organizations like European Commission. You can see more reputable companies and resources that referenced AIMultiple.
Throughout his career, Cem served as a tech consultant, tech buyer and tech entrepreneur. He advised enterprises on their technology decisions at McKinsey & Company and Altman Solon for more than a decade. He also published a McKinsey report on digitalization.
He led technology strategy and procurement of a telco while reporting to the CEO. He has also led commercial growth of deep tech company Hypatos that reached a 7 digit annual recurring revenue and a 9 digit valuation from 0 within 2 years. Cem's work in Hypatos was covered by leading technology publications like TechCrunch and Business Insider.
Cem regularly speaks at international technology conferences. He graduated from Bogazici University as a computer engineer and holds an MBA from Columbia Business School.




Be the first to comment
Your email address will not be published. All fields are required.