AI Browsers Face Permanent Cyber Risk: OpenAI Warns of Unavoidable Prompt Attacks

OpenAI: OpenAI has warned that "prompt injection attacks" in AI browsers are a cyber threat that may never be completely eradicated. According to the company, these attacks are similar to scams and social engineering prevalent on the web.

Muskan Kumawat
Muskan Kumawat Verified Local Voice • 13 Apr, 2026 Author
December 23, 2025 • 2:24 PM  0
T
Tech
NEWS CARD
Logo
AI Browsers Face Permanent Cyber Risk: OpenAI Warns of Unavoidable Prompt Attacks
“AI Browsers Face Permanent Cyber Risk: OpenAI Warns of Unavoidable Prompt Attacks”
Favicon
Read more on sangritoday.com
23 Dec 2025
https://www.sangritoday.com/ai-browsers-face-permanent-cyber-risk-openai-warns-of-unavoidable-prompt-attacks
Google News
Copied
AI Browsers Face Permanent Cyber Risk: OpenAI Warns of Unavoidable Prompt Attacks
AI Browsers Face Permanent Cyber Risk: OpenAI Warns of Unavoidable Prompt Attacks

OpenAI is working hard to protect its new Atlas AI browser against cyber attacks. However, the organization has recognized the bitter truth as well. According to OpenAI, prompt injection attacks, a hacking technique used on AIs, are a threat which can never completely be eliminated. This is a rather interesting comment on the security of operating AIs on the web.

OpenAI explained on their blog post that “prompt injection attacks are somewhat similar to scamming and social engineering on the web that are difficult to completely eliminate.” To put it shortly, prompt injection is a hack where hackers embed malicious intent inside web pages or emails. When an AI agent reads that page, it unknowingly follows those instructions. OpenAI has acknowledged that their browser's "Agent Mode" increases the security risk. Not only OpenAI, but Brave and the UK's National Cyber ​​Security Centre have also warned that completely preventing such attacks may never be possible.

Prompt injection attacks cannot be completely eliminated, so OpenAI is taking a different approach to manage them. The company has developed a "Large Language Model-Based Automated Attacker." This is essentially a bot that OpenAI has designed to play the role of a "hacker" through reinforcement learning (RL). This bot attacks the AI ​​agent in a simulated environment and finds new vulnerabilities. This helps OpenAI understand what the AI ​​will think and how it will react if attacked. The advantage of this is that the company can strengthen its security before real hackers attack.

Muskan Kumawat Verified Local Voice • 13 Apr, 2026 Author

Journalist & Writer

home Home amp_stories Web Stories local_fire_department Trending play_circle Videos mark_email_unread Newsletter