- Home
- Risks management
- AI-Powered Browsers Create New Vulnerabilities
AI-Powered Browsers Create New Vulnerabilities
Three years after the commercial launch of OpenAI’s ChatGPT, generative AI has profoundly changed the way we search for information online. According to the latest Heroiks Pulse barometer via Toluna Start for Peak Ace, 34% of French people now use artificial intelligence for their searches, a six-point increase in just a few months. The trend is even more pronounced among 18-24-year-olds, more than 60% of whom say they have changed their habits.
Rather than browsing through a list of links referenced by a search engine, a growing number of users rely on ChatGPT, Perplexity, or Google Gemini to ask questions in natural language and immediately receive a synthesized answer. To further simplify the use of their large language models (LLMs), these specialized providers are now offering their own web browsers.
Among the most well-known are Comet, launched in early July by Perplexity, and ChatGPT Atlas from OpenAI, unveiled at the end of October. Other vendors such as Dia, Genspark or Opera (Neon) also offer this type of AI browser, while established browsers such as Microsoft Edge and Google Chrome integrate their own AI systems—Copilot and Gemini, respectively.
These new browsers, with their minimalist interfaces, natively include an intelligent assistant that interacts with the web pages being viewed. It can summarize an article, fill out an online form, perform transactions, and even complete online payments.
The autonomy granted to these new tools—allowing them to freely navigate the web and perform a sequence of tasks without human supervision—inevitably exposes users to new vulnerabilities. To function and create a “personal memory,” as OpenAI calls it, the browsers must access sensitive data such as browsing history, addresses, login credentials, autocomplete information for passwords and payment methods, as well as information from third-party services such as email or calendars in Google Workspace or Microsoft 365.
“Forgetting bias” and prompt injection
During initial sessions, the browser requests access permissions, and the user can then fine-tune privacy and security settings. The risk remains high, however, that personal data might be shared without the user noticing.
Cybersecurity expert at ESET France, Benoît Grunemwald, refers to a “forgetting bias.” “Unlike chatbot-style applications where the user explicitly knows they are interacting with AI and sharing their data, these browsers operate similarly to integrated environments like Microsoft Copilot, in which nearly all actions are monitored in the background by the artificial intelligence. The user forgets that the AI is always listening and recording what they do.”
In terms of threats, this new generation of browsers is vulnerable to prompt injections. A malicious link hidden in an email, document, or webpage can trigger a series of concealed instructions. “Web pages can be specially crafted to interact invisibly with the browser, for example through content displayed white-on-white or hidden in HTML tags,” Grunemwald adds.
A critical vulnerability of this kind has been documented under the name “CometJacking.” As the name suggests, this attack specifically targets Perplexity’s Comet by embedding malicious prompts into an apparently harmless link in order to siphon sensitive data. Radware researchers have highlighted a threat specific to OpenAI’s environment affecting Atlas. ShadowLeak exposes the hidden power of AI agents designed to simplify the user’s life by acting on their behalf—an autonomy that can be hijacked.
“Zero-click” attack to hijack autonomous agents
In this case, the so-called zero-click attack occurs without any action from the user. The data exfiltration “takes place directly from OpenAI’s cloud infrastructure, making it invisible to local or corporate defenses,” Radware notes. This agent-based AI provides access to information that was previously stored in traditional applications equipped with their own protection mechanisms. “Classic notions of access control and restricted access risk gradually fading,” warns Karim Hamia, SE Manager at Check Point.
The threat is not limited to data exploitation. AI agents can also perform transactions, such as planning an entire vacation by communicating with different booking systems. “Transferred to a malicious scenario, an attacker could hijack these capabilities to place calls to premium-rate numbers,” suggests Grunemwald. In his view, this autonomy means that a zero-day vulnerability in such a browser could have far more serious consequences than an equivalent flaw in a traditional browser.
The threat is all the more pressing since AI browsers, being newly designed, do not yet embed the same level of security as their predecessors. According to tests conducted by LayerX Security, they show dramatically lower phishing protection rates compared to traditional browsers. Comet and Genspark achieve a blocking rate of just 7%, compared to 47% for Google Chrome and 54% for Microsoft Edge. On the other hand, Grunemwald notes that AI browsers have the advantage of not embedding extensions, which he describes as “a major vector for targeted attacks, fake ads, phishing attempts, and credential theft.”
For his part, Karim Hamia warns of a disinformation risk. “Unlike search engines that highlight the highest-ranked sites, AI browsers provide consolidated results without always making it possible to verify the relevance of their sources. In disinformation campaigns, state-operated organizations can massively create fake sites presented as trustworthy.”
Regulating experimentation, putting browsers under supervision
How can we mitigate these risks? For now, according to both experts, the use of AI browsers in business remains limited, and the first reflex should simply be to prohibit their installation and execution. Between EDR (Endpoint Detection and Response) and UEM (Unified Endpoint Management) solutions, organizations have many tools to block the deployment of unauthorized applications on workstations. Still, cybersecurity policies can be circumvented. An employee might install Comet or ChatGPT Atlas on their personal device, which may still connect to the company’s information system.
To combat this “shadow AI” phenomenon without hindering innovation, organizations must frame their experimentation with these new AI tools. Grunemwald recommends first conducting a strategic analysis of the expected objectives, evaluating alternative solutions, and rigorously selecting testers. Technically, it is possible—using a sandbox-like approach—to run the application in an isolated, secure environment without access to critical IT resources.
For his part, Hamia recommends applying a zero-trust approach, “limiting access to sensitive data and treating autonomous agents like users by granting them clearly defined access rights.” Speaking from his company’s perspective, he suggests relying on a platform such as Check Point’s to supervise and control AI browsers. “The objective is to block malicious prompt injections by analyzing sensitive content, distinguishing personal prompts from professional ones, and masking critical elements such as credit card numbers.”
Raising user awareness, preventing skill loss
Alongside technical defenses, organizations must also raise user awareness of these new risks. “It’s about making people understand that the information leaks generated by these new browsers can be at least as damaging as a phishing attack, even if the threat feels less tangible,” says Hamia.
Beyond cyber threats, Grunemwald highlights more diffuse risks linked to the use of advanced AI tools. In addition to the social impact of automating an increasing number of processes, another issue arises: the loss of skills. “When an employee delegates complex tasks to an AI browser acting as a black box, the organization risks becoming highly dependent on the system without being able to capitalize on business knowledge.”
According to him, the documentability and reproducibility of the work performed become problematic. How can someone else take over during an absence or departure? How can the company ensure that knowledge remains documented and processes remain traceable? In regulated professions, decisions must be explainable and auditable.
the newsletter
the newsletter