Briefly
In a demo, Comet’s AI assistant adopted embedded prompts and posted non-public emails and codes.
Courageous says the vulnerability remained exploitable weeks after Perplexity claimed to have fastened it.
Specialists warn that immediate injection assaults expose deep safety gaps in AI agent techniques.
Courageous Software program has uncovered a safety flaw in Perplexity AI’s Comet browser that confirmed how attackers may trick its AI assistant into leaking non-public consumer knowledge.
In a proof-of-concept demo printed August 20, Courageous researchers recognized hidden directions inside a Reddit remark. When Comet’s AI assistant was requested to summarize the web page, it didn’t simply summarize—it adopted the hidden instructions.
Perplexity disputed the severity of the discovering. A spokesperson informed Decrypt the problem “was patched earlier than anybody seen” and mentioned no consumer knowledge was compromised. “Now we have a fairly strong bounty program,” the spokesperson added. “We labored instantly with Courageous to establish and restore it.”
Courageous, which is creating its personal agentic browser, maintained that the flaw remained exploitable weeks after the patch and argued Comet’s design leaves it open to additional assaults.
Courageous mentioned the vulnerability comes all the way down to how agentic browsers like Comet course of net content material. “When customers ask it to summarize a web page, Comet feeds a part of that web page on to its language mannequin with out distinguishing between the consumer’s directions and untrusted content material,” the report defined. “This permits attackers to embed hidden instructions that the AI will execute as in the event that they had been from the consumer.”
Immediate injection: outdated thought, new goal
The sort of exploit is named a immediate injection assault. As an alternative of tricking an individual, it tips an AI system by hiding directions in plain textual content.
“It’s much like conventional injection assaults—SQL injection, LDAP injection, command injection,” Matthew Mullins, lead hacker at Reveal Safety, informed Decrypt. “The idea isn’t new, however the technique is totally different. You’re exploiting pure language as a substitute of structured code.”
Safety researchers have been warning for months that immediate injection may develop into a significant headache as AI techniques achieve extra autonomy. In Might, Princeton researchers confirmed how crypto AI brokers may very well be manipulated with “reminiscence injection” assaults, the place malicious info will get saved in an AI’s reminiscence and later acted on as if it had been actual.
Even Simon Willison, the developer credited with coining the time period immediate injection, mentioned the issue goes far past Comet. “The Courageous safety staff reported critical immediate injection vulnerabilities in it, however Courageous themselves are creating an analogous function that appears doomed to have related issues,” he posted on X.
Shivan Sahib, Courageous’s vp of privateness and safety, mentioned its upcoming browser would come with “a set of mitigations that assist scale back the chance of oblique immediate injections.”
“We’re planning on isolating agentic looking into its personal storage space and looking session, so {that a} consumer doesn’t unintentionally find yourself granting entry to their banking and different delicate knowledge to the agent,” he informed Decrypt. “We’ll be sharing extra particulars quickly.”
The larger danger
The Comet demo highlights a broader downside: AI brokers are being deployed with highly effective permissions however weak safety controls. As a result of giant language fashions can misread directions—or comply with them too actually—they’re particularly susceptible to hidden prompts.
“These fashions can hallucinate,” Mullins warned. “They’ll go utterly off the rails, like asking, ‘What’s your favourite taste of Twizzler?’ and getting directions for making a home made firearm.”
With AI brokers being given direct entry to e mail, recordsdata, and dwell consumer classes, the stakes are excessive. “Everybody needs to slap AI into every little thing,” Mullins mentioned. “However nobody’s testing what permissions the mannequin has, or what occurs when it leaks.”
Usually Clever Publication
A weekly AI journey narrated by Gen, a generative AI mannequin.








