85
6

New attack on ChatGPT research agent pilfers secrets from Gmail inboxes

2d 4h ago by sopuli.xyz/u/supersquirrel in fuck_ai from arstechnica.com

On Thursday, security firm Radware published research showing how a garden-variety attack known as a prompt injection is all it took for company researchers to exfiltrate confidential information when Deep Research was given access to a target’s Gmail inbox. This type of integration is precisely what Deep Research was designed to do—and something OpenAI has encouraged. Radware has dubbed the attack Shadow Leak.

“ShadowLeak weaponizes the very capabilities that make AI assistants useful: email access, tool use and autonomous web calls,” Radware researchers wrote. “It results in silent data loss and unlogged actions performed ‘on behalf of the user,’ bypassing traditional security controls that assume intentional user clicks or data leakage prevention at the gateway level.”

...

People considering connecting LLM agents to their inboxes, documents, and other private resources should think long and hard about doing so, since these sorts of vulnerabilities aren’t likely to be contained anytime soon.

If I recall, gmail accounts were automatically integrated with gemini. You have to disable smart features to disengage gemini from gmail, and that also forces you to turn off spell check and spam filtering.

I don't know about Gmail's own spellcheck since my browser does it natively. However, disabling Gemini and smart features in Gmail absolutely does not (yet) stop spam filtering from working.

Whoa, spam filtering is Gmail's only advantage over generic email.

I've migrated away, but still have my legacy Gmail account, but this'll get me gone completely.

I disabled the smart features recently and all it did ("all" it's really annoying actually) was stop the automatic categorising feature, where it'd sort everything into bills & promotions & newsletters etc

The spam filtering is, touch wood, still working fine

These tools are like using a hammer guaranteed to make you hit your own head with it.

using LLMs on a search index to retrieve mail with structured output results, probably fine. 

using them across apps or with network tools is not fine.