#1680549: SpAIware & More: Advanced Prompt Injection Exploits in LLM Applications
Description: |
Join Knostic for an engaging and thought-provoking discussion as we delve into the intricate world of advanced prompt injection exploits that target widely used LLM applications, such as Microsoft Copilot, Google Gemini, Google NotebookLM, Apple Intelligence, GitHub Copilot Chat, Anthropic Claude, and many more. Through captivating real-world demonstrations, we will explore the following pressing threats in vivid detail: Misinformation, Phishing, and Scams: Including advanced techniques such as conditional instructions. Automatic Tool Invocation: Exploiting tool integration to escalate privileges, extract sensitive data, or modify system configurations. Data Exfiltration: Leveraging strategies, such as markdown and hidden payloads, to bypass security controls and leak data. SpAIware and Persistence: Manipulating LLM memory for long-term control and persistence. ASCII Smuggling: How LLMs can hide secrets and craft hidden text invisible to users. For each threat category, we will discuss mitigations and show how vendors are addressing these vulnerabilities. |
---|---|
More info: | https://apps.blackhat.com/e/es?s=95530031&e=71880&elqTrackId=CCDC3E7D5DA7C6B39F06ADDC895220E4&elq=ea8a60043b524a42a11288938e477680&elqaid=4313&elqat=1&elqak=8AF573BDE9C3B5793D9EA9B524E85C506B39C4136606996B3F39F877F0878B2701BD |
Date added | May 9, 2025, 10:01 p.m. |
---|---|
Source | Blackhat |
Subjects |
|
Venue | May 29, 2025, midnight - May 29, 2025, midnight |