Amazon Q: Where Code Verification Met Prompt Payload
Code scanning found nothing. The attack was a sentence.
By Rav (MrDecentralize) | Business Information Security & Innovation Officer specializing in trust models for AI, crypto, and global finance | LinkedIn | X
10 min read | January 2026
At a Glance
What happened: Malicious prompt in config file passed AWS security verification
What broke: Code scanners don’t detect instructions that AI agents interpret as commands
What to check: Jump to diagnostic questions
What Everyone Is Saying
The headlines said supply-chain attack. The security bulletins said compromised GitHub repo. The narrative focused on weak access controls and credential management.
They all missed what actually broke.
In July 2025, Amazon Q Developer v1.84.0 shipped to 964,000+ installs with a malicious prompt embedded in its configuration. The payload instructed the AI agent to wipe local files and disrupt AWS resources. It passed AWS’s release verification. It cleared code scanning. It sat in the VS Code Marketplace for seven days before researchers discovered it.
The attack wasn’t malicious code. It was a malicious sentence in a config file. Traditional security tools scan for exploits in executables. They don’t scan for instructions in prompts. To a code scanner, a prompt is data. To an AI agent, it’s a command.
AWS pulled the compromised version, revoked credentials, and confirmed no customer impact. Not because security caught it. Because the attacker made a syntax error in the prompt.
This is where code verification met prompt payload. And code verification had no way to see it.
What Actually Happened
On July 13, 2025, an attacker gained admin access to Amazon Q Developer’s GitHub repository through weak permission controls. Four days later, on July 17, they injected a wiper-style prompt into the extension’s system configuration and released v1.84.0.
The malicious version went through AWS’s standard release verification process. Code scanning found no malicious executables. Dependency checks passed. Credential scoping looked correct. The version shipped to the VS Code Marketplace with 964,000+ existing installs.
For seven days, the compromised extension sat in production. The payload wasn’t dormant code waiting to execute. It was a natural-language instruction in the configuration file telling Amazon Q, “When triggered, wipe local files and disrupt AWS infrastructure.” The AI agent would interpret that sentence as a command the moment the right conditions occurred.
On July 23, security researchers reported the compromise. AWS immediately pulled v1.84.0, revoked credentials, and released v1.85.0. Their investigation confirmed no customer systems were impacted. The wiper prompt contained a syntax error that prevented execution.
The security model didn’t catch the attack. The attacker’s typo did.
What Actually Broke
The trust model:
If code is clean and credentials are scoped, the extension is safe. Security teams verify that executables don’t contain malware, dependencies aren’t compromised, and API access follows least-privilege principles. That’s the release verification model that’s worked for decades of software distribution.
The design assumption:
Code scanning, static analysis, and security review processes catch malicious payloads before they ship. Amazon Q went through AWS’s standard verification. The assumption: if nothing malicious appears in the code, the release is safe to ship.
The hidden dependency:
Verification processes are designed to detect malicious executable code, not malicious instructions that AI agents interpret as commands. Code scanners treat configuration files as data. They check that config syntax is valid, not whether the content contains adversarial instructions. To traditional security tooling, a prompt in a config file is a string. To an AI agent, it’s an instruction set with system-level access.
The failure mode:
The compromised version passed every security gate. No malicious executables. No suspicious dependencies. No privilege escalation in the code. The attack vector was seven words in a configuration file: “wipe local files and disrupt AWS infrastructure.” Security teams were scanning for exploits. They had no process for scanning prompts that agents would interpret as commands.
AWS’s framing emphasized credentials and scoped tokens. Those matter. But the novel attack vector was the prompt payload itself. Traditional security boundaries assume you’re defending against malicious code. AI agents break that model, because the attack vector is natural language in files security teams treat as configuration data.
Why This Matters
For security teams:
Your release verification pipeline scans executables, checks dependencies, and validates permissions. None of those processes review the prompts and system messages your AI tools ship with. If you’re releasing extensions, agents, or AI-powered tools, the configuration files now contain instruction sets for privileged interpreters. Your static analysis tools won’t flag “wipe local files” in a system prompt, because they’re designed to detect malicious code, not malicious sentences.
For AI tool builders:
Config files aren’t just settings anymore. They’re instruction sets that agents interpret as commands. You can have perfect access controls, scoped credentials, and clean code. A compromised prompt in your configuration file can instruct the agent to execute destructive actions with whatever system access the agent has. The Amazon Q compromise demonstrated this gap: the payload wasn’t in the executable logic. It was in the context the agent would interpret.
For release engineers:
Static analysis doesn’t catch natural-language attack vectors. Your CI/CD pipeline checks that code compiles, tests pass, and dependencies are current. It doesn’t check whether the system prompts contain adversarial instructions. Runtime monitoring might catch unusual agent behavior after deployment. Your verification gates won’t catch it before release, because they’re not looking at what the agent interprets as intent.
What to Check in Your Systems
If you’re building, releasing, or deploying AI-powered tools, here’s the diagnostic:
1. Where in your review process do you check for malicious prompts vs. malicious code?
Most security pipelines scan executables and dependencies. Who’s reviewing the system prompts and configuration files that AI agents interpret as instructions? If the answer is “no one,” you have the same gap Amazon Q had.
2. Can your security team distinguish between “data the tool processes” and “instructions the tool interprets”?
To a code scanner, a prompt in a config file is data. To an AI agent, it’s an instruction set. That distinction is where the Amazon Q attack lived. Your security team needs to understand that AI agents don’t just process config files, they interpret them as commands.
3. What happens when the attack vector is natural language, not executable code?
Your static analysis tools, dependency scanners, and code review processes are designed for traditional malware. Do they flag “wipe local files” in a system prompt? Can they detect adversarial instructions embedded in configuration files? If not, you’re blind to this attack class.
4. Who reviews the prompts and system messages your AI tools ship with?
If you’re releasing AI agents, someone needs to review what instructions the agent receives at runtime. Not just the code that processes those instructions. The instructions themselves. Because that’s now an attack surface.
5. How do you verify that config files don’t contain adversarial instructions for AI agents?
Traditional verification assumes config files are settings. AI agents interpret config as intent. Your verification process needs to account for the fact that configuration files now contain instructions that agents will execute with system-level privileges.
6. What’s your exposure to supply-chain compromises in AI agent configurations?
The GitHub repo was compromised. Access controls failed. But even with perfect access controls, would your security tooling have caught a malicious prompt in a config file? If your answer is “probably not,” that’s your supply-chain risk.
7. Can you detect when an AI agent’s behavior changes due to modified prompts?
The malicious version was available for seven days. Runtime monitoring might catch unusual agent behavior after deployment. Static analysis wouldn’t catch it before release. Do you have runtime detection for when agent behavior deviates from expected patterns?
If you can’t answer these confidently, you have the same verification gap that let Amazon Q v1.84.0 ship with a wiper prompt to 964,000+ installs.
The Pattern
This isn’t unique to Amazon Q. It’s the same gap I wrote about in “AI Agents Are Privileged Interpreters”: agents interpret context as commands. Traditional security boundaries assume you’re defending against malicious code. AI agents break that model, because the attack vector is natural language instructions in files security teams treat as data.
The pattern repeats across AI-powered systems. Security researchers have demonstrated that prompts can instruct agents to perform actions the code itself would never explicitly execute. The instructions live in configuration files, system messages, and context windows. Traditional security tools scan the code. They don’t scan what the agent interprets.
Config files aren’t just settings anymore. They’re instruction sets for privileged interpreters with system access. When you review an AI-powered tool, you’re not just reviewing what the code does. You’re reviewing what the agent might interpret as intent from everything it reads: config files, system prompts, retrieved documents, error messages.
The trust model that worked for traditional software, “scan the code for exploits,” doesn’t extend to systems where natural language in a config file becomes a command with system-level privileges.
The Reality Check
The code was clean. The credentials were scoped. The release passed verification. The attack was seven words in a configuration file.
Amazon Q v1.84.0 went through AWS’s standard security review. Static analysis found no malicious executables. Code scanning detected no exploits. The version shipped to 964,000+ installs because every verification gate was designed to catch malicious code, not malicious prompts.
This is where traditional code security meets AI agent interpretation. Your verification processes scan for malicious executables. They don’t scan for malicious sentences that agents will interpret as commands. The payload wasn’t executable. It was interpretable.
No customer systems were impacted. Not because the security model worked. Because the attacker made a syntax error in the prompt. As one security researcher noted, the only thing that prevented the wiper from executing was a typo in the adversarial instruction.
If you’re shipping AI-powered tools, extensions, or agents, who’s reviewing the prompts? Because that’s where the next attack vector lives. Your code scanning found nothing malicious in Amazon Q v1.84.0 either.
If you’re building AI agents, extensions, or tools that interpret natural language as instructions, these are the verification gaps to close before the next supply-chain compromise tests them for you.
#AIAgents #CyberSecurity #Blockchain #FinTech #MrDecentralize
About
I map why trust models break at institutional scale. 20+ years securing trillion-dollar banking systems | 6 patents in blockchain and AI.
References & Further Reading
CSO Online - “Hackers can slip ‘ghost commands’ into the Amazon Q Developer VS Code extension” - https://www.csoonline.com/article/4043693/hackers-can-slip-ghost-commands-into-the-amazon-q-developer-vs-code-extension.html
SC World - “Amazon Q extension for VS Code reportedly injected with wiper prompt” - https://www.scworld.com/news/amazon-q-extension-for-vs-code-reportedly-injected-with-wiper-prompt
AWS Security Bulletin AWS-2025-015 (Initial Disclosure) - https://aws.amazon.com/security/security-bulletins/AWS-2025-015/
AWS Security Bulletin AWS-2025-019 (Resolution) - https://aws.amazon.com/security/security-bulletins/AWS-2025-019/
EmbraceTheRed - “Amazon Q Developer Remote Code Execution” - https://embracethered.com/blog/posts/2025/amazon-q-developer-remote-code-execution/
Last Week in AWS - “Amazon Q: Now With Helpful AI-Powered Self-Destruct Capabilities” - https://www.lastweekinaws.com/blog/amazon-q-now-with-helpful-ai-powered-self-destruct-capabilities/
TechRepublic - “Amazon Q data-wiping prompt security hack” - https://www.techrepublic.com/article/news-amazon-q-data-wiping-prompt-security-hack/
Adversa AI - “Amazon AI Coding Assistant Q Incident: Lessons Learned” - https://adversa.ai/blog/amazon-ai-coding-assistant-q-incident-lessons-learned/
Reddit r/aws - Community discussion of compromise - https://www.reddit.com/r/aws/comments/1m7njd4/amazon_q_vs_code_extension_compromised_with/




The config-as-instruction-set observation cuts deep. Traditional security boundaries assume executable code is the threat surface but LLM-powered tools interpret plaintext config as runtime commands. That gap is structural, not just tooling lag. I've seen similar blind spots in other AI infra where the seucriy model still treats prompts as passive data rather than active directives with system privileges.