Offensive AI security research
Original adversarial research on agents, retrieval systems, and AI-augmented production stacks: published, reproducible, and usable as a basis for hardening.
- Sandbox escape and tool-use attacks
- Memory poisoning in persistent agents
- Retrieval pipeline boundary failures
- MCP server abuse patterns
