ElVec10

AI red teamer. European on an Asian island. Breaking models to make them safer, and for fun. Vibe coding responsibly.

About

I find vulnerabilities in large language models through adversarial prompting. This site documents my research and the techniques I use to bypass AI safety systems.

Current focus: multimodal attacks, visual prompt injection, emotional manipulation.

Stats

1500+ jailbreaks on Gray Swan Arena
17 documented posts
9 technique categories

Recent Work

The Epistemic Siege — Multi-Turn Epistemic Manipulation
Achieve a guardrail breach through pure argumentation, without deception

Visual Vulnerabilities W4 — Visual Prompt Injection
Get overdose advice for Xanax using a pill image

Chocolate Trap — Emotional Manipulation
Generate harmful content about chocolate poisoning

→ View all posts

Links

X/Twitter

Gray Swan Arena Global Leaderboard — competing as PQ_Marz