ETHPragueConf 2025

Hands-On AI Security: Exploring LLM Vulnerabilities and Defenses
05-28, 14:00–14:55 (CET), Workshop

As large language models (LLMs) rapidly integrate into critical systems, securing them against emerging threats is essential. In this session, we will explores real-world vulnerabilities—including prompt injection, data poisoning, model hallucination, and adversarial attacks—and shares practical defense strategies. Attendees will learn how to build effective threat models, apply secure-by-default

Ajayi Stephen is a seasoned Offensive Security and Blockchain Security Specialist with deep expertise in penetration testing, decentralized application (dApp) audits, cryptography, and AI security. Based in London, he has led high-impact security initiatives across traditional and blockchain environments, delivering end-to-end audits, secure architecture reviews, red team operations, and threat modeling across cloud platforms and smart contract ecosystems.