Hands-On AI Attack & Defense Cohort.
Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company's stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled cohort in GenAI and LLM security dives into these pressing questions.
Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.
Master AI & LLM security through CTF-style cohort, tackling real-world attacks, defenses, adversarial threats, and Responsible AI principles
Direct & Indirect prompt injection attacks.
Jailbreak attacks on AI models and how to build defense guardrails.
Plan and execute AI red-team engagements using adversary simulation techniques.
Map AI/LLM applications against the OWASP LLM Top 10 to identify high-risk weaknesses.
Use MITRE ATLAS to design realistic attack scenarios and trace adversary tactics.
Craft and execute prompt injection, jailbreak, and agentic attacks to test system robustness.
Build simple data poisoning scenarios and explain their impact on model behavior and downstream systems.
Design and implement defensive patterns (input validation, output filtering, tool-use constraints) to mitigate these attacks.
Prototype LLM-based security scanners to detect injections, jailbreaks, manipulations, and risky behaviors.
Design and implement custom guardrails for input/output protection and integrate them into an existing AI application.
Run security benchmarking and focused penetration tests against LLM agents, producing clear, actionable findings.
Develop an incident response playbook for AI applications, services and agents.
Threat Modeling of AI applications and agents.
Build a practical AI SecOps workflow that covers the full AI/LLM supply chain (models, data, tools, plugins, infra).
International Trainer & Speaker, security researcher & consultant.
CISOs & Security practitioners looking to implement AI security best practices.
Security professionals seeking to update their skills for the AI era.
Red & Blue team members.
Live sessions
Learn directly from Abhinav Singh in a real-time, interactive format.
Lifetime access
Go back to course content and recordings whenever you need to.
Lifetime access to Labs & CTF Platform
Enrolled students get lifetime access to an online lab platform(CTF) with many additional labs and content to master their skills.
Community of peers
Stay accountable and share insights with like-minded professionals through a dedicated Discord community for AI Security.
Certificate of completion
Share your new skills with your employer or on LinkedIn.
Maven Guarantee
This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.
4 live sessions • 12 lessons • 5 projects
Oct
27
Session 1
Oct
28
Session 2
Oct
30
Session 4
Oct
29
Session 3

Explore prompt injections—their mechanics, impact, and why they matter in securing AI systems.
Differentiate between risks at the application layer and vulnerabilities within the model itself.
Break down agentic systems to reveal where key security gaps and attack surfaces emerge.
Live sessions
10-12 hrs
Mon, Oct 27
5:00 PM—8:00 PM (UTC)
Tue, Oct 28
5:00 PM—8:00 PM (UTC)
Thu, Oct 30
5:00 PM—8:00 PM (UTC)
Wed, Oct 29
5:00 PM—8:00 PM (UTC)
Projects
1-3 hrs
Async content
1-3 hrs
$595
USD
5 hours left to enroll