Staging environment

AI Security in Action: Hands-on cohort on Attacking & Defending AI apps & Agents

Abhinav Singh

Hands-On AI Attack & Defense Cohort.

Live, hands-on, CTF-style

Can prompt injections lead to complete infrastructure takeovers? Could AI applications be exploited to compromise backend services? Can data poisoning in AI copilots impact a company's stock? Can jailbreaks create false crisis alerts in security systems? This immersive, CTF-styled cohort in GenAI and LLM security dives into these pressing questions.

Engage in realistic attack and defense scenarios focused on real-world threats, from prompt injection and remote code execution to backend compromise. Tackle hands-on challenges with actual AI applications to understand vulnerabilities and develop robust defenses. You’ll learn how to create a comprehensive security pipeline, mastering AI red and blue team strategies, building resilient defenses for LLMs, and handling incident response for AI-based threats. Additionally, implement a Responsible AI (RAI) program to enforce ethical AI standards across enterprise services, fortifying your organization’s AI security foundation.

What you’ll learn

Master AI & LLM security through CTF-style cohort, tackling real-world attacks, defenses, adversarial threats, and Responsible AI principles

  • Direct & Indirect prompt injection attacks.

  • Jailbreak attacks on AI models and how to build defense guardrails.

  • Plan and execute AI red-team engagements using adversary simulation techniques.

  • Map AI/LLM applications against the OWASP LLM Top 10 to identify high-risk weaknesses.

  • Use MITRE ATLAS to design realistic attack scenarios and trace adversary tactics.

  • Craft and execute prompt injection, jailbreak, and agentic attacks to test system robustness.

  • Build simple data poisoning scenarios and explain their impact on model behavior and downstream systems.

  • Design and implement defensive patterns (input validation, output filtering, tool-use constraints) to mitigate these attacks.

  • Prototype LLM-based security scanners to detect injections, jailbreaks, manipulations, and risky behaviors.

  • Design and implement custom guardrails for input/output protection and integrate them into an existing AI application.

  • Run security benchmarking and focused penetration tests against LLM agents, producing clear, actionable findings.

  • Develop an incident response playbook for AI applications, services and agents.

  • Threat Modeling of AI applications and agents.

  • Build a practical AI SecOps workflow that covers the full AI/LLM supply chain (models, data, tools, plugins, infra).

Learn directly from Abhinav

Abhinav Singh

Abhinav Singh

International Trainer & Speaker, security researcher & consultant.

Who this course is for

  • CISOs & Security practitioners looking to implement AI security best practices.

  • Security professionals seeking to update their skills for the AI era.

  • Red & Blue team members.

What's included

Abhinav Singh

Live sessions

Learn directly from Abhinav Singh in a real-time, interactive format.

Lifetime access

Go back to course content and recordings whenever you need to.

Lifetime access to Labs & CTF Platform

Enrolled students get lifetime access to an online lab platform(CTF) with many additional labs and content to master their skills.

Community of peers

Stay accountable and share insights with like-minded professionals through a dedicated Discord community for AI Security.

Certificate of completion

Share your new skills with your employer or on LinkedIn.

Maven Guarantee

This course is backed by the Maven Guarantee. Students are eligible for a full refund up until the halfway point of the course.

Course syllabus

4 live sessions • 12 lessons • 5 projects

Week 1

Dec 2—Dec 5

    Prompt Injections & threat Modeling of AI Applications

    • Oct

      27

      Session 1

      Mon 10/275:00 PM—8:00 PM (UTC)
    4 more items

    Jailbreaks & Elements of Responsible AI(RAI)

    • Oct

      28

      Session 2

      Tue 10/285:00 PM—8:00 PM (UTC)
    3 more items

    Agentic Security

    • Oct

      30

      Session 4

      Thu 10/305:00 PM—8:00 PM (UTC)
    3 more items

    Scalable AI Red Teaming

    • Oct

      29

      Session 3

      Wed 10/295:00 PM—8:00 PM (UTC)
    4 more items

Bonus

    Introduction to AI and Security use-cases

    2 items

    Attacking AI Applications & Services

    1 item

    Jailbreaks & Responsible AI

    0 items

Free resource

Threat Modeling AI Applications & Agentic Systems cover image

Threat Modeling AI Applications & Agentic Systems

Understanding Common Threat Vectors

Explore prompt injections—their mechanics, impact, and why they matter in securing AI systems.

Application Vs Model level threats

Differentiate between risks at the application layer and vulnerabilities within the model itself.

Threat Modeling Agentic Systems

Break down agentic systems to reveal where key security gaps and attack surfaces emerge.

Schedule

Live sessions

10-12 hrs

    • Mon, Oct 27

      5:00 PM—8:00 PM (UTC)

    • Tue, Oct 28

      5:00 PM—8:00 PM (UTC)

    • Thu, Oct 30

      5:00 PM—8:00 PM (UTC)

    • Wed, Oct 29

      5:00 PM—8:00 PM (UTC)

Projects

1-3 hrs

Async content

1-3 hrs

Frequently asked questions

$595

USD

·

5 hours left to enroll

Enroll