Black Friday Offer

Black Friday — 15% OFF Code : BLACKFRIDAY-15-MSECAI

We have presented research in the past at various conferences

Speaker presenting AI Security Training at DEFCON
Speaker presenting AI Security Training at HITB
Speaker presenting AI Security Training at Bsides
Speaker presenting AI Security Training at DEFCON

AI Security Course Overview

  • What You Will Learn

    • Beginner-Friendly: No AI background needed. We start from LLM basics, agentic workflows, MCPs, RAG, vector DBs, and the full modern AI stack using real world labs.
    • Attack Labs: Practice real AI/LLM security attacks through guided, hands-on exercises.
    • Security Reviews & Threat Models: Learn to Pentest AI applications and build practical threat models.
    • Defensive Skills: Apply proven techniques to secure AI stack - using LLM Firewall, gateways etc
  • What Will You Get in this AI Security Training?

    • Video Tutorials: In-depth explanations of all techniques to help you understand AI security concepts thoroughly.
    • Labs: Custom-built labs for each technique that you can download and practice at your own pace.
  • Intended Participants

    • Security Engineers: Looking to integrate AI and LLMs into their workflows with a strong focus on security.
    • Developers: Interested in learning how to build and secure AI-powered applications effectively.
    • Penetration Tested/Red Teamers: Who want to understand how to assess AI applications.
    • Technical Leaders: Aiming to understand the risks, vulnerabilities, and defense strategies related to AI and LLM technologies.

AI Security Training Course Curriculum

  1. Certified AI Security Expert Course

About this course

  • $599.00
  • 33 lessons
  • Hands-On

Course Updates

Add your email to the mailing list to get the latest updates.

Thank You

Course in Depth

Hands-On AI Security Training

Welcome to this comprehensive journey into AI and LLMs! In this course, you'll learn how to build, secure, and defend AI/LLM-based applications. Let's get started.


Build

Our AI Security Training is a hands-on training focused on learning AI security from first principles and with an engineering mindset. We heavily focus on building a fundamental understanding of how real-world GenAI applications are built, based on our experience working with AI-native engineering teams.

We will use hands-on labs to interact with LLM APIs, then go deep into:

  • Embeddings
  • VectorDBs
  • RAG
  • Agentic systems
  • MCPs
  • Langsmith
  • Essential tooling around them

—all with real-world examples and labs.

Once we understand and do labs around these concepts, we will actually go ahead and build our own threat model agent.


Break – Offensive techniques, tooling, and threat model AI Stack

We then dive into the offensive component with real-world apps in our labs. Some examples of the labs we cover in the course include:

  • Classic Prompt Injection and Indirect Prompt Injection attacks using our Email Assistant bot
  • Sensitive Information Disclosure and Authorization issues
  • MCP attacks — We will build MCP Servers (Local/Remote), SSE vs stdio, and then go into MCP attacks using custom MCP servers we built
  • Attacks in Agentic architecture
  • Model Backdoors — Real-world backdoor example from Hugging Face; learn how adversaries embed hidden behavior into AI models
  • Threat Model AI Application workflows and how to think about the application layer when combined with LLMs

Defend – Practical tools and techniques

We will then go over practical techniques, covering both tools and architecture-level thinking on how to secure AI applications:

  • Practical defense techniques using our labs:
    • Inline LLM guardrails
    • MCP Gateways for observability and detection
  • Go over each attack we demonstrated and fix them at the App layer or make architecture changes
  • Agentic Security Architecture

By the end, you'll know how production-grade GenAI apps are built, how to assess them for risks, and how to provide actionable recommendations.

  1. Chapter 1 - Course Overview

  2. Welcome to the Course! Course Preview.

  3. How to Use This Course and setup your lab

  4. Chapter 2 - From AI to ML to LLMs: Foundations

  5. What is AI? — A quick look at how AI evolved over the decades

  6. AI → ML → GenAI — How we moved from traditional models to today's generative systems

  7. Generative AI Explained — What it is, why it matters, and what makes it different

  8. Understanding LLMs — How large models learn using data, neural networks, and scale

  9. Open Source vs. Closed Source — When to use which, and why the ecosystem matters

  10. Real Example: Inside an LLM-powered customer support chatbot

  11. Where AI Helps Today — Practical use cases across teams and industries

  12. Foundations of AI Security — Why securing ML and LLM systems is becoming critical

  13. Chapter 3 - First App leveraging LLM (Hello world of LLM)

  14. Using this course

  15. Lab Setup

  16. The Hello World of AI: Build Your First LLM App

  17. Hands-on experience creating your first AI application

  18. Chapter 4 - Leveraging Internal Data in LLMs

  19. A Practical Introduction to Embeddings

  20. What are Embeddings?

  21. How Embeddings Help LLMs Understand Internal Data

  22. Chapter 5 - Creating Embeddings

  23. Chapter 6 - RAG 1: Storing Embeddings in Vector Databases and querying them

  24. Storing Embeddings in Vector Databases

  25. Querying Embeddings for AI Applications

  26. Chapter 7 - RAG 2 : Storing Embeddings in Vector Databases and querying them

  27. Project: Internal Security Chatbot using RAG

  28. Combining Embeddings and Vector DBs for Real Use Cases

  29. Chapter 8 - Understanding Langchain Ecosystem

  30. Introduction to Langchain

  31. Introduction to Langsmith

  32. Building with Langchain Tools and Workflows

  33. Chapter 9 - Understanding and integrating LangSmith and Security Considerations in LLM Workflows

  34. Chapter 10 - Agents Series 1 : Intro to Agents

  35. What are Agents? Practical understanding of AI agents and their purpose

  36. How Does an Agent Work? Think, Act, React & Observe, Loop Until Success

  37. Think → Act → React → Observe Cycle is key to agent intelligence and autonomy

  38. Chapter 11 - Agents Series 2 : Tools/Function Calls

  39. What Are Tools/Function Calls? Interfaces the agent uses to perform specific actions

  40. Examples of Tools: Nmap, ExploitDB, Custom APIs

  41. Role in Agents: Tools execute the "Act" phase by carrying out specific tasks

  42. Chapter 12 - Agents Series 3 : Digging deeper into Tools/Function Calls using Langsmith

  43. Chapter 13 - Agents Series 4 : Building your own Web Security Scanner Agent

  44. Overview: Build a simple Python-based agent integrated with an LLM

  45. Agent Capabilities: Think, Act, Observe & React

  46. Project: Build a web scanning agent that uses tools like Nmap with the LLM orchestrating tasks

  47. Goal: Understand how agents coordinate LLM intelligence with tool execution to automate complex workflows

  48. Chapter 14 - Building our own AI Security Tool

  49. Understanding the Architecture of the Tool

  50. Components, Data Flow, and Design Patterns

  51. Chapter 15 - Getting hands on and implemeting the tool

  52. Implementing the LLM Security Tool

  53. Coding, Integration, and Security Considerations

  54. Chapter 16 - Attacks across LLM Ecosystem

  55. Dive into Practical Attacks Across LLM Ecosystems

  56. Understanding the Threat Landscape

  57. Common Attacks on LLMs and AI Systems

  58. Why Understanding These Attacks Is Vital

  59. Chapter 17 - Prompt Injection : We will learn how it works using an Essay AI App we built for this lab

  60. Hands-on: Prompt Injection Using an Essay AI App

  61. Learn how attackers manipulate prompts to alter AI behavior

  62. Chapter 18 - Sensitive Information Disclosure : Attacking AI Support Bot

  63. Hands-on: Attacking an AI Support Bot

  64. Understand how sensitive data leaks from poorly secured AI systems

  65. Chapter 19 - Indirect Prompt Injection : Personal Assistant AI Bot

  66. Hands-on: Personal Assistant AI Bot Exploit

  67. Explore attacks using indirect vectors like embedded user inputs

  68. Chapter 20 - AI Supply Chain Security: Understanding the Attack Vector

  69. Exploring AI Supply Chain Architecture

  70. Model Distribution and Local Deployment

  71. Vulnerability Analysis: How Model Backdooring Exploits the Supply Chain

  72. Chapter 21 - AI Supply Chain Security: Hands-On Attack Demonstration

  73. Downloading and Querying a Verified Safe Model

  74. Attack Implementation: Injecting a Backdoor into the Model

  75. Deep Dive Analysis: Understanding the Complete Attack Lifecycle and Exploitation Process

  76. Chapter 22 - Introduction to MCP

  77. What is MCP?

  78. What Problems Does MCP Solve?

  79. Building Your Own MCP Server – Step-by-step

  80. Attacking MCP Servers – Common Techniques

  81. Defending Against MCP Vulnerabilities

  82. Chapter 23 - MCP In Depth : Building your own MCP Server

  83. Chapter 24 - MCP Security : Offensive Part 1

  84. Chapter 25 - MCP Security : Offensive Part 2

  85. Chapter 26 - MCP Security : Defense/ Guardrails

  86. Chapter 27 - Introduction to LLM Defense

  87. Overview of Defensive Mechanisms

  88. Why Defense Needs to Be Multi-Layered

  89. Concepts: Prompt Sanitization, Model Monitoring, Robust Prompt Design

  90. Chapter 28 - Defending Prompt Injections (Part 1)

  91. Chapter 29 - Defending Prompt Injections (Part 2)

  92. Chapter 30 - Defending Sensitive Information Disclosure

  93. Strategies and Tools for Input/Output Control

  94. Validating Prompts and Managing AI Responses

  95. Real-World Examples and Patterns to Harden Systems

  96. Chapter 31 - Mitigating Model Attacks

  97. Defending Against Backdoors and Poisoning

  98. Techniques: Model Vetting, Anomaly Detection, Secure Training Practices, Third-Party Model Risk Assessment

  99. Thank you!

  100. Course Feedback

Hands on AI Security course

What to Expect from the AI Security Certification Course?

We tried our best to build a comprehensive AI security training course with a hands-on and practical approach, making it ideal for those pursuing an artificial intelligence security certification.

  • No Prior AI/LLM Knowledge Required: Start from the basics and build a strong foundation in AI and the LLM ecosystem.

  • Core AI Concepts: Gain essential knowledge of AI fundamentals before exploring the security landscape.

  • Practical AI Security Attacks: Learn about real-world attacks targeting AI and LLM-based applications.

  • Security Reviews and Threat Modeling: Master the process of conducting security reviews and creating effective threat models for AI systems.

  • Defensive Strategies: Implement practical defense and detection techniques to secure your AI applications.

Instructor(s)

Harish Ramadoss

Harish Ramadoss is an experienced Cybersecurity Professional with a Software Engineering background and several years of expertise in Product Security, Red Teaming, and Security Research.

He is the Co-Founder of Camolabs.io, where the team built an open-source deception platform.

In the past, he has presented at conferences including Black Hat, DEF CON, HITB, and several others worldwide.

FAQs

  • Will I receive a certificate of completion when I finish a course?

    Yes - You will receive a certificate of completion

  • Where do I contact if I face any issues?

    You can reach us on [email protected] if you have any issues.

  • Who is this AI Security Certification course designed for?

    This course is ideal for security engineers, developers, cybersecurity professionals, and technical leaders who want to build and secure AI/LLM-based systems.

  • Do I need any prior experience in AI or LLMs to take this course?

    No prior experience is required. The course starts from absolute basics and is beginner-friendly, making it accessible even if you're new to AI or LLM space.

  • What hands-on projects are included in the course?

    You’ll work on projects like building an end-to-end security tool using LLMs, performing real-world AI attacks, and developing an internal RAG-based chatbot.

  • What kind of attacks and defenses will I learn?

    You’ll learn about prompt injection, model poisoning, backdoors, sensitive data leaks, and corresponding defensive techniques such as validation, monitoring, and model vetting.

  • Can this course help in preparing for LLM Security training in the industry?

    Absolutely. The course provides foundational knowledge and hands-on experience that align with the current needs of AI Security roles.

  • Can I take the course at my own pace?

    Yes, it’s a self-paced course. You can move through the modules/labs at your own speed based on your schedule and comfort level.