Certified AI Security Expert (MSec-CAIS)
Build, Break, and Defend Real-World AI Apps, MCP and Agents
Add your email to the mailing list to get the latest updates.
Welcome to this comprehensive journey into AI and LLMs! In this course, you'll learn how to build, secure, and defend AI/LLM-based applications. Let's get started.
Our AI Security Training is a hands-on training focused on learning AI security from first principles and with an engineering mindset. We heavily focus on building a fundamental understanding of how real-world GenAI applications are built, based on our experience working with AI-native engineering teams.
We will use hands-on labs to interact with LLM APIs, then go deep into:
—all with real-world examples and labs.
Once we understand and do labs around these concepts, we will actually go ahead and build our own threat model agent.
We then dive into the offensive component with real-world apps in our labs. Some examples of the labs we cover in the course include:
We will then go over practical techniques, covering both tools and architecture-level thinking on how to secure AI applications:
By the end, you'll know how production-grade GenAI apps are built, how to assess them for risks, and how to provide actionable recommendations.
Chapter 1 - Course Overview
Welcome to the Course! Course Preview.
How to Use This Course and setup your lab
Chapter 2 - From AI to ML to LLMs: Foundations
What is AI? — A quick look at how AI evolved over the decades
AI → ML → GenAI — How we moved from traditional models to today's generative systems
Generative AI Explained — What it is, why it matters, and what makes it different
Understanding LLMs — How large models learn using data, neural networks, and scale
Open Source vs. Closed Source — When to use which, and why the ecosystem matters
Real Example: Inside an LLM-powered customer support chatbot
Where AI Helps Today — Practical use cases across teams and industries
Foundations of AI Security — Why securing ML and LLM systems is becoming critical
Chapter 3 - First App leveraging LLM (Hello world of LLM)
Using this course
Lab Setup
The Hello World of AI: Build Your First LLM App
Hands-on experience creating your first AI application
Chapter 4 - Leveraging Internal Data in LLMs
A Practical Introduction to Embeddings
What are Embeddings?
How Embeddings Help LLMs Understand Internal Data
Chapter 5 - Creating Embeddings
Chapter 6 - RAG 1: Storing Embeddings in Vector Databases and querying them
Storing Embeddings in Vector Databases
Querying Embeddings for AI Applications
Chapter 7 - RAG 2 : Storing Embeddings in Vector Databases and querying them
Project: Internal Security Chatbot using RAG
Combining Embeddings and Vector DBs for Real Use Cases
Chapter 8 - Understanding Langchain Ecosystem
Introduction to Langchain
Introduction to Langsmith
Building with Langchain Tools and Workflows
Chapter 9 - Understanding and integrating LangSmith and Security Considerations in LLM Workflows
Chapter 10 - Agents Series 1 : Intro to Agents
What are Agents? Practical understanding of AI agents and their purpose
How Does an Agent Work? Think, Act, React & Observe, Loop Until Success
Think → Act → React → Observe Cycle is key to agent intelligence and autonomy
Chapter 11 - Agents Series 2 : Tools/Function Calls
What Are Tools/Function Calls? Interfaces the agent uses to perform specific actions
Examples of Tools: Nmap, ExploitDB, Custom APIs
Role in Agents: Tools execute the "Act" phase by carrying out specific tasks
Chapter 12 - Agents Series 3 : Digging deeper into Tools/Function Calls using Langsmith
Chapter 13 - Agents Series 4 : Building your own Web Security Scanner Agent
Overview: Build a simple Python-based agent integrated with an LLM
Agent Capabilities: Think, Act, Observe & React
Project: Build a web scanning agent that uses tools like Nmap with the LLM orchestrating tasks
Goal: Understand how agents coordinate LLM intelligence with tool execution to automate complex workflows
Chapter 14 - Building our own AI Security Tool
Understanding the Architecture of the Tool
Components, Data Flow, and Design Patterns
Chapter 15 - Getting hands on and implemeting the tool
Implementing the LLM Security Tool
Coding, Integration, and Security Considerations
Chapter 16 - Attacks across LLM Ecosystem
Dive into Practical Attacks Across LLM Ecosystems
Understanding the Threat Landscape
Common Attacks on LLMs and AI Systems
Why Understanding These Attacks Is Vital
Chapter 17 - Prompt Injection : We will learn how it works using an Essay AI App we built for this lab
Hands-on: Prompt Injection Using an Essay AI App
Learn how attackers manipulate prompts to alter AI behavior
Chapter 18 - Sensitive Information Disclosure : Attacking AI Support Bot
Hands-on: Attacking an AI Support Bot
Understand how sensitive data leaks from poorly secured AI systems
Chapter 19 - Indirect Prompt Injection : Personal Assistant AI Bot
Hands-on: Personal Assistant AI Bot Exploit
Explore attacks using indirect vectors like embedded user inputs
Chapter 20 - AI Supply Chain Security: Understanding the Attack Vector
Exploring AI Supply Chain Architecture
Model Distribution and Local Deployment
Vulnerability Analysis: How Model Backdooring Exploits the Supply Chain
Chapter 21 - AI Supply Chain Security: Hands-On Attack Demonstration
Downloading and Querying a Verified Safe Model
Attack Implementation: Injecting a Backdoor into the Model
Deep Dive Analysis: Understanding the Complete Attack Lifecycle and Exploitation Process
Chapter 22 - Introduction to MCP
What is MCP?
What Problems Does MCP Solve?
Building Your Own MCP Server – Step-by-step
Attacking MCP Servers – Common Techniques
Defending Against MCP Vulnerabilities
Chapter 23 - MCP In Depth : Building your own MCP Server
Chapter 24 - MCP Security : Offensive Part 1
Chapter 25 - MCP Security : Offensive Part 2
Chapter 26 - MCP Security : Defense/ Guardrails
Chapter 27 - Introduction to LLM Defense
Overview of Defensive Mechanisms
Why Defense Needs to Be Multi-Layered
Concepts: Prompt Sanitization, Model Monitoring, Robust Prompt Design
Chapter 28 - Defending Prompt Injections (Part 1)
Chapter 29 - Defending Prompt Injections (Part 2)
Chapter 30 - Defending Sensitive Information Disclosure
Strategies and Tools for Input/Output Control
Validating Prompts and Managing AI Responses
Real-World Examples and Patterns to Harden Systems
Chapter 31 - Mitigating Model Attacks
Defending Against Backdoors and Poisoning
Techniques: Model Vetting, Anomaly Detection, Secure Training Practices, Third-Party Model Risk Assessment
Thank you!
Course Feedback
We tried our best to build a comprehensive AI security training course with a hands-on and practical approach, making it ideal for those pursuing an artificial intelligence security certification.
No Prior AI/LLM Knowledge Required: Start from the basics and build a strong foundation in AI and the LLM ecosystem.
Core AI Concepts: Gain essential knowledge of AI fundamentals before exploring the security landscape.
Practical AI Security Attacks: Learn about real-world attacks targeting AI and LLM-based applications.
Security Reviews and Threat Modeling: Master the process of conducting security reviews and creating effective threat models for AI systems.
Defensive Strategies: Implement practical defense and detection techniques to secure your AI applications.
Harish Ramadoss is an experienced Cybersecurity Professional with a Software Engineering background and several years of expertise in Product Security, Red Teaming, and Security Research.
He is the Co-Founder of Camolabs.io, where the team built an open-source deception platform.
In the past, he has presented at conferences including Black Hat, DEF CON, HITB, and several others worldwide.
Yes - You will receive a certificate of completion
You can reach us on [email protected] if you have any issues.
This course is ideal for security engineers, developers, cybersecurity professionals, and technical leaders who want to build and secure AI/LLM-based systems.
No prior experience is required. The course starts from absolute basics and is beginner-friendly, making it accessible even if you're new to AI or LLM space.
You’ll work on projects like building an end-to-end security tool using LLMs, performing real-world AI attacks, and developing an internal RAG-based chatbot.
You’ll learn about prompt injection, model poisoning, backdoors, sensitive data leaks, and corresponding defensive techniques such as validation, monitoring, and model vetting.
Absolutely. The course provides foundational knowledge and hands-on experience that align with the current needs of AI Security roles.
Yes, it’s a self-paced course. You can move through the modules/labs at your own speed based on your schedule and comfort level.