We have presented research in the past at various conferences

Speaker presenting AI Security Training at DEFCON
Speaker presenting AI Security Training at HITB
Speaker presenting AI Security Training at Bsides
Speaker presenting AI Security Training at DEFCON

AI Security Course Overview

  • What You Will Learn In Ai Security Certification course​

    • Start from Scratch: No prior AI or LLM experience needed. We start from the absolute basics of how Gen AI works — covering Agents, Retrieval-Augmented Generation (RAG), and the essential tooling around it.
    • Real-World AI Security Attacks: Understand how attackers target AI and LLM systems through practical labs specifically designed for hands-on learning.
    • Security Reviews & Threat Modeling: Learn how to perform detailed security assessments and develop threat models for AI systems as part of your AI Security Training Course.
    • Defensive Techniques: Apply hands-on defense strategies to strengthen and secure your AI applications against real-world threats.
  • What Will You Get in this AI Security Training?

    • Video Tutorials: In-depth explanations of all techniques to help you understand AI security concepts thoroughly.
    • Labs: Custom-built labs for each technique that you can download and practice at your own pace.
    • Source Code: Full access to all lab source code, allowing you to explore and test everything on your own machines.
  • Intended Participants

    • Security Engineers: Looking to integrate AI and LLMs into their workflows with a strong focus on security.
    • Developers: Interested in learning how to build and secure AI-powered applications effectively.
    • Cybersecurity Professionals: With basic security knowledge who want to explore the unique challenges of AI-driven systems.
    • Technical Leaders: Aiming to understand the risks, vulnerabilities, and defense strategies related to AI and LLM technologies.

AI Security Training Course Curriculum

  1. Certified AI Security Expert Course

About this course

  • $599.00
  • 33 lessons
  • Hands-On

Course Updates

Add your email to the mailing list to get the latest updates.

Thank You

Course in Depth

Hands-On AI Security Training and Certification Course

Welcome to this comprehensive journey into AI and LLMs! In this course, you'll learn how to build, secure, and defend AI/LLM-based applications. Let's get started.

How to Use This Course

This course is structured into modules that cover fundamental concepts, hands-on projects, attacks, and defenses in AI/LLM security. Follow each module sequentially for the best learning experience. Engage with hands-on labs to solidify your understanding.

📘 Chapter 1: Course Welcome

  • Welcome to the Course! Course Preview.
  • How to Use This Course and setup your lab

📘 Chapter 2: What is Artificial Intelligence and LLMs?

  • What is AI? A Brief History
  • Everyday Examples of AI (YouTube, Google Maps, Spell Check, etc.)
  • From Traditional AI to Generative AI
  • What is Generative AI?
  • What are LLMs?
  • Open Source vs Closed Source LLMs
  • Example: Smart Customer Support Chatbot Powered by LLM
  • How Can an AI Bot Help? Use Cases in Support

📘 Chapter 3: Let's Write Your First App Leveraging LLM

  • The Hello World of AI: Build Your First LLM App
  • Hands-on experience creating your first AI application

📘 Chapter 4: Leveraging Internal Data in LLMs

  • A Practical Introduction to Embeddings
  • What are Embeddings?
  • How Embeddings Help LLMs Understand Internal Data

📘 Chapter 5: Storing and Querying Embeddings

  • Storing Embeddings in Vector Databases
  • Querying Embeddings for AI Applications

📘 Chapter 6: Creating a Practical RAG Application

  • Project: Internal Security Chatbot using RAG
  • Combining Embeddings and Vector DBs for Real Use Cases

📘 Chapter 7: Practical Prompt Engineering

  • Principles of Good Prompt Engineering
  • Techniques for Prompt Optimization
  • Prompt Patterns and Anti-patterns

📘 Chapter 8: Langchain and Langsmith

  • Introduction to Langchain
  • Introduction to Langsmith
  • Building with Langchain Tools and Workflows

📘 Chapter 9a: Building Agents

  • What are Agents? Practical understanding of AI agents and their purpose.
  • How Does an Agent Work?
    • 🟡 Think: The agent analyzes the task and plans its course of action.
      "What do I need to do to solve this?"
    • 🔵 Act: The agent executes actions using available tools (APIs, functions, searches).
      "Let me use a tool to get information or complete part of the task."
    • 🟢 React & Observe: The agent evaluates results, updates its plan, and thinks again.
      "Did that work? What should I do next?"
    • 🔁 Loop Until Success: This cycle continues until the agent completes the task.
  • Example Workflow:
    • Task: Reserve Dinner at an Italian Restaurant
    • Agent Action: Web security checks, scans, or queries to fulfill the request
  • Example Agent Components:
    • LLM: Understands task, plans actions
    • Tools: Nmap scanner, ExploitDB, APIs
    • Output: Scan reports, findings, reservation confirmations
  • Think → Act → React → Observe Cycle is key to agent intelligence and autonomy.

📘 Chapter 9b: Tools and Function Calls

  • What Are Tools/Function Calls? Interfaces the agent uses to perform specific actions (e.g., scanning, searching, reserving).
  • Examples of Tools:
    • Nmap: Finds open ports and service versions on target domains
    • ExploitDB: Searches known vulnerabilities
    • Custom APIs: Used to integrate with external systems
  • Role in Agents: Tools execute the "Act" phase by carrying out specific tasks.

📘 Chapter 9c: Building Your Own Agent

  • Overview: Build a simple Python-based agent integrated with an LLM.
  • Agent Capabilities:
    • Think – Understands the task
    • Act – Calls a tool (function or API)
    • Observe & React – Refines next steps based on results
  • Project: Build a web scanning agent that uses tools like Nmap with the LLM orchestrating tasks.
  • Goal: Understand how agents coordinate LLM intelligence with tool execution to automate complex workflows.

📘 Chapter 10: Model Context Protocol (MCP)

  • What is MCP?
  • What Problems Does MCP Solve?
  • Building Your Own MCP Server – Step-by-step
  • Attacking MCP Servers – Common Techniques
  • Defending Against MCP Vulnerabilities

📘 Chapter 11: Security Tool Architecture

  • Understanding the Architecture of the Tool
  • Components, Data Flow, and Design Patterns

📘 Chapter 12: Implementation

  • Implementing the LLM Security Tool
  • Coding, Integration, and Security Considerations

📘 Chapter 13: Attack Surface Overview

  • Dive into Practical Attacks Across LLM Ecosystems
  • Understanding the Threat Landscape
  • Common Attacks on LLMs and AI Systems
  • Why Understanding These Attacks Is Vital

📘 Chapter 14: Prompt Injection

  • Hands-on: Prompt Injection Using an Essay AI App
  • Learn how attackers manipulate prompts to alter AI behavior

📘 Chapter 15: Sensitive Information Disclosure

  • Hands-on: Attacking an AI Support Bot
  • Understand how sensitive data leaks from poorly secured AI systems

📘 Chapter 16: Indirect Prompt Injection

  • Hands-on: Personal Assistant AI Bot Exploit
  • Explore attacks using indirect vectors like embedded user inputs

📘 Chapter 17: Model Backdoor

  • Real-World Backdoor Example from Hugging Face
  • Learn how adversaries embed hidden behavior into AI models

📘 Chapter 18: Model Poisoning

  • Hands-on: Poisoning Attacks and Impacts
  • Understand the risk of compromised training data and its consequences

📘 Chapter 19: Performing Security Reviews

  • How to Perform a Security Review for LLM/AI Apps
  • Tools, Checklists, and Best Practices for Auditing AI Applications

📘 Chapter 20: Introduction to LLM Defense

  • Overview of Defensive Mechanisms
  • Why Defense Needs to Be Multi-Layered
  • Concepts: Prompt Sanitization, Model Monitoring, Robust Prompt Design

📘 Chapter 21: Input Validation and Response Management

  • Strategies and Tools for Input/Output Control
  • Validating Prompts and Managing AI Responses
  • Real-World Examples and Patterns to Harden Systems

📘 Chapter 22: Mitigating Model Attacks

  • Defending Against Backdoors and Poisoning
  • Techniques:
    • Model Vetting
    • Anomaly Detection
    • Secure Training Practices
    • Third-Party Model Risk Assessment

Hands on AI/LLM Security course

What to Expect from the AI Security Certification Course?

We tried our best to build a comprehensive AI security training course with a hands-on and practical approach, making it ideal for those pursuing an artificial intelligence security certification.

  • No Prior AI/LLM Knowledge Required: Start from the basics and build a strong foundation in AI and the LLM ecosystem.

  • Core AI Concepts: Gain essential knowledge of AI fundamentals before exploring the security landscape.

  • Practical AI Security Attacks: Learn about real-world attacks targeting AI and LLM-based applications.

  • Security Reviews and Threat Modeling: Master the process of conducting security reviews and creating effective threat models for AI systems.

  • Defensive Strategies: Implement practical defense and detection techniques to secure your AI applications.

Instructor(s)

Harish Ramadoss

Harish Ramadoss is an experienced Cybersecurity Professional with a Software Engineering background and several years of expertise in Product Security, Red Teaming, and Security Research.

He is the Co-Founder of Camolabs.io, where the team built an open-source deception platform.

In the past, he has presented at conferences including Black Hat, DEF CON, HITB, and several others worldwide.

FAQs

  • Will I receive a certificate of completion when I finish a course?

    Yes - You will receive a certificate of completion

  • Where do I contact if I face any issues?

    You can reach us on [email protected] if you have any issues.

  • Who is this AI Security Certification course designed for?

    This course is ideal for security engineers, developers, cybersecurity professionals, and technical leaders who want to build and secure AI/LLM-based systems.

  • Do I need any prior experience in AI or LLMs to take this course?

    No prior experience is required. The course starts from absolute basics and is beginner-friendly, making it accessible even if you're new to AI or LLM space.

  • What hands-on projects are included in the course?

    You’ll work on projects like building an end-to-end security tool using LLMs, performing real-world AI attacks, and developing an internal RAG-based chatbot.

  • What kind of attacks and defenses will I learn?

    You’ll learn about prompt injection, model poisoning, backdoors, sensitive data leaks, and corresponding defensive techniques such as validation, monitoring, and model vetting.

  • Can this course help in preparing for LLM Security training in the industry?

    Absolutely. The course provides foundational knowledge and hands-on experience that align with the current needs of AI Security roles.

  • Can I take the course at my own pace?

    Yes, it’s a self-paced course. You can move through the modules/labs at your own speed based on your schedule and comfort level.