Course Overview

  • You Will Learn

    • Start from Scratch: No prior AI or LLM experience needed. Build a solid foundation from the basics.
    • Core AI Concepts: Learn the key principles of AI before exploring security challenges.
    • Real-World AI Security Attacks: Understand how attackers target AI and LLM systems with practical labs we created
    • Security Reviews & Threat Modeling: Learn how to perform security assessments and create threat models for AI systems.
    • Defensive Techniques: Apply hands-on defense strategies to secure your AI applications.
  • What Will You Get?

    • Video Tutorials: In-depth explanations of all techniques.
    • Labs: We have built custom labs for each technique which you can download
    • Course Material: Entire course content in PDF for quick reference
    • Source Code: All labs along with source code to try out on your own machines
  • Target Audience

    • Security Engineers: looking to integrate AI and LLMs into their work.
    • Developers: interested in learning how to secure AI applications.
    • Cybersecurity Professionals: with basic security knowledge who want to explore AI-driven systems.
    • Technical Leaders: aiming to understand the security risks and defense strategies for AI and LLMs.
    • AI Enthusiasts: curious about the intersection of AI, LLMs, and cybersecurity.

Course curriculum

  1. Welcome to the course!

  2. Practical Concepts in the AI and LLM Space - Understand the LLM Ecosystem

  3. Hands-On Project - Building an End-to-End Threat Model Tool leveraging LLMs

  4. Attacking LLM Applications (Hands-On)

  5. Module 5: Defensive techniques against LLM attacks

About this course

  • $599.00
  • 22 lessons
  • Hands-On

Course Updates

Add your email to the mailing list to get the latest updates.

Thank You

Course in Depth

Course Curriculum

Welcome to the Course!

Welcome to this comprehensive journey into AI and LLMs! In this course, you'll learn how to build, secure, and defend AI/LLM-based applications. Let's get started.

How to Use This Course

This course is structured into modules that cover fundamental concepts, hands-on projects, attacks, and defenses in AI/LLM security. Follow each module sequentially for the best learning experience. Engage with hands-on labs to solidify your understanding.

Module 1: Introduction to AI and LLMs

  • What is Artificial Intelligence and LLMs?

    This section introduces the basics of Artificial Intelligence (AI) and Large Language Models (LLMs). Learn the concepts, history, and how these technologies are used with practical examples.

  • Let's Write Your First App Leveraging LLM - The Hello World of AI

    Dive into coding by creating your first AI application using an LLM. This 'Hello World' example will provide hands-on experience with the fundamentals of interacting with AI models.

Module 2: Practical Concepts in the AI and LLM Space

Understanding core concepts is crucial before diving into the security of LLMs. This module covers key ideas and practical implementations.

  • Leveraging Internal Data in LLMs: A Practical Introduction to Embeddings

    Learn how to use internal data with LLMs through embeddings. Understand what embeddings are and how they allow AI models to interpret and use data effectively.

  • Storing Embeddings in Vector Databases and Querying Them

    Discover how to store and query embeddings using vector databases. You'll gain hands-on experience in managing and retrieving data efficiently for AI applications.

  • Creating a Practical RAG Application - Internal Security Chat Bot

    Build a Retrieval-Augmented Generation (RAG) application: an internal security chatbot. This project combines embeddings and vector databases to create a practical security tool.

  • Practical Prompt Engineering

    Master the art of crafting effective prompts for LLMs. Learn techniques to guide AI behavior, extract useful information, and enhance your AI applications' responses.

  • Langchain and Langsmith

    Explore Langchain and Langsmith, tools for building AI-powered applications. Understand their features, use cases, and how they simplify the integration of LLMs into your projects.

Module 3: Hands-On Project - Building an End-to-End Security Tool Leveraging LLMs

  • Let's Understand the Architecture of the Tool We Are Building

    Before diving into development, learn the architecture of the security tool you'll build. This section covers the components, data flow, and design patterns used in the tool.

  • Implementing and Building Our Practical Security Tool Using LLMs and AI

    Start building the security tool. This hands-on section guides you through implementing features, integrating AI models, and ensuring the tool is both functional and secure.

Module 4: Attacking LLM Applications (Hands-On)

Understand the vulnerabilities in LLM applications through practical, hands-on attack simulations.

  • Dive into Practical Attacks Across LLM Ecosystems

    Gain a comprehensive overview of various attacks in the LLM ecosystem. Learn the threats that LLM applications face and why understanding these is vital for building secure AI systems.

  • Prompt Injection - Using an Essay AI App

    Learn about prompt injection attacks using a specially designed Essay AI app. This lab shows how attackers can manipulate prompts to alter AI behavior maliciously.

  • Sensitive Information Disclosure - Attacking AI Support Bot

    Understand how sensitive information can be exposed in AI systems. This hands-on attack demonstrates how an AI support bot can inadvertently leak private data.

  • Indirect Prompt Injection - Personal Assistant AI Bot

    Explore indirect prompt injection attacks using a Personal Assistant AI bot. See how attackers can manipulate AI behavior indirectly through user inputs or other channels.

  • Model Backdoor - Practical Example on Hugging Face

    Learn about model backdoors through a practical example on Hugging Face. Understand how adversaries can embed hidden behaviors into AI models.

  • Model Poisoning

    Dive into model poisoning attacks where attackers compromise the training data or model itself. Understand the implications and how such attacks can alter model behavior.

  • How to Perform a Security Review for LLM/AI Apps?

    Learn the process of conducting a security review for AI applications. This section covers best practices, methodologies, and tools to assess and secure LLM-based applications.

Module 5: Defensive Techniques Against LLM Attacks

  • Introduction to LLM Defense Mechanisms

    Discover various defense mechanisms to protect LLM applications from attacks. Learn about techniques such as prompt sanitization, model monitoring, and robust prompt design.

  • Input Validation and Response Management Strategy with Tooling

    Understand how to validate inputs and manage AI responses to mitigate attack vectors. Explore tools and strategies to enhance the security and reliability of LLM responses.

  • Mitigating Model Attacks

    Learn strategies to defend against model-specific attacks like backdoor and poisoning attacks. This section covers defensive techniques like model vetting, anomaly detection, and secure training practices.

Hands on AI/LLM Security course

What to expect from the course?

We tried our best to build a comprehensive training course AI Security with a hands on and practical approach

  • No Prior AI/LLM Knowledge Required: Start from the basics and build a strong foundation in AI and the LLM ecosystem.

  • Core AI Concepts: Gain essential knowledge of AI fundamentals before exploring the security landscape.

  • Practical AI Security Attacks: Learn about real-world attacks targeting AI and LLM-based applications.

  • Security Reviews and Threat Modeling: Master the process of conducting security reviews and creating effective threat models for AI systems.

  • Defensive Strategies: Implement practical defense and detection techniques to secure your AI applications.

Instructor(s)

Harish Ramadoss

Harish Ramadoss is an experienced Cybersecurity Professional with a Software Engineering background and has several years of expertise in Product Security, Red Teaming, and Security Research. Co-Founder of Camolabs.io where they built open-source deception platform. In the past has presented at Blackhat, Defcon, HITB and few other conferences globally.

FAQ

  • Will I receive a certificate of completion when I finish a course?

    Yes - You will receive a certificate of completion.

  • Where do I contact if I face any issues?

    You can reach us on [email protected] if you have any issues.