Course Curriculum
Welcome to the Course!
Welcome to this comprehensive journey into AI and LLMs! In this course, you'll learn how to build, secure, and defend AI/LLM-based applications. Let's get started.
How to Use This Course
This course is structured into modules that cover fundamental concepts, hands-on projects, attacks, and defenses in AI/LLM security. Follow each module sequentially for the best learning experience. Engage with hands-on labs to solidify your understanding.
Module 1: Introduction to AI and LLMs
-
What is Artificial Intelligence and LLMs?
This section introduces the basics of Artificial Intelligence (AI) and Large Language Models (LLMs). Learn the concepts, history, and how these technologies are used with practical examples.
-
Let's Write Your First App Leveraging LLM - The Hello World of AI
Dive into coding by creating your first AI application using an LLM. This 'Hello World' example will provide hands-on experience with the fundamentals of interacting with AI models.
Module 2: Practical Concepts in the AI and LLM Space
Understanding core concepts is crucial before diving into the security of LLMs. This module covers key ideas and practical implementations.
-
Leveraging Internal Data in LLMs: A Practical Introduction to Embeddings
Learn how to use internal data with LLMs through embeddings. Understand what embeddings are and how they allow AI models to interpret and use data effectively.
-
Storing Embeddings in Vector Databases and Querying Them
Discover how to store and query embeddings using vector databases. You'll gain hands-on experience in managing and retrieving data efficiently for AI applications.
-
Creating a Practical RAG Application - Internal Security Chat Bot
Build a Retrieval-Augmented Generation (RAG) application: an internal security chatbot. This project combines embeddings and vector databases to create a practical security tool.
-
Practical Prompt Engineering
Master the art of crafting effective prompts for LLMs. Learn techniques to guide AI behavior, extract useful information, and enhance your AI applications' responses.
-
Langchain and Langsmith
Explore Langchain and Langsmith, tools for building AI-powered applications. Understand their features, use cases, and how they simplify the integration of LLMs into your projects.
Module 3: Hands-On Project - Building an End-to-End Security Tool Leveraging LLMs
-
Let's Understand the Architecture of the Tool We Are Building
Before diving into development, learn the architecture of the security tool you'll build. This section covers the components, data flow, and design patterns used in the tool.
-
Implementing and Building Our Practical Security Tool Using LLMs and AI
Start building the security tool. This hands-on section guides you through implementing features, integrating AI models, and ensuring the tool is both functional and secure.
Module 4: Attacking LLM Applications (Hands-On)
Understand the vulnerabilities in LLM applications through practical, hands-on attack simulations.
-
Dive into Practical Attacks Across LLM Ecosystems
Gain a comprehensive overview of various attacks in the LLM ecosystem. Learn the threats that LLM applications face and why understanding these is vital for building secure AI systems.
-
Prompt Injection - Using an Essay AI App
Learn about prompt injection attacks using a specially designed Essay AI app. This lab shows how attackers can manipulate prompts to alter AI behavior maliciously.
-
Sensitive Information Disclosure - Attacking AI Support Bot
Understand how sensitive information can be exposed in AI systems. This hands-on attack demonstrates how an AI support bot can inadvertently leak private data.
-
Indirect Prompt Injection - Personal Assistant AI Bot
Explore indirect prompt injection attacks using a Personal Assistant AI bot. See how attackers can manipulate AI behavior indirectly through user inputs or other channels.
-
Model Backdoor - Practical Example on Hugging Face
Learn about model backdoors through a practical example on Hugging Face. Understand how adversaries can embed hidden behaviors into AI models.
-
Model Poisoning
Dive into model poisoning attacks where attackers compromise the training data or model itself. Understand the implications and how such attacks can alter model behavior.
-
How to Perform a Security Review for LLM/AI Apps?
Learn the process of conducting a security review for AI applications. This section covers best practices, methodologies, and tools to assess and secure LLM-based applications.
Module 5: Defensive Techniques Against LLM Attacks
-
Introduction to LLM Defense Mechanisms
Discover various defense mechanisms to protect LLM applications from attacks. Learn about techniques such as prompt sanitization, model monitoring, and robust prompt design.
-
Input Validation and Response Management Strategy with Tooling
Understand how to validate inputs and manage AI responses to mitigate attack vectors. Explore tools and strategies to enhance the security and reliability of LLM responses.
-
Mitigating Model Attacks
Learn strategies to defend against model-specific attacks like backdoor and poisoning attacks. This section covers defensive techniques like model vetting, anomaly detection, and secure training practices.