Course Overview

Generative AI and Large Language Models (LLMs) are transforming industries with their groundbreaking capabilities, but their adoption has also introduced significant security concerns. From adversarial attacks to privacy vulnerabilities and biases, securing these systems requires innovative tools and strategies that go beyond traditional approaches. 

This course offers an in-depth introduction to PyRIT (Python Risk Identification Tool), a cutting-edge framework designed to simplify and automate red teaming for generative AI systems. PyRIT enables security professionals and AI practitioners to uncover vulnerabilities, identify attack vectors, and enhance the overall security of their AI models. Throughout this hands-on course, you will gain a solid foundation in Generative AI and LLMs, including their structure, training processes, and optimization techniques, explore the risks and attack surfaces unique to LLMs, such as prompt injections, jailbreak attempts, and bias exploitation, learn to install, configure, and utilize PyRIT, while understanding its architecture and components like datasets, scoring mechanisms, and attack strategies, work through practical exercises to test models, orchestrate attacks, evaluate responses, and refine your security assessments, Discover effective strategies for mitigating risks, enhancing AI defenses, and ensuring ethical AI practices and develop actionable insights by creating detailed red teaming reports and understanding future challenges in AI security. 

By the end of the course, you’ll be equipped to use PyRIT for evaluating and securing generative AI models, helping organizations build more robust and resilient AI systems. Designed for AI/ML engineers, penetration testers, and security professionals, this course is an essential step toward mastering AI security in today’s evolving landscape. 

What You Will Learn

  • Understand the architecture and inner workings of Generative AI and Large Language Models (LLMs) including their vulnerabilities and potential security risks.
  • Use PyRIT an open-source automation framework to conduct red teaming activities and identify security threats in AI models.
  • Apply various attack strategies such as prompt injections, jailbreaking and multi-turn malicious prompts to assess vulnerabilities in LLMs.
  • Implement best practices for mitigating risks in Generative AI models, ensuring privacy and addressing bias and security concerns.
  • Create comprehensive red teaming reports and actionable insights to enhance the security and integrity of AI systems.

Program Curriculum

  • Understanding the AI Security Landscape
  • Common LLM Vulnerabilities and Attack Vectors
  • Microsoft's AI Red Teaming Framework Overview
  • PyRIT Architecture and Components
  • Hands-on Lab: Setting Up Your PyRIT Environment
  • Chapter 1 Quiz

  • Attack Primitives and Classifications
  • Psychology of LLM Manipulation
  • Understanding Attack Chains
  • PyRIT Components Deep Dive
  • Hands-on Lab: Implementing Basic Attacks
  • Constructing AI Red Teaming Orchestrators with PyRIT - Lab
  • Chapter 2 Quiz

  • Orchestrator Architecture and Patterns
  • Managing Complex Attack States
  • Component Integration Strategies
  • Advanced Feature Implementation
  • Hands-on Lab: Building Custom Orchestrators
  • Advanced AI Red Teaming with PyRIT - Lab
  • Chapter 3 Quiz

  • PAIR and TAP Attack Methodologies
  • GCG and nanoGCG Optimization Techniques
  • Cross-domain and Transfer Attacks
  • Hands-on Lab: Advanced Attack Implementation
  • Orchestrating Hybrid AI Red Team Operations - Lab
  • Chapter 4 Quiz

  • Attack Success Measurement
  • Security Scoring Systems
  • Defensive Analysis
  • Security Report Creation
  • Hands-on Lab: Comprehensive Security Reports
  • Deep Dive 1
  • Deep Dive 2
  • Deep Dive 3
  • Deep Dive 4
  • Chapter 5 Quiz
Load more modules

Instructor

David Pierce

A seasoned AI/ML expert and Enterprise Architect, David has led AI/ML and data strategy initiatives at Fortune-ranked organizations, overseeing teams of Principal Engineers and Architects. With a deep focus on integrating Discriminative Machine Learning, Deep Reinforcement Learning, and Generative AI, he has authored key software contributions, frameworks for model interpretability, and risk management strategies across hybrid cloud platforms. David is recognized for pioneering Federated Knowledge Graphs, AI/ML observability, and low-code/no-code integrations, driving impactful solutions for scalable AI/ML systems. His thought leadership includes published white papers and speaking engagements at global conferences, including the ML for DevOps Summit and IAI ‘Leaders in AI Summit’. He is also the primary author of a forthcoming arXiv publication.

Join over 1 Million professionals from the most renowned Companies in the world!

certificate

Empower Your Learning with Our Flexible Plans

Invest in your future with our flexible subscription plans. Whether you're just starting out or looking to enhance your expertise, there's a plan tailored to meet your needs. Gain access to in-demand skills and courses for your continuous learning needs.

Monthly Plans
Annual Plans
Save 20% with our annual plans!

Pro

Ideal for continuous learning, offering video-based learning with 840+ courses and diverse Learning Paths to enhance your skills.

$ 69.00
Billed monthly or $599.00 billed annually

What is included

  • 840+ Premium Short Courses
  • 70+ Structured Learning Paths
  • Validation of Completion with all courses and learning paths
  • New Courses added every month
Early Access Offer

Pro +

Experience immersive learning with Practice Labs and CTF Challenges for comprehensive skill-building.

$ 79.00
Billed monthly or $699.00 billed annually

Everything in Pro and

  • 1400+ Practice Lab exercises with guided instructions
  • 150+ CTF Challenges with detailed walkthroughs
  • New Practice Labs and Challenges added every month

Related Courses

1 of 50