Course Overview

Generative AI and Large Language Models (LLMs) are transforming industries with their groundbreaking capabilities, but their adoption has also introduced significant security concerns. From adversarial attacks to privacy vulnerabilities and biases, securing these systems requires innovative tools and strategies that go beyond traditional approaches. 

This course offers an in-depth introduction to PyRIT (Python Risk Identification Tool), a cutting-edge framework designed to simplify and automate red teaming for generative AI systems. PyRIT enables security professionals and AI practitioners to uncover vulnerabilities, identify attack vectors, and enhance the overall security of their AI models. Throughout this hands-on course, you will gain a solid foundation in Generative AI and LLMs, including their structure, training processes, and optimization techniques, explore the risks and attack surfaces unique to LLMs, such as prompt injections, jailbreak attempts, and bias exploitation, learn to install, configure, and utilize PyRIT, while understanding its architecture and components like datasets, scoring mechanisms, and attack strategies, work through practical exercises to test models, orchestrate attacks, evaluate responses, and refine your security assessments, Discover effective strategies for mitigating risks, enhancing AI defenses, and ensuring ethical AI practices and develop actionable insights by creating detailed red teaming reports and understanding future challenges in AI security. 

By the end of the course, you’ll be equipped to use PyRIT for evaluating and securing generative AI models, helping organizations build more robust and resilient AI systems. Designed for AI/ML engineers, penetration testers, and security professionals, this course is an essential step toward mastering AI security in today’s evolving landscape. 

What You Will Learn

  • Understand the architecture and inner workings of Generative AI and Large Language Models (LLMs) including their vulnerabilities and potential security risks.
  • Use PyRIT an open-source automation framework to conduct red teaming activities and identify security threats in AI models.
  • Apply various attack strategies such as prompt injections, jailbreaking and multi-turn malicious prompts to assess vulnerabilities in LLMs.
  • Implement best practices for mitigating risks in Generative AI models , ensuring privacy and addressing bias and security concerns.
  • Create comprehensive red teaming reports and actionable insights to enhance the security and integrity of AI systems.

Program Curriculum

  • Chapter 1 Common LLM Vulnerabilities and Attack Vectors
  • Hands-on Lab: Setting Up Your PyRIT Environment
  • Microsoft's AI Red Teaming Framework Overview
  • PyRIT Architecture and Components
  • Understanding the AI Security Landscape
  • Quiz

  • Chapter 2 Attack Primitives and Classifications
  • Hands-on Lab: Implementing Basic Attacks
  • Psychology of LLM Manipulation
  • PyRIT Components Deep Dive
  • Understanding Attack Chains
  • Quiz

  • Chapter 3 Advanced Feature Implementation
  • Component Integration Strategies
  • Hands-on Lab: Building Custom Orchestrators
  • Managing Complex Attack States
  • Orchestrator Architecture and Patterns
  • Quiz

  • Chapter 4 Cross-domain and Transfer Attacks
  • GCG and nanoGCG Optimization Techniques
  • Hands-on Lab: Advanced Attack Implementation
  • PAIR and TAP Attack Methodologies
  • Quiz

  • Chapter 5 Attack Success Measurement
  • Deep Dive 1
  • Deep Dive 2
  • Deep Dive 3
  • Deep Dive 4
  • Defensive Analysis
  • Hands-on Lab: Comprehensive Security Reports
  • Security Report Creation
  • Security Scoring Systems
  • Quiz
Load more modules

Instructor

David Pierce

A seasoned AI/ML expert and Enterprise Architect, David has led AI/ML and data strategy initiatives at Fortune-ranked organizations, overseeing teams of Principal Engineers and Architects. With a deep focus on integrating Discriminative Machine Learning, Deep Reinforcement Learning, and Generative AI, he has authored key software contributions, frameworks for model interpretability, and risk management strategies across hybrid cloud platforms. David is recognized for pioneering Federated Knowledge Graphs, AI/ML observability, and low-code/no-code integrations, driving impactful solutions for scalable AI/ML systems. His thought leadership includes published white papers and speaking engagements at global conferences, including the ML for DevOps Summit and IAI ‘Leaders in AI Summit’. He is also the primary author of a forthcoming arXiv publication.

Join over 1 Million professionals from the most renowned Companies in the world!

certificate

Empower Your Learning with Our Flexible Plans

Invest in your future with our flexible subscription plans. Whether you're just starting out or looking to enhance your expertise, there's a plan tailored to meet your needs. Gain access to in-demand skills and courses for your continuous learning needs.

Monthly Plans
Annual Plans
Save 20% with our annual plans!

Pro

Ideal for continuous learning, offering video-based learning with 700+ courses and diverse Learning Paths to enhance your skills.

$ 69.00
Billed monthly or $499.00 billed annually

What is included

  • 700+ Premium Short Courses
  • 50+ Structured Learning Paths
  • Validation of Completion with all courses and learning paths
  • New Courses added every month
Early Access Offer

Pro +

Experience immersive learning with Practice Labs, CTF Challenges, and exclusive EC-Council certifications for comprehensive skill-building.

$ 79.00
Billed monthly or $699.00 billed annually

Everything in Pro and

  • 800+ Practice Lab exercises with guided instructions
  • 150+ CTF Challenges with detailed walkthroughs
  • New Practice Labs and Challenges added every month
  • ⁠⁠3 Official EC-Council Essentials Certifications¹ (retails at $897!)
    Exclusive Bonus with Annual Plans

¹This plan includes Digital Forensics Essentials (DFE), Ethical Hacking Essentials (EHE), and Network Defense Essentials (NDE) certifications. No other EC-Council certifications are included.

Related Courses

1 of 8