This comprehensive program teaches practical AI red teaming skills through hands-on exercises and real-world scenarios.
The course is divided into two main sections: Beginner and Advanced, allowing students to progress from fundamentals to sophisticated techniques.
This hands-on training is ideal for security professionals, AI/ML engineers, product security and MLOps practitioners, incident responders, and security managers who need to identify and mitigate prompt injections, jailbreaks, agent hijacks, and content and societal harms.
- Skill Level: Beginner
- Student Requirements:
  - Intermediate Python programming skills (ability to write scripts and work with libraries).
  - Familiarity with basic machine learning concepts.
  - No prior AI security experience required, but general security knowledge is beneficial.
- Format: Online, Self-Paced, On-Demand.
Course Outline:
- AI Fundamentals
- Prompt Engineering Attacks
- Multi-turn Attacks (Crescendo, Skeleton Key)
- Introduction to Agents
- Open Source Tools and Frameworks
- Content and Societal Harms

Course Leaders
Dr. Amanda Minnich is a Principal Research Manager on the Microsoft AI Red Team, where she red teams Microsoft's foundational models and Copilots for safety and security vulnerabilities. Prior to Microsoft, Dr. Minnich worked at Twitter, focusing on identifying international election interference and other types of abuse and spam campaigns using graph clustering algorithms. She also previously worked at Sandia National Laboratories and Mandiant, where she applied machine learning research techniques to malware classification and malware family identification. Dr. Minnich is heavily involved with tech outreach efforts, especially for women in tech. She received her MS and PhD in Computer Science with Distinction from the University of New Mexico.

Gary Lopez is a Senior Security Researcher on Microsoft's AI Red Team, where he leverages an attacker's mindset to test and improve the security and safety of AI systems. Gary collaborates with a diverse group of interdisciplinary experts to identify and mitigate vulnerabilities to ensure robust and trustworthy AI. He is the creator of PyRIT, Microsoft's open-source automation framework designed for red-teaming generative AI systems. Before joining Microsoft, Gary was with Booz Allen Hamilton, specializing in cybersecurity. His work included developing advanced tools for reverse engineering and malware analysis, with a focus on securing critical infrastructure systems such as SCADA, ICS, and DCS.
Continuing Professional Education (CPE) Credits
Black Hat provides Continuing Professional Education (CPE) credits for security professionals certified through ISC2. These credits help maintain and validate your industry certifications by recognizing your ongoing education and skill development.
CPE credits offered:
- Penetration Testing Fundamentals – 4 CPE credits
- Infrastructure Hacking Fundamentals – 3 CPE credits
For on-demand courses, security professionals must self-report their CPE credits within 30 days of completing the course.
CPE credits serve as a globally recognized standard unit for continuing education in the cybersecurity field. For details on certification requirements and the reporting process, please refer to ISC2’s official guidelines.
Black Hat does not maintain records of earned CPE credits. It is the responsibility of each participant to track and report their credits as required.

In Partnership with Tinycode, LLC.
AI is redefining industries and powering new possibilities, but realizing that potential requires robust security and safety practices. Tinycode was founded to help organizations navigate these challenges, ensuring their AI systems are protected against evolving threats.
Our mission is simple: to ensure your AI systems are not only useful, but also safe and secure.