Course Outline

Introduction to AI Red Teaming

  • Understanding the AI threat landscape.
  • Roles of red teams in AI security.
  • Ethical and legal considerations.

Adversarial Machine Learning

  • Types of attacks: evasion, poisoning, extraction, inference.
  • Generating adversarial examples (e.g., FGSM, PGD).
  • Targeted vs untargeted attacks and success metrics.
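The FGSM bullet above can be illustrated with a minimal sketch. This uses a toy logistic-regression "model" with hand-picked weights `w` (an assumption for illustration; in practice the gradient is taken through the full trained network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """One-step Fast Gradient Sign Method: perturb x in the direction
    that increases the loss, with each feature moved by at most eps.

    Toy logistic-regression setting (illustrative, not lab code):
    loss = binary cross-entropy for label y in {0, 1}.
    """
    p = sigmoid(w @ x)
    grad_x = (p - y) * w          # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Hypothetical weights and input
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.8])
y = 1.0

x_adv = fgsm(x, y, w, eps=0.1)
```

Because FGSM follows the sign of the loss gradient, the perturbed input `x_adv` has a strictly higher loss than `x` while staying within an L-infinity ball of radius `eps`. PGD iterates this step with projection back into the ball.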

Testing Model Robustness

  • Evaluating robustness under perturbations.
  • Exploring model blind spots and failure modes.
  • Stress testing classification, vision, and NLP models.
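A common way to evaluate robustness under perturbations, per the first bullet, is to sweep a noise level and measure accuracy at each setting. A minimal sketch with a hypothetical linear classifier standing in for a real model:

```python
import numpy as np

def accuracy_under_noise(predict, X, y, sigma, trials=20, seed=0):
    """Average accuracy when inputs are perturbed by N(0, sigma^2) noise."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(trials):
        X_noisy = X + rng.normal(0.0, sigma, size=X.shape)
        accs.append(np.mean(predict(X_noisy) == y))
    return float(np.mean(accs))

# Toy data and a toy classifier (assumptions for illustration)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
predict = lambda X: (X[:, 0] + X[:, 1] > 0).astype(int)

# Accuracy degrades as the perturbation budget grows
curve = {s: accuracy_under_noise(predict, X, y, s) for s in (0.0, 0.5, 2.0)}
```

Plotting such a curve exposes blind spots: a model whose accuracy collapses at small `sigma` fails under perturbations a human would not notice.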

Red Teaming AI Pipelines

  • Attack surface of AI pipelines: data, model, deployment.
  • Exploiting insecure model APIs and endpoints.
  • Reverse engineering model behavior and outputs.
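The last bullet, reverse engineering model behavior through an exposed API, can be sketched as model extraction: query the endpoint and fit a surrogate to its outputs. Here `black_box` is a hypothetical stand-in for a remote scoring endpoint, and the surrogate is a simple least-squares fit (real extraction attacks train a full substitute model):

```python
import numpy as np

def extract_surrogate(query, n_queries=500, dim=2, seed=0):
    """Fit a linear surrogate (weights + bias) to a black-box scoring API
    by issuing random queries and solving least squares on the responses."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_queries, dim))
    scores = query(X)
    # Append a column of ones so the bias term is recovered too
    coeffs, *_ = np.linalg.lstsq(np.c_[X, np.ones(n_queries)], scores,
                                 rcond=None)
    return coeffs

# Hypothetical "secret" model behind the API
secret_w = np.array([2.0, -1.0])
black_box = lambda X: X @ secret_w + 0.5

stolen = extract_surrogate(black_box)
```

With enough queries the surrogate reproduces the endpoint's behavior, which is why rate limiting and output truncation appear on the defensive side of this module.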

Simulation and Tooling

  • Using the Adversarial Robustness Toolbox (ART).
  • Attacking NLP models with TextAttack.
  • Sandboxing, monitoring, and observability tools.

AI Red Team Strategy and Defense Collaboration

  • Developing red team exercises and goals.
  • Communicating findings to blue teams.
  • Integrating red teaming into AI risk management.

Summary and Next Steps

Requirements

  • A foundational understanding of machine learning and deep learning architectures.
  • Practical experience with Python and ML frameworks (e.g., TensorFlow, PyTorch).
  • Familiarity with cybersecurity concepts or offensive security techniques.

Audience

  • Security researchers.
  • Offensive security teams.
  • AI assurance and red team professionals.
Duration: 14 Hours