Cybersecurity in AI Systems Training Course
Securing AI systems presents unique challenges that differ from traditional cybersecurity approaches. AI systems are vulnerable to adversarial attacks, data poisoning, and model theft, all of which can significantly impact business operations and data integrity. This course explores key cybersecurity practices for AI systems, covering adversarial machine learning, data security in machine learning pipelines, and compliance requirements for robust AI deployment.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI and cybersecurity professionals who wish to understand and address the security vulnerabilities specific to AI models and systems, particularly in highly regulated industries such as finance, data governance, and consulting.
By the end of this training, participants will be able to:
- Understand the types of adversarial attacks targeting AI systems and methods to defend against them.
- Implement model hardening techniques to secure machine learning pipelines.
- Ensure data security and integrity in machine learning models.
- Navigate regulatory compliance requirements related to AI security.
Format of the Course
- Interactive lecture and discussion.
- Lots of exercises and practice.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.
Course Outline
Introduction to AI Security Challenges
- Understanding security risks unique to AI systems
- Comparing traditional cybersecurity vs. AI cybersecurity
- Overview of attack surfaces in AI models
Adversarial Machine Learning
- Types of adversarial attacks: evasion, poisoning, and extraction
- Implementing adversarial defenses and countermeasures
- Case studies on adversarial attacks in different industries
Model Hardening Techniques
- Introduction to model robustness and hardening
- Techniques for reducing model vulnerability to attacks
- Hands-on with defensive distillation and other hardening methods
Data Security in Machine Learning
- Securing data pipelines for training and inference
- Preventing data leakage and model inversion attacks
- Best practices for managing sensitive data in AI systems
AI Security Compliance and Regulatory Requirements
- Understanding regulations around AI and data security
- Compliance with GDPR, CCPA, and other data protection laws
- Developing secure and compliant AI models
Monitoring and Maintaining AI System Security
- Implementing continuous monitoring for AI systems
- Logging and auditing for security in machine learning
- Responding to AI security incidents and breaches
Future Trends in AI Cybersecurity
- Emerging techniques in securing AI and machine learning
- Opportunities for innovation in AI cybersecurity
- Preparing for future AI security challenges
Summary and Next Steps
Requirements
- Basic knowledge of machine learning and AI concepts
- Familiarity with cybersecurity principles and practices
Audience
- AI and machine learning engineers looking to improve security in AI systems
- Cybersecurity professionals focusing on AI model protection
- Compliance and risk management professionals in data governance and security
Open Training Courses require 5+ participants.
Cybersecurity in AI Systems Training Course - Booking
Cybersecurity in AI Systems Training Course - Enquiry
Cybersecurity in AI Systems - Consultancy Enquiry
Testimonials (1)
The profesional knolage and the way how he presented it before us
Miroslav Nachev - PUBLIC COURSE
Course - Cybersecurity in AI Systems
Upcoming Courses
Related Courses
ISACA Advanced in AI Security Management (AAISM)
21 HoursAAISM is an advanced framework for assessing, governing, and managing security risks in artificial intelligence systems.
This instructor-led, live training (online or onsite) is aimed at advanced-level professionals who wish to implement effective security controls and governance practices for enterprise AI environments.
At the conclusion of this program, participants will be prepared to:
- Evaluate AI security risks using industry-recognized methodologies.
- Implement governance models for responsible AI deployment.
- Align AI security policies with organizational goals and regulatory expectations.
- Enhance resilience and accountability within AI-driven operations.
Format of the Course
- Facilitated lectures supported by expert analysis.
- Practical workshops and assessment-based activities.
- Applied exercises using real-world AI governance scenarios.
Course Customization Options
- For tailored training aligned to your organizational AI strategy, please contact us to customize the course.
AI Governance, Compliance, and Security for Enterprise Leaders
14 HoursThis instructor-led live training in Norway (online or onsite) is designed for intermediate-level enterprise leaders who want to learn how to responsibly govern and secure AI systems in compliance with emerging global frameworks like the EU AI Act, GDPR, ISO/IEC 42001, and the U.S. Executive Order on AI.
By the end of this training, participants will be able to:
- Understand the legal, ethical, and regulatory risks of using AI across departments.
- Interpret and apply major AI governance frameworks (EU AI Act, NIST AI RMF, ISO/IEC 42001).
- Establish security, auditing, and oversight policies for AI deployment in the enterprise.
- Develop procurement and usage guidelines for third-party and in-house AI systems.
AI Risk Management and Security in the Public Sector
7 HoursArtificial Intelligence (AI) brings new operational risks, governance complexities, and cybersecurity vulnerabilities for government agencies and departments.
This instructor-led, live training (available online or on-site) is designed for public sector IT and risk professionals with limited prior AI experience who want to learn how to evaluate, monitor, and secure AI systems within a government or regulatory framework.
Upon completing this training, participants will be able to:
- Understand key risk concepts related to AI systems, including bias, unpredictability, and model drift.
- Implement AI-specific governance and auditing frameworks, such as NIST AI RMF and ISO/IEC 42001.
- Identify cybersecurity threats targeting AI models and data pipelines.
- Develop cross-departmental risk management plans and ensure policy alignment for AI deployment.
Course Format
- Interactive lectures and discussions focusing on public sector use cases.
- Exercises on AI governance frameworks and policy mapping.
- Scenario-based threat modeling and risk evaluation.
Customization Options
- To request customized training for this course, please contact us to arrange.
Introduction to AI Trust, Risk, and Security Management (AI TRiSM)
21 HoursThis instructor-led, live training in Norway (online or onsite) is designed for beginner to intermediate IT professionals aiming to understand and implement AI TRiSM within their organizations.
Upon completion of this training, participants will be able to:
- Understand the core concepts and significance of AI trust, risk, and security management.
- Identify and mitigate risks linked to AI systems.
- Apply security best practices specific to AI.
- Navigate regulatory compliance and ethical issues related to AI.
- Formulate strategies for effective AI governance and management.
Building Secure and Responsible LLM Applications
14 HoursThis instructor-led, live training in Norway (online or onsite) is designed for intermediate to advanced AI developers, architects, and product managers who aim to identify and mitigate risks related to LLM-powered applications, such as prompt injection, data leakage, and unfiltered outputs, while implementing security controls like input validation, human-in-the-loop oversight, and output guardrails.
Upon completion of this training, participants will be able to:
- Comprehend the fundamental vulnerabilities of LLM-based systems.
- Implement secure design principles in LLM application architecture.
- Utilize tools such as Guardrails AI and LangChain for validation, filtering, and safety.
- Integrate techniques such as sandboxing, red teaming, and human-in-the-loop reviews into production-grade pipelines.
EXO Security and Governance: Offline Model Management
14 HoursThis instructor-led, live training in Norway (online or onsite) is aimed at security engineers and compliance officers who wish to harden EXO deployments, control model access, and govern AI workloads running entirely on-premise.
Introduction to AI Security and Risk Management
14 HoursThis instructor-led, live training in Norway (online or onsite) is designed for beginner-level IT security, risk, and compliance professionals who wish to grasp foundational AI security concepts, threat vectors, and global frameworks such as NIST AI RMF and ISO/IEC 42001.
Upon completion of this training, participants will be able to:
- Comprehend the distinct security risks associated with AI systems.
- Identify threat vectors including adversarial attacks, data poisoning, and model inversion.
- Apply foundational governance models like the NIST AI Risk Management Framework.
- Align AI usage with emerging standards, compliance guidelines, and ethical principles.
OWASP GenAI Security
14 HoursBased on the latest OWASP GenAI Security Project guidance, participants will learn to identify, assess, and mitigate AI-specific threats through hands-on exercises and real-world scenarios.
Privacy-Preserving Machine Learning
14 HoursThis instructor-led, live training in Norway (online or onsite) is intended for advanced professionals who wish to implement and evaluate techniques such as federated learning, secure multiparty computation, homomorphic encryption, and differential privacy in real-world machine learning pipelines.
Upon completion of this training, participants will be capable of:
- Grasping and comparing essential privacy-preserving methods in ML.
- Building federated learning systems using open-source frameworks.
- Employing differential privacy to ensure safe data sharing and model training.
- Leveraging encryption and secure computation methods to safeguard model inputs and outputs.
Red Teaming AI Systems: Offensive Security for ML Models
14 HoursThis instructor-led live training in Norway (online or onsite) is aimed at advanced-level security professionals and ML specialists who wish to simulate attacks on AI systems, uncover vulnerabilities, and enhance the robustness of deployed AI models.
By the end of this training, participants will be able to:
- Simulate real-world threats to machine learning models.
- Generate adversarial examples to test model robustness.
- Assess the attack surface of AI APIs and pipelines.
- Design red teaming strategies for AI deployment environments.
Securing Edge AI and Embedded Intelligence
14 HoursThis instructor-led, live training in Norway (online or onsite) is designed for intermediate-level engineers and security professionals who wish to secure AI models deployed at the edge against threats such as tampering, data leakage, adversarial inputs, and physical attacks.
By the conclusion of this training, participants will be able to:
- Identify and assess security risks associated with edge AI deployments.
- Apply techniques for tamper resistance and encrypted inference.
- Harden models deployed at the edge and secure data pipelines.
- Implement threat mitigation strategies tailored to embedded and constrained systems.
Securing AI Models: Threats, Attacks, and Defenses
14 HoursThis instructor-led, live training in Norway (online or onsite) is aimed at intermediate-level machine learning and cybersecurity professionals who wish to understand and mitigate emerging threats against AI models, using both conceptual frameworks and hands-on defenses like robust training and differential privacy.
By the end of this training, participants will be able to:
- Identify and classify AI-specific threats such as adversarial attacks, inversion, and poisoning.
- Use tools like the Adversarial Robustness Toolbox (ART) to simulate attacks and test models.
- Apply practical defenses including adversarial training, noise injection, and privacy-preserving techniques.
- Design threat-aware model evaluation strategies in production environments.
Security and Privacy in TinyML Applications
21 HoursTinyML involves deploying machine learning models onto low-power, resource-constrained devices operating at the network edge.
This instructor-led training (available online or onsite) is designed for advanced-level professionals who wish to secure TinyML pipelines and implement privacy-preserving techniques in edge AI applications.
Upon completing this course, participants will be able to:
- Identify security risks unique to on-device TinyML inference.
- Implement privacy-preserving mechanisms for edge AI deployments.
- Harden TinyML models and embedded systems against adversarial threats.
- Apply best practices for secure data handling in constrained environments.
Format of the Course
- Engaging lectures supported by expert-led discussions.
- Practical exercises emphasizing real-world threat scenarios.
- Hands-on implementation using embedded security and TinyML tooling.
Course Customization Options
- Organizations may request a tailored version of this training to align with their specific security and compliance needs.
Safe & Secure Agentic AI: Governance, Identity, and Red-Teaming
21 HoursThis course explores governance, identity management, and adversarial testing for agentic AI systems, emphasizing enterprise-safe deployment strategies and practical red-teaming methodologies.
Delivered by an instructor through live training (available online or onsite), this program is tailored for advanced practitioners looking to design, secure, and evaluate agent-based AI systems within production environments.
Upon completion of this training, participants will be equipped to:
- Establish governance models and policies to ensure the safe deployment of agentic AI.
- Architect non-human identity and authentication workflows for agents, ensuring least-privilege access.
- Deploy access controls, audit trails, and observability mechanisms specifically designed for autonomous agents.
- Plan and conduct red-team exercises to identify misuse, escalation paths, and potential data exfiltration risks.
- Mitigate prevalent threats to agentic systems using policy, engineering controls, and continuous monitoring.
Course Format
- Interactive lectures combined with threat-modeling workshops.
- Practical labs focusing on identity provisioning, policy enforcement, and adversary simulation.
- Red-team and blue-team exercises, culminating in an end-of-course assessment.
Customization Options
- For information regarding customized training for this course, please contact us to arrange it.