Course Duration
3 Days

Course cost:
£3,200.00

Course Overview

This hands-on three-day course, delivered for private cohorts only, empowers AI engineers, developers, and security professionals to design secure AI systems by mastering AI-specific threat modelling. Using the DICE methodology (Diagramming, Identification, Countermeasures, Evaluation), participants gain practical skills in identifying vulnerabilities, applying countermeasures, and conducting structured security assessments across the AI lifecycle.

Attendees engage in hands-on labs and a red vs. blue team exercise simulating attacks on a rogue AI research assistant. The course addresses real-world AI threats, including prompt injection, data poisoning, adversarial manipulation, and model abuse, while aligning with emerging standards such as the EU AI Act and OWASP Top 10 for LLM applications.


Prerequisites

Participants should have:

  • A basic understanding of AI principles (e.g. neural networks, training processes, common architectures)
  • Familiarity with general security fundamentals

No prior threat modelling experience is required, and pre-course materials are provided to ensure readiness. This course runs only as a private event, for cohorts of eight or more people, ideally from the same organisation.

Target audience

This course is intended for:

  • AI engineers and machine learning developers
  • Software engineers building AI-powered systems
  • Solution architects integrating AI into enterprise applications
  • Security professionals securing AI pipelines and APIs
  • Security architects responsible for risk and compliance in AI deployments

Learning Objectives

By the end of this course, participants will be able to:

  • Model AI threats using the DICE methodology
  • Assess AI components for vulnerabilities and attack surfaces
  • Develop countermeasures for threats like prompt injection, data leakage, and poisoning
  • Integrate threat modelling into AI/ML development pipelines
  • Apply design patterns to build secure, privacy-aware AI systems
  • Conduct risk assessments for AI projects with business and regulatory context
  • Lead security discussions and implement governance frameworks aligned with AI compliance

AI Threat Modelling Certificate Course Content

Day 1: Foundations and methodology

Morning session

  • Welcome and course orientation
  • Introduction to AI-specific security risks and compliance requirements
  • Threat modelling fundamentals in AI system development
  • Comparison of traditional vs. AI threat modelling techniques
  • Hands-on: Future-focused threat mapping exercise
  • AI-DICE framework introduction
  • AI system decomposition and trust boundary identification
  • Data flow diagramming for AI workflows
  • Hands-on: Diagramming an AI assistant's infrastructure

Afternoon session

  • STRIDE-AI threat taxonomy and attack vectors
  • Prompt injection, poisoning, model theft, and misuse scenarios
  • Hands-on: Threat analysis using STRIDE-AI for UrbanFlow
  • Attack tree development for AI use cases
  • Using Mermaid to document and visualise attack paths
  • Hands-on: Threat tree analysis of an autonomous vehicle system
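To give a flavour of this session's attack-tree work, here is a minimal Mermaid sketch of an attack path; the scenario and node names are invented for illustration and are not course material:

```mermaid
graph TD
    A[Goal: Hijack vehicle route planning] --> B[Poison map-update data feed]
    A --> C[Spoof GPS input]
    B --> B1[Compromise update-server credentials]
    C --> C1[Deploy local GPS spoofing transmitter]
```

The course covers building out such trees in full and using them to prioritise countermeasures.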

Day 2: Implementation and defence

Morning session

  • AI attack scenario analysis: prompt injection, data extraction, model manipulation
  • Hands-on: The Curious Chatbot Challenge (LLM-based attack simulation)
  • Overview of AI security libraries and frameworks:

    • OWASP Top 10 for LLMs
    • MITRE ATLAS
    • OWASP AI Exchange
    • MIT AI Risk Library
  • Hands-on: Applying OWASP AI Exchange to RAG-enabled CareBot

Afternoon session

  • Security by design for AI systems:

    • Model lifecycle security
    • Data pipeline protection
    • API and endpoint hardening
  • Hands-on: AI Security Architecture design workshop
  • AI risk assessment methodologies:

    • OWASP risk rating
    • Technical vs. business risk alignment
    • Risk matrices for AI
  • Hands-on: Healthcare robot risk assessment simulation

Day 3: Advanced concepts and practical application

Morning session

  • AI governance and regulatory alignment:

    • Overview of GDPR, EU AI Act, and other standards
    • Ethical frameworks and explainability
    • Bias mitigation and transparency
  • Hands-on: The FairCredit AI ethics incident
  • Privacy and safety in AI design:

    • Privacy-by-design in training and inference
    • Safe agent deployment and secure data handling
  • Hands-on: Data minimisation and AI privacy assessment

Afternoon session

  • Securing MLOps and AI DevSecOps pipelines:

    • Threat modelling integration across lifecycle
    • Security incident handling for AI systems
  • Hands-on: Attack/control mapping in an MLOps architecture
  • Final red/blue team wargame:

    • Simulated attack on a rogue AI research assistant
    • Threat identification, countermeasure design, and defence
    • Group debrief and lessons learned
  • Course wrap-up and certification guidance

Exams and certification

To be awarded the AI Threat Modelling Practitioner Certificate, participants must:

  • Complete all hands-on exercises
  • Create and submit an original AI threat model
  • Pass the final examination, which is taken after the course

Hands-on learning

This course includes:

  • Live, instructor-led sessions and expert mentoring
  • Scenario-based labs targeting real-world AI security use cases
  • Red/blue team simulation for applied learning
  • Ongoing access to pre-course study materials

Upcoming Dates

Dates and locations are available on request. Please contact us for the latest schedule.

Advance Your Career with the AI Threat Modelling Certificate

Gain the skills you need to succeed. Enrol on the AI Threat Modelling Certificate course with Newto Training today.
