Securing AI Using Zero Trust Principles, 1st edition

Published by Cisco Press (May 18, 2026) © 2026

  • Cindy Green-Ortiz
  • Zig Zsiga
  • Saskia Laura Schröer
Products list

Access details

  • Instant access once purchased
  • Fulfilled by VitalSource

Currently unavailable


Title overview

Securing AI Using Zero Trust Principles

Artificial intelligence is reshaping industries, driving innovation in critical sectors such as healthcare, finance, energy, and government. Yet as organizations integrate AI into business operations, they inherit new risks, many of which conventional security models fail to address. Adversaries are weaponizing AI to automate reconnaissance, bypass defenses, and exploit vulnerable systems. The solution is not more trust, but less.

Zero Trust offers a foundational paradigm shift: no identity, device, system, or interaction is inherently trusted. Security must be continuously enforced, context-aware, and resilient by design. This book demonstrates how Zero Trust, when strategically applied to AI environments, enables organizations to secure data pipelines, mitigate emergent threats, and maintain control over evolving digital ecosystems.
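The "deny by default, verify every request" stance behind that paradigm can be sketched in a few lines. This is a minimal illustration only, not an implementation from the book: the names, fields, and policy rules below are hypothetical, and a real deployment would draw on policy engines, device posture services, and telemetry rather than an in-memory allow list.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """One access attempt; every attempt is evaluated, none is grandfathered."""
    user_id: str
    device_compliant: bool   # hypothetical device-posture check result
    mfa_verified: bool       # identity verified for this session
    resource: str            # e.g., "model:inference"

# Explicit, least-privilege grants; anything not listed is denied.
ALLOWED = {("alice", "model:inference")}

def authorize(req: Request) -> bool:
    """Deny by default: identity, device posture, and grant must all hold."""
    if not (req.device_compliant and req.mfa_verified):
        return False
    return (req.user_id, req.resource) in ALLOWED

# A verified user on a compliant device reaches only resources granted to her;
# dropping any single condition denies the request.
print(authorize(Request("alice", True, True, "model:inference")))   # True
print(authorize(Request("alice", True, False, "model:inference")))  # False
```

The design point is that the decision is made per request from current context, not inherited from network location or a prior successful login.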

Key Insights Include:

  • AI Through a Security Lens: Demystifies machine learning, generative AI, and large language models with a focus on operational and business impact.
  • Zero Trust Foundations: Provides a historical and architectural overview of Zero Trust, including Cisco's Five Zero Trust Categories.
  • Security by Design for AI: Offers guidance on protecting AI development workflows, from data ingestion and model training to inference and deployment.
  • Threat Mitigation Strategies: Addresses adversarial AI, data poisoning, shadow AI, and insider misuse through identity enforcement, segmentation, and telemetry.
  • Strategic Execution: Maps Zero Trust principles to regulatory frameworks including NIST AI RMF, EU AI Act, DORA, and ISO 27001, and provides actionable templates for running successful Zero Trust Segmentation Workshops.

Who Should Read This Book:

  • CISOs and security architects building AI-resilient architectures
  • AI and data leaders embedding AI into enterprise infrastructure
  • Risk, compliance, and governance professionals navigating regulatory change
  • Technical teams seeking secure-by-design methodologies for AI initiatives

Why This Matters Now:

AI systems are expanding faster than most organizations can govern them. The risks, ranging from operational disruption to model corruption, require proactive, architectural defenses. This book bridges the gap between AI innovation and trusted enterprise security.

Securing AI Using Zero Trust Principles delivers the strategic playbook for building resilient, trustworthy, and standards-aligned AI systems that can withstand the threats of today and tomorrow.

Table of contents

Part I: Defining Responsible AI and the Evolving AI Landscape
Chapter 1 Overview 1
Foundations of Zero Trust in AI Security 3
The Origins and Evolution of Zero Trust 3
Zero Trust Principles in AI Security 4
Key Frameworks and Regulations 5
The Intersections of AI and Security 8
Zero Trust as a Paradigm Shift in Securing AI 11
Ways to Build AI-Ready Data Centers and Cloud Architecture 12
Network Design Basics with AI in Mind 13
Key Components Required for an AI-Ready Environment 14
AI Data Center Deployment Options 17
Summary 21
Key Terms 22
End-of-Chapter Questions and Answers 22
Chapter 2 Responsible AI and Integrated Awareness 29
Definition and Principles of Responsible AI 29
Ethical AI Development 30
The Landscape of AI: From Basics to Advanced Concepts 31
Foundations of AI Architectures 31
Agentic AI 33
Chain-of-Thought Reasoning Models 33
Key Zero Trust Principles for AI Agents and Reasoning Models 35
Foundational Considerations in AI 35
Ways to Overcome Organizational Barriers to Secure AI Adoption 36
AI/ML Pipeline 39
No Free Lunch Theorem: Common Challenges in AI Development 41
Explainable AI (Is AI a Black Box?) 41
AI in Organizations 43
AI Adoption Framework 43
Essential Skills for the AI Era 44
Ethical Considerations and Bias Mitigation 45
Ethical Frameworks and Guidelines for AI 46
Additional Considerations for AI 47
Emerging Technologies 48
Defining an AI Maturity Model 51
Applying Zero Trust to AI Deployment Models 53
Understanding Risk, Control, and Governance Across the AI Landscape 53
Securing AI Agents Through Zero Trust Guardrails 55
Summary 58
Key Terms 59
End-of-Chapter Questions and Answers 59
Chapter 3 Artificial Intelligence Threat Landscape 67
Overview of AI Threats 67
AI as Target: Adversarial Machine Learning 69
Threat Model 71
Integrity: Evasion, Poisoning, and Backdoor Attacks 73
Confidentiality: Model Inversion, Extraction, and Membership Inference Attacks 76
Availability: Energy Latency Attacks 79
Other Common Attacks: Supply Chain and Third Party 80
Specific Considerations for Attacks on Generative AI 83
Attacking AI Systems vs. AI Models 86
Libraries for Testing AI Models 88
AI Systems vs. AI Models 88
Case Studies of AI Security Events: MITRE ATLAS 89
Overview 89
AI as Attack Vector: Offensive AI in Generative Adversarial Networks 93
Summary 97
Key Terms 97
End-of-Chapter Questions and Answers 98
Chapter 4 Zero Trust Principles and Methods 107
Benefits of Zero Trust for AI Security: A Proof of Value 107
The Evolution of Zero Trust: A Foundation for Securing AI 108
AI as a Catalyst for Zero Trust Transformation 110
Applying the Five Zero Trust Categories to AI 111
Policy and Governance 112
Identity 122
Vulnerability Management 131
Enforcement 135
Analytics 144
Practical Workshop Design: Zero Trust for AI 150
Risk and Regulation 151
Implementation Guidance 151
Capability Alignment 151
Organizational Dynamics in Zero Trust for AI 152
Risk and Regulation 152
Implementation Guidance 152
Capability Alignment 153
Roadmap: Zero Trust for AI Security Maturity 153
Risk and Regulation 153
Implementation Guidance 154
Capability Alignment 154
Application of Zero Trust: Securing Embodied AI Through Zero Trust 155
The Trust Gap in Embodied AI 155
Securing Perception, Planning, and Action 155
Simulation, Noise, and Real-World Deployment 156
Collaborative, Ethical, and Societal Risks 156
Case Study: Application of Zero Trust—Salt Typhoon and Advanced Threat Campaigns Against Embodied AI 157
Case Study: Application of Zero Trust—Implications of State-Sponsored Network Compromise Campaigns 158
Case Study: Application of Zero Trust—Real-World Technology Shift at Scale to AI-Native Software Development 160
Case Study: Nation-State Espionage, the Quantum Threat, and Harvest Now Decrypt Later 161
HNDL Description and Analysis 162
PQC Recommendations 162
PQC Insights and Business Implications 163
Summary 164
Key Terms 165
End-of-Chapter Questions and Answers 165
Chapter 5 Securing AI from the Start 173
Importance of Early Data Classification 174
Data Classification Tools 176
Data Classification 177
Governance and Legal Requirements 181
Potential Threats and Consequences from Missing Data Classification 183
Ways to Build Security into the AI Development Lifecycle 189
Proactive vs. Reactive Security Measures 191
Business and Operational Benefits 193
Quantitative and Qualitative Metrics 193
Value Propositions 193
Cost Savings from Early Security Implementation 194
Improved Trust and Compliance 196
Ways to Future-Proof AI Systems by Building Crypto-Agility for Post-Quantum Resilience 197
Scalability and Adaptability of Secure AI 198
The Need to Secure AI from the Start: Challenges and Considerations 199
Securing AI Application Development 200
Securing AI Application Deployment 201
Moving from Software Development to AI Application Development 201
Securing AI Chatbots and Agents 206
Understanding the Advanced Threat Landscape and Mitigation 206
Securing Agentic AI and Retrieval-Augmented Generation 207
AI Security Readiness Framework 209
1. Embedding Security in AI Governance and Strategy 211
2. Strengthening Data Security and Privacy 212
3. Ensuring Model Integrity and Robustness 212
4. Mitigating AI-Specific Threats and Attack Vectors 214
5. Addressing Compliance and Ethical Requirements 214
6. Building a Security-Resilient Infrastructure 215
7. Cultivating a Security-Aware Culture 216
Summary 217
Key Terms 218
End-of-Chapter Questions and Answers 218
Part II: Building Operational Resilience—People, Processes, and Infrastructure
Chapter 6 Organizational AI Security Readiness 225
Assessing Organizational Readiness 226
Stakeholder Engagement and Ownership 226
Security Readiness Assessments 226
Baseline of Current Capabilities 227
Risk Assessment and Prioritization 228
Gap Analysis and Areas for Improvement 229
Compliance and Regulatory Readiness 230
Technical Infrastructure and Tooling Evaluation 232
Culture and Awareness Readiness 233
Incident Response and Recovery Preparedness 233
Actionability and Roadmap Development 234
Building a Security-First Culture 234
Leadership and Commitment 235
Security Policies and Governance 235
Risk Management and Accountability 236
Integration of Security in AI Lifecycle 236
Cultural Change Strategies 237
Training and Awareness Programs 237
Tailored Training Programs 238
Awareness Campaigns 238
Hands-On Exercises 239
Continuous Learning 239
The Reasons to Measure Effectiveness 239
Organizational AI Security Readiness: Challenges and Considerations 240
AI Model Security Readiness 240
Data Governance and Privacy for AI Readiness 241
AI-Specific Incident Response Readiness 243
Explainable AI (XAI) Readiness 244
Zero Trust Principles Applied to AI Security Readiness 245
AI Supply Chain Readiness 246
Summary 247
Key Terms 248
End-of-Chapter Questions and Answers 248
Chapter 7 AI-Ready Data Privacy and Business Impact 255
The Strategic Value of Data in the Age of AI 256
Data as an Enterprise and National Security Asset 256
The Criticality of Protecting Strategic, Classified, and Proprietary Data 257
How AI-Driven Decision Automation Amplifies Business Impact from Data Compromise 258
The Convergence of AI Data Governance and Digital Sovereignty 258
Evolving Attack Surfaces in AI Ecosystems 259
The Influence of Visionary Fiction on the AI Landscape 259
From Imagination to Implementation: Agentic and Embedded AI 259
AI as a Living System: Expanding the Threat Model 260
Zero Trust for AI Systems Reimagined 260
Science Fiction Realized, Responsibility Required 261
Privacy and Security Challenges in Agentic and Embedded AI 262
Autonomous Data Processing and Contextual Inference Without Human Oversight 263
Data Lineage, Provenance, and Chain of Custody in Distributed AI Environments 263
The Difficulty of Enforcing Access Control and Policy Verification Within Embedded Architectures 263
Risk Propagation Across Cross-Domain AI Collaboration Systems 264
Monitoring, Containment, and Assurance for Self-Adaptive Models 265
AI Model Protection and PQC Readiness 265
Model Inversion, Prompt Injection, and Data Poisoning Threats 266
Techniques for Model Watermarking, Signing, and Integrity Validation 266
Confidential Computing, Secure Enclaves, and Trusted Execution Environments 267
PQC Readiness and the Transition to Post-Quantum Encryption (FIPS 203, FIPS 204, FIPS 205) 268
Cryptographic Agility and Lifecycle Management for AI Models and Data Pipelines 269
Privacy-Preserving Data Engineering for Next-Generation AI 269
Differential Privacy, Homomorphic Encryption, and Secure Multi-Party Computation 270
Federated Learning and Encryption-in-Use for Distributed AI Training 271
PQC-Based Encryption Methods for AI Inference and Storage Environments 271
The Role of Hardware-Based Isolation and Zero-Knowledge Proofs in Preserving Privacy 272
Regulatory and Compliance Integration for AI Privacy 272
Techniques for Mapping Global Privacy and Security Mandates to AI Systems 273
Alignment with NIST AI RMF, EU AI Act, GDPR, DORA, NIS2, and Other Sectoral Frameworks 274
Industry-Specific Considerations for Financial Services, Healthcare, Energy, and Defense 275
Data Residency, Retention, and Erasure Requirements in AI-Driven Environments 276
Compliance Telemetry and Assurance Reporting for Continuous Verification 276
Zero Trust Strategies for AI Compliance and Enforcement 276
Techniques for Applying Zero Trust Principles Across AI Data Pipelines and Model Lifecycles 277
Identity-Aware Access and Microsegmentation for AI Workloads 278
Enforcement of Contextual Access Policies for Training and Inference Data 278
Continuous Assurance and Compliance-as-Code for AI Infrastructure 279
Integration of PQC Within Zero Trust Data Protection Models 279
Data Governance, Configuration Management, and Lineage 280
AI-Ready Data Dictionary for Zero Trust Configuration Management 281
Integration Guidance 282
AI Bills of Materials for Transparency, Auditability, and Accountability 286
Model Lineage and Dependency Mapping for Explainability and Forensic Readiness 286
Version Control, Rollback, and Governance Across Distributed AI Systems 287
Lifecycle Governance for Agentic and Embedded AI Deployments 287
Business and Operational Impact of AI Privacy Failures 288
Consequences of Data Compromise in Autonomous and Embedded AI Systems 289
Regulatory Penalties, Contractual Risk, and Loss of Market Trust 289
Case Studies: Leakage from Generative AI Platforms and Autonomous Decision Engines 291
Financial and Operational Disruption from Model Exfiltration or Training Data Theft 291
Long-Term Strategic and Reputational Impacts on Global Enterprises 292
Final Recommendations 292
Data as a Continuously Governed Enterprise Asset 293
Agentic and Embedded AI as Critical Expansion Points for Privacy Risk 293
PQC Readiness as a Foundation for Future-Proof AI Data Protection 294
Key Zero Trust Enablers for AI Privacy 294
Strategic Guidance for Maintaining Resilience, Compliance, and Stakeholder Trust 295
Summary 295
Key Terms 296
End-of-Chapter Questions and Answers 296
Chapter 8 Third-Party AI Risk 303
Third-Party Risk Questionnaire Limitations for Assessing AI Risk 304
What to Know Before Assessing Third-Party AI Risk 305
Data Protection and General Information Security 305
Data Protection and Mobile Security 306
Human Resources Management 306
Asset Management and Media Handling 307
Access Management and User Controls 307
Cryptographic Controls 308
Physical and Environmental Security 308
Operations and Network Security 308
Application Security/Development 309
Supplier Relationships/Vendor Management 309
Incident Management 309
Business Continuity and Disaster Recovery 310
Governance and Compliance 310
Cloud Security 310
Data Center Operations 311
Offshore Delivery Center Controls 311
Continuous Monitoring of Vendor Performance 311
AI Third-Party Risk Assessments 313
Securing the AI Supply Chain Through Zero Trust and Cryptographic Resilience 314
Securing Core AI Supply Chain Components 316
Zero Trust Foundations for the AI Supply Chain 317
Policy and Governance 318
Identity and Access Management 320
Vulnerability Management 321
Enforcement (Policy Enforcement Points) 322
Analytics and Continuous Monitoring 323
How to Use This Questionnaire: A Practical Guide for Executives, Architects, and Engineers 324
Strategic Takeaways 328
Continuous Vendor Monitoring: The AI-Specific Playbook 329
Implementation Roadmap, Turning Theory into Practice 330
AI Third-Party Supply Chain—Additional Risks 331
AI Third-Party Dependencies and Risk Amplification 331
Vector Databases in Zero Trust AI Architectures 332
Third-Party AI Post-Quantum Cryptography Risk 334
Services as Code, Digitized Delivery, and Network as Code in Third-Party AI Audits 338
Real-World Case Studies That Illustrate the Threat Landscape 341
Case Study: Model-Namespace-Reuse on Hugging Face 342
Case Study: Dual Dependency—CrowdStrike and AWS Outages 343
Summary 346
Key Terms 347
End-of-Chapter Questions and Answers 347
Chapter 9 Build AI-Ready Environments 353
AI-Ready Environments 354
Overview of Network Design 355
Network Design Fundamentals 356
Network Design Principles 366
Architect-Focused Network Design Techniques 372
Network Design Pitfalls 375
Techniques for Designing AI-Ready Environments 379
Key Components Required for an AI-Ready Environment 380
AI Use Cases 387
Sustainability Intersection with AI-Ready Environments 389
Sustainable Practices and Energy Efficiency with AI Workloads 390
Sourcing Matters 390
Cost Optimization Strategies 390
Ways to Optimize Established or Existing Data Centers to Become AI-Ready 391
Greenfield vs. Brownfield AI Environments 392
Cloud Provider Offerings 392
Organizations with Mature AI Environments 392
Businesses Still Exploring AI Solutions 392
Summary 393
Key Terms 393
End-of-Chapter Questions and Answers 394
Part III: Defending at Scale—Platform Protection, Monitoring, and Response
Chapter 10 Build and Secure Enterprise-Grade Generative AI Applications: ChatAI, RAG, MCP, Agentic AI, and Embedded AI 401
Enterprise Generative AI Applications 402
Types of Generative AI 402
GenAI in the Enterprise Landscape 406
Techniques to Bridge GenAI to Zero Trust 409
Security Considerations of ChatAI Applications 411
Enterprise Use Cases for ChatAI 411
Security Considerations for ChatAI 414
Security Considerations for Agentic AI 422
The Need to Secure MCP 424
Agentic AI: Design, Architecture, and Security Considerations 425
Additional Considerations for Cyber-Physical Interactions: Embedded AI 428
Case Study: Cyber-Physical Interactions—Musk Robot Army 430
Foundational Zero Trust Security for Enterprise Generative AI 431
Zero Trust Controls Mapped Across the AI System Lifecycle 436
Zero Trust Applied to the AI Lifecycle 438
Case Study: Governance and Shared Responsibility—The Deloitte AI Incident 439
Summary 440
Key Terms 441
End-of-Chapter Questions and Answers 441
Chapter 11 Monitor and Respond to AI Security Threat Vectors 447
Continuous Monitoring Strategies 448
Monitoring the AI System 448
Using AIOps vs. MLOps 453
Monitoring Changes in External Regulations 455
Monitoring Changes in the Threat Landscape 455
Using Organizational Processes 457
Implementing Real-Time Monitoring Solutions 458
Monitoring Solutions for AI Systems 458
Vulnerability Management 460
Software Bill of Materials 463
Monitoring AI Systems in the SIEM 464
Incident Response Plans 465
Business Continuity and Disaster Recovery for AI-Ready Data Centers 469
Description, Analysis, and Recommendations 469
Executive Imperatives for CIOs and CISOs 470
Foundational Architecture for AI-Ready Data Centers 471
RoCEv2: Driving Secure, Scalable Performance for Modern Infrastructures 471
AI Design Considerations 472
Core Risks Unique to AI Workloads 474
Business Continuity and Disaster Recovery for AI-Enabled Applications 474
Core BC/DR Challenges for AI-Enabled Applications 475
Ways to Build AI-Specific BC/DR Strategies 476
Summary 476
Key Terms 477
End-of-Chapter Questions and Answers 478
Chapter 12 Case Study 485
Offensive AI: A Deep Dive into How Attackers Can Use AI 486
Offensive AI as a Threat to Individuals 490
Offensive AI as a Threat to Systems 491
Offensive AI as a Threat to Society 492
An Adversary’s Perspective: Cost vs. Benefit of Utilizing AI 492
Detailed Case Studies of Offensive AI Attacks 494
“Hey Google, Remind Me to Be Phished.” 499
Lessons Learned and Best Practices 502
Summary 505
Key Terms 506
End-of-Chapter Questions and Answers 506
Chapter 13 Conclusion: Vigilance, Policy, Iteration, and Beyond 515
Part I: Defining Responsible AI and the Evolving AI Landscape (Chapters 1–5) 516
Addressing AI Deployment Models and Zero Trust Principles 516
Overcoming Organizational Barriers and Applying AI/ML Pipelines 517
Ethics, Governance, Zero Trust, and Security in AI Implementation 518
Future of AI: Quantum Computing and the Evolving Threat Landscape 518
Introduction to the AI Threat Landscape and Ethical Foundations 519
AI as a Target: Evasion Attacks, Data Poisoning, and Threat Modeling 519
Backdoor Attacks 520
Model Inversion and Membership Inference Attacks and the Impact on Availability 520
Supply Chain Attacks and Offensive AI 521
Introduction to Zero Trust for AI Security and Historical Context 521
Zero Trust Implementation: The Categories and Foundational Controls 523
The Power of “Know What You Have” 523
Testing, Roadmaps, Zero Trust, and Long-Term Planning 524
The Importance of Data Classification 525
Data Classification, Ethical Considerations, and Governance 526
Threat Modeling for AI, Mitigations, and AI Governance 527
AI Security and the Rundown 528
Part I Summary: Defining Responsible AI and the Evolving AI Landscape 529
Part II: Building Operational Resilience—People, Processes, and Infrastructure (Chapters 6–9) 530
Organizational Readiness and Cultural Maturity 530
Secure Data Practices and Privacy Engineering 530
Third-Party AI Risk and Shared Responsibility 531
Infrastructure for AI: Cloud, Edge, and Hybrid Environments 531
Part II Summary: Aligning Strategy to Execution 531
Part III: Defending at Scale—Platform Protection, Monitoring, and Response (Chapters 10–12) 532
Enterprise-Grade AI Security and GenAI Platforms 532
Secure Integration and API Governance 532
Monitoring and Threat Detection in AI Systems 533
Incident Response and Adaptive Resilience 533
Offensive AI: AI as an Attack Vector 533
Part III Summary: Lifecycle-Driven Defense 533
Conclusion: The Work Ahead—From Readiness to Relentless Execution 534
Continuous Adaptation: Securing the Lifecycle 535
Governance, Culture, and Organizational Readiness 535
Measuring Maturity and Progress 535
Final Guidance for Security Leaders 536
Future Threats: From Visionaries to Reality 536
Conclusion: The Road Ahead 539
References 540
Curated Reading, Film, and TV Series List on AI, Zero Trust, and Future Threats 542
Appendix Study Guide 545
Study Guide Questions 546
Category 1: Policy and Governance 546
Category 1 Answer Key 554
Category 2: Identity 556
Category 2 Answer Key 564
Category 3: Vulnerability Management 565
Category 3 Answer Key 573
Category 4: Enforcement 574
Category 4 Answer Key 583
Category 5: Analytics 584
Category 5 Answer Key 592
Closing 593
Glossary 595
Afterword 609
Final Word 613


ISBN 9780138363413 (table of contents as of 4/3/2026)
