Layer 3 — Framework Mappings

AI Frameworks → Business Frameworks

Every major AI framework is built on top of business frameworks you already operate. This matrix shows exactly which ones map, how strongly, and what that means in practice.

9 AI frameworks × 12 business frameworks = 108 intersections, each rated Strong, Partial, or None.

AI frameworks (rows): NIST AI RMF · EU AI Act · ISO/IEC 42001 · IEEE Ethically Aligned Design · OECD AI Principles · Google's PAIR / Responsible AI · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12) · NIST CSF

Business frameworks (columns): COSO ERM · ITIL · COBIT · TOGAF · PMBOK · Six Sigma / Lean · Balanced Scorecard · SAFe / Agile · FAIR · DMBOK · ISO 27001 · SOX

Cell-level ratings appear in the framework details below.
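In data terms, the matrix is a nested mapping from AI framework to business framework to rating, which makes both the forward view and the reverse lookup trivial to compute. A minimal sketch, using an illustrative two-row excerpt rather than the full 108-cell matrix (ratings not spelled out in the text, such as the SOX cells, are assumed here for illustration):

```python
# Illustrative excerpt of the mapping matrix: AI framework -> business
# framework -> rating ("strong", "partial", or "none"). These two rows
# are examples only, not the complete matrix.
MAPPINGS = {
    "NIST AI RMF": {"COSO ERM": "strong", "FAIR": "strong", "SOX": "partial"},
    "EU AI Act": {"COSO ERM": "strong", "DMBOK": "strong", "SOX": "none"},
}

RANK = {"none": 0, "partial": 1, "strong": 2}

def reverse_lookup(business_framework: str, min_strength: str = "partial"):
    """Return AI frameworks mapping to a business framework at or above
    the given strength -- the 'Reverse Lookup' view, computed."""
    return [
        ai for ai, cells in MAPPINGS.items()
        if RANK[cells.get(business_framework, "none")] >= RANK[min_strength]
    ]

print(reverse_lookup("COSO ERM", min_strength="strong"))
# → ['NIST AI RMF', 'EU AI Act']
```

The same structure generates the reverse-lookup section at the bottom of this page: iterate over business frameworks instead of AI frameworks.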

AI Framework Reference

NIST AI RMF

National Institute of Standards and Technology AI Risk Management Framework

A voluntary U.S. government framework for managing AI risks across four functions — Govern, Map, Measure, Manage — designed to help organizations build trustworthy AI systems while balancing innovation with risk mitigation.

The closest thing to a universal AI governance standard in the U.S. Widely referenced in federal procurement and increasingly adopted by private sector organizations.

COSO ERM · COBIT · FAIR · DMBOK · ISO 27001 · ITIL · TOGAF · PMBOK · Six Sigma / Lean · Balanced Scorecard · SAFe / Agile · SOX

Strong Mappings

COSO ERM: NIST AI RMF's four functions (Govern, Map, Measure, Manage) map directly to COSO ERM's risk identification, assessment, response, and monitoring components. Both treat risk governance as a board-level responsibility.

COBIT: COBIT's governance and management objectives align well with NIST AI RMF. Both address governance structures, stakeholder needs, risk optimization, and resource management — COBIT at the IT level, NIST AI RMF at the AI level.

FAIR: FAIR's quantitative risk analysis methodology directly supports NIST AI RMF's Measure function. FAIR provides the financial quantification that NIST AI RMF calls for but doesn't prescribe.

DMBOK: NIST AI RMF's Map function (understanding AI context and data) directly depends on DMBOK's data governance, quality, and lineage practices. You can't manage AI risk without managing data risk.

ISO 27001: NIST AI RMF's security and resilience requirements map to ISO 27001 controls. Both require risk assessments, control implementation, monitoring, and continuous improvement — ISO 27001 for information security, NIST AI RMF for AI trustworthiness.
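The FAIR-style quantification that supports the Measure function can be illustrated with a small Monte Carlo sketch: annualized loss exposure as loss event frequency times loss magnitude. The uniform sampling and the input ranges below are placeholders; a real FAIR analysis uses calibrated estimates and PERT-style distributions.

```python
import random

def simulate_annual_loss(lef_min, lef_max, lm_min, lm_max,
                         trials=20_000, seed=42):
    """Monte Carlo sketch of FAIR-style annualized loss exposure:
    Loss Event Frequency (events/yr) x Loss Magnitude ($/event).
    Uniform ranges stand in for calibrated distributions."""
    rng = random.Random(seed)
    losses = sorted(
        rng.uniform(lef_min, lef_max) * rng.uniform(lm_min, lm_max)
        for _ in range(trials)
    )
    return {
        "mean": sum(losses) / trials,       # expected annual loss
        "p90": losses[int(trials * 0.90)],  # 90th-percentile loss
    }

# Hypothetical scenario: a model-failure event expected 0.5-2 times per
# year, costing $50k-$400k per occurrence.
print(simulate_annual_loss(0.5, 2.0, 50_000, 400_000))
```

The mean and tail percentile are the figures that drop directly into an ERM risk register, which is exactly the bridge between FAIR and the Measure function described above.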

EU AI Act

European Union Artificial Intelligence Act

The world's first comprehensive AI regulation, classifying AI systems into risk tiers (unacceptable, high-risk, limited, minimal) with mandatory requirements for high-risk applications including transparency, human oversight, and conformity assessments.

Any organization deploying AI that touches EU citizens or operates in EU markets must comply. Sets the global regulatory benchmark for AI governance.

COSO ERM · COBIT · DMBOK · ITIL · TOGAF · PMBOK · SAFe / Agile · FAIR · ISO 27001

Strong Mappings

COSO ERM: EU AI Act's risk classification tiers (unacceptable, high, limited, minimal) directly feed COSO ERM risk assessment processes. Organizations already doing ERM can classify AI systems within their existing risk taxonomy.

COBIT: COBIT's governance and compliance objectives align with EU AI Act's mandatory requirements. COBIT provides the IT governance structure to ensure AI systems meet the Act's documentation, transparency, and oversight requirements.

DMBOK: EU AI Act's requirements for training data quality, bias testing, and data governance directly require DMBOK-level data management practices. You can't comply with the Act's data provisions without mature data management.
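Routing AI systems into the Act's four tiers so they land in an existing ERM taxonomy can be sketched as a simple triage function. The keyword rules below are hypothetical placeholders; real classification follows the Act's prohibited-practice provisions and Annex III use-case definitions.

```python
# Illustrative tier triage for filing AI systems into a risk taxonomy.
# Use-case lists are examples, not the Act's actual legal definitions.
UNACCEPTABLE = {"social scoring", "subliminal manipulation"}
HIGH_RISK = {"credit scoring", "hiring", "medical device", "law enforcement"}
LIMITED = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> str:
    uc = use_case.lower()
    if uc in UNACCEPTABLE:
        return "unacceptable"  # prohibited outright
    if uc in HIGH_RISK:
        return "high-risk"     # conformity assessment, oversight, logging
    if uc in LIMITED:
        return "limited"       # transparency obligations
    return "minimal"           # voluntary codes of conduct

print(classify("Hiring"))       # → high-risk
print(classify("spam filter"))  # → minimal
```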

ISO/IEC 42001

ISO/IEC 42001 — Artificial Intelligence Management System

The international standard for establishing, implementing, and maintaining an AI management system (AIMS), following the familiar ISO management system structure (Plan-Do-Check-Act) used in ISO 27001 and ISO 9001.

Certifiable standard — organizations can be audited against it. Provides the management system structure that NIST AI RMF and EU AI Act don't prescribe.

COSO ERM · ITIL · COBIT · DMBOK · ISO 27001 · TOGAF · PMBOK · Six Sigma / Lean · Balanced Scorecard · SAFe / Agile · FAIR · SOX

Strong Mappings

COSO ERM: ISO 42001's AI management system structure (Plan-Do-Check-Act) integrates naturally with COSO ERM. Both use risk-based approaches with governance, assessment, treatment, and monitoring cycles.

ITIL: ISO 42001 follows the same management system structure as other ISO standards. Organizations already running ITIL-aligned IT service management can extend their service management practices to cover AI management system requirements.

COBIT: Both are governance frameworks with complementary scope. COBIT governs IT broadly; ISO 42001 governs AI specifically. Organizations can nest ISO 42001 AI governance within their COBIT IT governance structure.

DMBOK: ISO 42001 requires data management for AI systems. DMBOK provides the established practices for meeting those requirements. Data governance, quality, and lifecycle management are foundational to ISO 42001 compliance.

ISO 27001: Both are ISO management system standards using the same PDCA structure. Organizations certified in ISO 27001 can extend their ISMS to incorporate ISO 42001 AI management system requirements with significant structural reuse.

IEEE Ethically Aligned Design

IEEE Ethically Aligned Design — A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems

A comprehensive set of ethical principles and practical recommendations for AI/autonomous systems design, covering human rights, well-being, data agency, effectiveness, transparency, and accountability.

Focused on the ethical design layer that technical standards don't fully address. Influential in shaping organizational AI ethics policies and review boards.

COSO ERM · COBIT · TOGAF · Balanced Scorecard · SAFe / Agile · DMBOK

OECD AI Principles

OECD Principles on Artificial Intelligence

International principles adopted by 40+ countries promoting AI that is innovative and trustworthy, respects human rights, and operates with transparency, accountability, security, and safety.

The most widely endorsed international AI policy framework. Referenced by G7/G20 AI governance discussions and national AI strategies worldwide.

COSO ERM · COBIT · DMBOK · ISO 27001

Google's PAIR / Responsible AI

People + AI Research / Google Responsible AI Practices

Google's internal framework for responsible AI development, covering fairness, interpretability, privacy, safety, and human-AI interaction design, published as open guidance for the broader AI community.

Practical, implementation-focused guidance from one of the largest AI deployers. Especially relevant for UX/design aspects of AI systems that governance frameworks don't cover.

TOGAF · Balanced Scorecard · SAFe / Agile

Microsoft Responsible AI

Microsoft Responsible AI Standard

Microsoft's internal governance framework requiring AI systems to meet standards for fairness, reliability/safety, privacy/security, inclusiveness, transparency, and accountability — with mandatory impact assessments for sensitive use cases.

Operationalized across Azure AI services. Relevant for organizations using Microsoft AI tools or seeking a model for internal AI governance implementation.

COSO ERM · COBIT · DMBOK · ITIL · TOGAF · PMBOK · Six Sigma / Lean · Balanced Scorecard · SAFe / Agile · FAIR · ISO 27001

Strong Mappings

COSO ERM: Microsoft's Responsible AI Standard requires impact assessments and risk mitigation for AI systems, directly paralleling COSO ERM's risk assessment and response processes. Both emphasize governance structures and accountability.

COBIT: Both are governance-oriented frameworks. Microsoft's six principles (fairness, reliability, privacy, inclusiveness, transparency, accountability) map to COBIT governance objectives with AI-specific requirements.

DMBOK: Microsoft's privacy, fairness, and transparency requirements directly depend on data management practices. Training data quality, bias detection in data, and data provenance tracking require DMBOK-level data management.

Model Risk Management (SR 11-7 / OCC 2011-12)

Supervisory Guidance on Model Risk Management

U.S. banking regulatory guidance (Federal Reserve SR 11-7 and OCC Bulletin 2011-12) establishing expectations for model validation, ongoing monitoring, documentation, and governance for any quantitative model used in business decisions.

The original 'AI governance' framework — predates the AI hype by a decade. Mandatory for banks and increasingly referenced by non-bank organizations managing ML/AI model risk.

COSO ERM · COBIT · Six Sigma / Lean · FAIR · DMBOK · SOX · ITIL · TOGAF · PMBOK · SAFe / Agile · ISO 27001

Strong Mappings

COSO ERM: Model Risk Management (SR 11-7) IS enterprise risk management applied to models. Model validation, monitoring, and governance are a subset of COSO ERM's risk management processes. Natural structural fit.

COBIT: SR 11-7's governance requirements (model owner, validator, independent review) directly parallel COBIT's IT governance structure. Both require clear roles, documentation, and oversight for technology-driven decisions.

Six Sigma / Lean: SR 11-7's emphasis on model performance measurement, validation testing, and ongoing monitoring directly parallels Six Sigma's statistical rigor. Model validation IS statistical quality control applied to predictive models.

FAIR: SR 11-7 requires model risk quantification. FAIR provides the methodology to quantify that risk in financial terms. FAIR is arguably the best tool for meeting SR 11-7's risk quantification expectations.

DMBOK: Model risk is inseparable from data risk. SR 11-7 requires understanding model inputs, and DMBOK provides the data governance, quality, and lineage practices that make model validation possible.

SOX: When models affect financial reporting (CECL loss estimates, revenue forecasting, asset valuation), SR 11-7 model governance directly supports SOX internal control requirements. The model IS the control.
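The ongoing-monitoring expectation in SR 11-7 is commonly met with distribution-drift metrics such as the Population Stability Index (PSI), which compares a model's score distribution in production against the distribution seen at validation. A minimal sketch, with rule-of-thumb thresholds that vary by institution:

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned distributions
    (bin fractions summing to 1). A small epsilon guards against
    log-of-zero on empty bins."""
    eps = 1e-6
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # score quartiles at validation
current  = [0.40, 0.30, 0.20, 0.10]  # score quartiles in production

score = psi(baseline, current)
# Common rule of thumb: < 0.10 stable, 0.10-0.25 watch, > 0.25 investigate
print(f"PSI = {score:.3f}")
```

A PSI breach would trigger the revalidation and escalation paths that SR 11-7's governance structure (model owner, validator, independent review) already defines.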

NIST CSF

NIST Cybersecurity Framework

A widely adopted cybersecurity risk management framework organized around five functions — Identify, Protect, Detect, Respond, Recover — used by organizations of all sizes and sectors to manage cyber risk.

Not AI-specific, but critical for AI systems that process sensitive data, operate in production environments, or face adversarial threats. AI security risks map to CSF functions.

COSO ERM · ITIL · COBIT · FAIR · ISO 27001 · TOGAF · PMBOK · Balanced Scorecard · SAFe / Agile · DMBOK · SOX

Strong Mappings

COSO ERM: NIST CSF's five functions (Identify, Protect, Detect, Respond, Recover) are cybersecurity risk management — a direct subset of COSO ERM. Cyber risk IS enterprise risk, and the frameworks nest naturally.

ITIL: NIST CSF's Detect, Respond, and Recover functions align directly with ITIL's incident management, problem management, and IT service continuity management. Security operations run on ITIL processes.

COBIT: NIST CSF's governance-oriented approach aligns with COBIT's IT governance framework. COBIT provides the governance structure; NIST CSF provides the cybersecurity-specific control framework within it.

FAIR: FAIR was originally designed for information risk quantification — the same domain as NIST CSF. FAIR provides the financial quantification methodology for NIST CSF risk assessments.

ISO 27001: NIST CSF and ISO 27001 are the two most widely adopted cybersecurity/information security frameworks globally. They cover the same domain with different structures — NIST CSF is function-based, ISO 27001 is control-based. Highly complementary.

Reverse Lookup

Already run one of these business frameworks? Here's every AI framework that maps to it.

COSO ERM

Committee of Sponsoring Organizations Enterprise Risk Management

NIST AI RMF · EU AI Act · ISO/IEC 42001 · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12) · NIST CSF · IEEE Ethically Aligned Design · OECD AI Principles

ITIL

Information Technology Infrastructure Library

ISO/IEC 42001 · NIST CSF · NIST AI RMF · EU AI Act · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12)

COBIT

Control Objectives for Information and Related Technologies

NIST AI RMF · EU AI Act · ISO/IEC 42001 · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12) · NIST CSF · IEEE Ethically Aligned Design · OECD AI Principles

TOGAF

The Open Group Architecture Framework

NIST AI RMF · EU AI Act · ISO/IEC 42001 · IEEE Ethically Aligned Design · Google's PAIR / Responsible AI · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12) · NIST CSF

PMBOK

Project Management Body of Knowledge

NIST AI RMF · EU AI Act · ISO/IEC 42001 · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12) · NIST CSF

Six Sigma / Lean

Six Sigma and Lean Management

Model Risk Management (SR 11-7 / OCC 2011-12) · NIST AI RMF · ISO/IEC 42001 · Microsoft Responsible AI

Balanced Scorecard

Balanced Scorecard (BSC)

NIST AI RMF · ISO/IEC 42001 · IEEE Ethically Aligned Design · Google's PAIR / Responsible AI · Microsoft Responsible AI · NIST CSF

SAFe / Agile

Scaled Agile Framework / Agile Methodology

NIST AI RMF · EU AI Act · ISO/IEC 42001 · IEEE Ethically Aligned Design · Google's PAIR / Responsible AI · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12) · NIST CSF

FAIR

Factor Analysis of Information Risk

NIST AI RMF · Model Risk Management (SR 11-7 / OCC 2011-12) · NIST CSF · EU AI Act · ISO/IEC 42001 · Microsoft Responsible AI

DMBOK

Data Management Body of Knowledge

NIST AI RMF · EU AI Act · ISO/IEC 42001 · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12) · IEEE Ethically Aligned Design · OECD AI Principles · NIST CSF

ISO 27001

ISO/IEC 27001 — Information Security Management System

NIST AI RMF · ISO/IEC 42001 · NIST CSF · EU AI Act · OECD AI Principles · Microsoft Responsible AI · Model Risk Management (SR 11-7 / OCC 2011-12)

SOX

Sarbanes-Oxley Act

Model Risk Management (SR 11-7 / OCC 2011-12) · NIST AI RMF · ISO/IEC 42001 · NIST CSF

Related Standards & Frameworks

These standards don't map 1:1 into the matrix above, but practitioners working with AI governance need to know they exist. Each one connects to frameworks in the matrix.

WCAG

government · healthcare · saas

Web Content Accessibility Guidelines

W3C standards for making web content accessible to people with disabilities — covering perceivable, operable, understandable, and robust design principles. Relevant to AI systems with user-facing interfaces.

AI interfaces must be accessible. Screen readers need to work with AI-generated content. Voice interfaces need alternatives. Section 508 (government) mandates WCAG compliance.

Relates to: IEEE Ethically Aligned Design · EU AI Act

HITRUST CSF

healthcare

Health Information Trust Alliance Common Security Framework

A certifiable security framework that harmonizes requirements from HIPAA, NIST, ISO 27001, and other standards — widely used in healthcare and by organizations handling health data.

Healthcare AI systems handling PHI often need HITRUST certification. HITRUST provides the bridge between HIPAA requirements and operational security controls.

Relates to: ISO 27001 · NIST CSF

NIST 800-53

government · healthcare · banking

NIST Special Publication 800-53 — Security and Privacy Controls

The comprehensive catalog of security and privacy controls used by U.S. federal agencies and many private sector organizations — over 1,000 controls organized into 20 families.

Federal AI systems must comply with NIST 800-53 controls. The control catalog includes AI-relevant controls for system integrity, audit, access control, and risk assessment.

Relates to: NIST CSF · ISO 27001

FedRAMP

government · saas

Federal Risk and Authorization Management Program

The U.S. government's standardized approach to security assessment, authorization, and monitoring for cloud services used by federal agencies — based on NIST 800-53 controls.

Cloud-based AI services sold to federal agencies must be FedRAMP authorized. The authorization process evaluates the AI system's security posture against NIST 800-53 controls.

Relates to: NIST CSF · NIST 800-53 · ISO 27001

SOC 2

saas · consulting · banking

System and Organization Controls 2

An audit framework assessing an organization's controls against five Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. The de facto standard for SaaS vendor security assurance.

AI SaaS vendors need SOC 2 reports for enterprise sales. AI systems that process customer data must be included in the SOC 2 scope — covering data handling, model security, and access controls.

Relates to: ISO 27001 · NIST CSF · COBIT

PCI DSS

banking · saas

Payment Card Industry Data Security Standard

Security standard for organizations that handle credit card data — covering network security, encryption, access control, monitoring, and testing requirements.

AI systems that process, store, or transmit payment card data must comply with PCI DSS. Fraud detection models, payment processing AI, and customer-facing payment interfaces are all in scope.

Relates to: ISO 27001 · NIST CSF

Section 508

government

Section 508 of the Rehabilitation Act

U.S. federal law requiring that electronic and information technology developed, procured, or maintained by federal agencies be accessible to people with disabilities — referencing WCAG standards.

Federal AI systems with user interfaces must be Section 508 compliant. This includes chatbots, dashboards, decision support tools, and any AI-powered interface used by federal employees or the public.

Relates to: WCAG · IEEE Ethically Aligned Design

GDPR

All industries

General Data Protection Regulation

The EU's comprehensive data protection regulation governing the collection, processing, and storage of personal data — including rights to explanation for automated decision-making (Article 22).

GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing, including profiling. AI systems making decisions about EU residents must provide meaningful information about the logic involved.

Relates to: EU AI Act · ISO 27001

CCPA/CPRA

All industries

California Consumer Privacy Act / California Privacy Rights Act

California's comprehensive privacy law giving consumers rights over their personal data — including the right to opt out of automated decision-making technology (CPRA addition).

CPRA added specific provisions for automated decision-making that affect AI deployments. Businesses must disclose when they use automated decision-making and honor opt-out requests.

Relates to: ISO 27001

BCBS 239

banking

Basel Committee on Banking Supervision — Principles for Risk Data Aggregation and Risk Reporting

International banking standard requiring systemically important banks to have strong data governance, accurate risk data aggregation, and timely risk reporting capabilities.

AI models used for risk management in banking must be built on data infrastructure that meets BCBS 239 principles. Data quality, lineage, and aggregation capabilities are prerequisites for reliable AI risk models.

Relates to: DMBOK · COSO ERM

NAIC Model Laws & AI Bulletins

insurance

National Association of Insurance Commissioners Model Laws and AI Guidance

NAIC model laws and bulletins governing the use of AI/ML in insurance — including requirements for transparency, fairness testing, and governance of predictive models used in underwriting, rating, and claims.

State insurance regulators are adopting NAIC guidance on AI governance. Insurers using AI in rating, underwriting, or claims must demonstrate fairness, transparency, and appropriate governance.

Relates to: NIST AI RMF · Model Risk Management (SR 11-7 / OCC 2011-12)

See the Full Picture

Three layers of translation: universal concepts, industry-specific mappings, and governance frameworks.