AI governance
The operating model used to review AI systems, assign responsibilities, track evidence, and make accountable decisions over time.
AI management system
A structured way to govern AI through defined roles, processes, evidence, and ongoing review activity instead of relying on one-time assessments alone.
Audit trail
A record of actions, approvals, updates, and evidence that helps reviewers understand who changed what and when.
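The entry above can be sketched as a small data structure. This is a minimal illustration, not a specification: the `AuditEntry` fields and `AuditTrail` class are hypothetical names chosen to show the "who changed what and when" shape of an audit record.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    # The three questions an audit trail answers: who, what, when.
    actor: str
    action: str
    record_id: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    """Append-only log: past entries are never edited in place."""

    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, actor: str, action: str, record_id: str) -> AuditEntry:
        entry = AuditEntry(actor=actor, action=action, record_id=record_id)
        self._entries.append(entry)
        return entry

    def history(self, record_id: str) -> list[AuditEntry]:
        # Lets a reviewer reconstruct the sequence of changes for one record.
        return [e for e in self._entries if e.record_id == record_id]
```

The append-only design is the key property: reviewers can trust the history because entries are added, never rewritten.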
Control evidence
Artifacts, notes, or linked records that support a claim that a policy, review, or governance action actually occurred.
Dataset lineage
Context showing where data originated, how it was linked to models, and how downstream use can be traced and reviewed later.
Governance workflow
A structured sequence for collecting context, routing review, assigning follow-up work, and preserving decisions for later reference.
High-risk AI system
A system that requires deeper review because of the potential impact of its use case, deployment context, or applicable policy and regulatory expectations.
Model registry
A governed inventory of AI systems with ownership, intended use, lifecycle state, and supporting review context.
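A registry entry of this kind can be sketched as a record with the fields named above. The field names, lifecycle states, and `ModelRegistry` class here are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    # The core inventory fields: ownership, intended use, lifecycle state.
    system_name: str
    owner: str
    intended_use: str
    lifecycle_state: str  # e.g. "proposed", "in_review", "deployed", "retired"

class ModelRegistry:
    """Governed inventory keyed by system name."""

    def __init__(self) -> None:
        self._entries: dict[str, RegistryEntry] = {}

    def register(self, entry: RegistryEntry) -> None:
        self._entries[entry.system_name] = entry

    def by_state(self, state: str) -> list[RegistryEntry]:
        # Supports review questions such as "what is currently deployed?"
        return [e for e in self._entries.values() if e.lifecycle_state == state]
```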
Ontology
A structured definition of entity types and relationship types that helps governance teams describe how records connect and what those connections mean.
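A minimal way to picture an ontology is as a set of declared entity types plus the relationship types allowed between them. The types below are hypothetical examples for a governance context, not a fixed vocabulary.

```python
# Hypothetical entity types for a governance ontology.
ENTITY_TYPES = {"AISystem", "Dataset", "Owner", "Review"}

# Relationship types: (source entity type, relationship, target entity type).
RELATIONSHIP_TYPES = {
    ("AISystem", "trained_on", "Dataset"),
    ("AISystem", "owned_by", "Owner"),
    ("Review", "assesses", "AISystem"),
}

def is_valid_link(source_type: str, relation: str, target_type: str) -> bool:
    """Check that a proposed record link matches a declared relationship type."""
    return (source_type, relation, target_type) in RELATIONSHIP_TYPES
```

Declaring relationship types up front is what gives connected records a shared meaning: a link is only accepted if the ontology says that kind of connection exists.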
Remediation
Follow-up work used to address an identified gap, control issue, or governance concern.
Taxonomy
A controlled vocabulary used to classify governed records consistently across workflows, reports, and operational reviews.
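In code, a controlled vocabulary amounts to a closed set of allowed labels, with classification rejected when a label falls outside it. The risk-tier labels below are an invented example, not terms from any particular framework.

```python
# Hypothetical controlled vocabulary of risk-tier labels.
RISK_TIERS = ("minimal", "limited", "high")

def classify(record: dict, tier: str) -> dict:
    """Attach a risk-tier label only if it comes from the controlled vocabulary."""
    if tier not in RISK_TIERS:
        # Rejecting free-form labels is what keeps classification consistent
        # across workflows, reports, and reviews.
        raise ValueError(f"unknown risk tier: {tier!r}")
    return {**record, "risk_tier": tier}
```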
Tenant workspace
An isolated operating context used to scope users, records, approvals, and audit visibility for a specific organization or environment.
Use-case intake
The early workflow used to capture business purpose, accountable owners, and operating context before governance review and approval begin.