Runtime system records
Maintain AI-system records with ownership, lifecycle stage, deployment target, endpoint references, and business-unit context in one governed workspace.
Feature · AI systems
SentinelAI extends governance beyond model records with a dedicated AI-systems layer for the operational units that go live, inherit governed dependencies, and move through release and oversight decisions.
What this area covers
AI systems help teams govern what runs in production, not just the underlying components. Each record can connect models, approved use cases, datasets, readiness posture, and current release references so governance decisions reflect the deployed system boundary.
Related product areas
Maintain a governed inventory for AI models and use-case context with lifecycle state, ownership, risk posture, and supporting evidence.
Govern versioned prompts, retrieval settings, linked AI systems, and evaluation posture from a dedicated prompt operations record.
Register governed retrieval sources with ingestion status, version history, citation context, and AI-system linkage.
Define governed prompt evaluation suites with baselines, regression thresholds, run evidence, and release-blocking posture.
Manage AI-system release records with approval state, rollback references, dependency snapshots, and invalidation handling.
Bring live assurance signals, telemetry connector management, trigger rules, and evidence-ready monitoring context into AI governance workflows.
Core capabilities
Roll up the models, use cases, and datasets that support the system so reviewers can see the operational dependency set without manual reconciliation.
Track readiness states such as draft, ready, needs review, attention required, and retired so teams know which systems can progress and which need follow-up.
Keep the current release reference and downstream release-governance state tied to the same runtime record used during review and monitoring.
Give governance, platform, and product teams a clearer picture of what is actually deployed instead of relying on model inventories alone.
How teams use it
Step 1
Register the runtime AI system with owners, deployment context, lifecycle state, and the release reference that matters operationally.
Step 2
Connect the models, use cases, and datasets that should roll up into that runtime system before approval and monitoring work begins.
Step 3
Use the AI-system record as the operational source of truth when readiness changes, releases move forward, or incidents require investigation.
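The three steps above can be sketched against a hypothetical in-memory registry. The `Registry` class and its method names are illustrative assumptions for this walkthrough, not SentinelAI's API.

```python
class Registry:
    """Minimal in-memory stand-in for a governed AI-system workspace."""

    def __init__(self):
        self.systems = {}

    # Step 1: register the runtime system with its operational context.
    def register(self, name, owner, deployment, release_ref):
        self.systems[name] = {
            "owner": owner,
            "deployment": deployment,
            "release_ref": release_ref,
            "lifecycle": "draft",
            "dependencies": {"models": [], "use_cases": [], "datasets": []},
        }

    # Step 2: connect the governed dependencies that roll up into it.
    def link(self, name, kind, ref):
        self.systems[name]["dependencies"][kind].append(ref)

    # Step 3: read the record as the operational source of truth.
    def dependency_set(self, name):
        return self.systems[name]["dependencies"]


registry = Registry()
registry.register("support-copilot", owner="jdoe",
                  deployment="prod-eu", release_ref="rel-2024-07")
registry.link("support-copilot", "models", "helper-model-v3")
registry.link("support-copilot", "datasets", "kb-articles-v12")
```

After the two `link` calls, `dependency_set("support-copilot")` returns the rolled-up dependency view a reviewer would consult during readiness changes or incident investigation.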