Governed source registry
Keep a governed, versioned record of every retrieval source that feeds an AI system, from capture through ingestion, use, and retirement.
SentinelAI gives teams a governed source registry for files, URLs, policies, FAQs, runbooks, and other retrieval inputs with versioning, citation context, ingestion status, and AI-system linkage.
What this area covers
RAG source workflows help teams preserve a structured record of the governed content behind retrieval-augmented AI systems. Instead of treating source collections as opaque vector-store inputs, SentinelAI keeps source metadata, version history, chunking signals, and system dependencies reviewable.
Related product areas
Track governed runtime systems that combine models, approved use cases, datasets, release state, and readiness into one operational record.
Govern versioned prompts, retrieval settings, linked AI systems, and evaluation posture from a dedicated prompt operations record.
Bring datasets, lineage, approvals, taxonomy-backed controls, catalog integrations, and quality gates into the AI governance workflow.
Manage AI-system release records with approval state, rollback references, dependency snapshots, and invalidation handling.
Define governed prompt evaluation suites with baselines, regression thresholds, run evidence, and release-blocking posture.
Operate taxonomy, ontology, relationship, and graph-backed governance workflows across models, use cases, datasets, controls, and evidence.
Core capabilities
Track source name, kind, version, URI, citation label, and descriptive metadata for each retrieval asset that supports an AI system.
See whether a source is in the draft, ingesting, active, error, or archived state so teams can tell which knowledge assets are usable right now.
Preserve chunk counts and citation-oriented context so reviewers can understand how governed knowledge becomes retrievable evidence.
Connect each source to the AI systems that rely on it instead of leaving retrieval dependencies implicit or undocumented.
Create a durable record for source updates and retirements so retrieval changes can be reviewed alongside release and prompt changes.
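The capabilities above imply a per-source record combining identity, lifecycle state, chunking signals, and system linkage. A minimal sketch in Python of what such a record might look like; the field names, status values, and `is_usable` rule here are illustrative assumptions, not SentinelAI's actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class SourceStatus(Enum):
    # Lifecycle states named in the capability list above.
    DRAFT = "draft"
    INGESTING = "ingesting"
    ACTIVE = "active"
    ERROR = "error"
    ARCHIVED = "archived"

@dataclass
class RagSource:
    # Identity and citation context (field names are assumptions).
    name: str
    kind: str                 # e.g. "file", "url", "policy", "faq", "runbook"
    version: str
    uri: str
    citation_label: str
    metadata: dict = field(default_factory=dict)
    # Lifecycle, chunking signal, and AI-system linkage.
    status: SourceStatus = SourceStatus.DRAFT
    chunk_count: int = 0
    linked_system_ids: list = field(default_factory=list)

    def is_usable(self) -> bool:
        # Only an active source should back governed retrieval.
        return self.status is SourceStatus.ACTIVE
```

Linking sources to systems via explicit IDs is what makes retrieval dependencies reviewable rather than implicit: a reviewer can ask which systems depend on a source before retiring it.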
How teams use it
Step 1
Capture the type, version, URI, citation labeling, and metadata for each governed retrieval source entering the workflow.
Step 2
Monitor whether a source is ingesting cleanly, is active for use, or needs remediation before it can support governed retrieval.
Step 3
Use the source record as the knowledge-layer reference when prompts change, releases move forward, or incidents require source review.
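The three steps above amount to moving a source through a constrained lifecycle and only trusting it while it is active. A small sketch of that lifecycle as a transition table; the allowed transitions are assumptions about a plausible state machine, not SentinelAI's documented behavior:

```python
# Assumed lifecycle for a governed source record: drafts are ingested,
# ingestion succeeds or errors, active sources can be re-ingested on
# update or archived on retirement. Archived is terminal.
ALLOWED_TRANSITIONS = {
    "draft":     {"ingesting", "archived"},
    "ingesting": {"active", "error"},
    "error":     {"ingesting", "archived"},   # remediate and retry, or retire
    "active":    {"ingesting", "archived"},   # re-ingest an update, or retire
    "archived":  set(),
}

def can_transition(current: str, target: str) -> bool:
    """Return True if a source may move from `current` to `target`."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

Making the transition table explicit is what lets a retirement or re-ingestion be reviewed alongside prompt and release changes, instead of a status flag silently flipping.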