Overview

Guardrails is a production-grade safety, security, and compliance platform for Large Language Model (LLM) applications.

It helps teams control, observe, and scale AI systems by enforcing guardrails at runtime: before, during, and after model execution.
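
To make those three phases concrete, here is a minimal sketch of checks wrapped around a model call. Every name in it (check_input, check_stream, check_output, call_model) is a hypothetical stand-in, not the Guardrails API.

    # Illustrative only: the three enforcement phases around a model call.
    # All function names here are assumptions, not the Guardrails SDK.

    def check_input(prompt: str) -> None:
        # before execution: e.g. scan for prompt injection
        if "ignore previous instructions" in prompt.lower():
            raise ValueError("blocked: possible prompt injection")

    def check_stream(chunk: str) -> None:
        # during execution: e.g. halt if a secret pattern appears mid-stream
        if "API_KEY" in chunk:
            raise ValueError("blocked: secret detected in stream")

    def check_output(reply: str) -> str:
        # after execution: e.g. redact a known email address
        return reply.replace("alice@example.com", "[REDACTED]")

    def call_model(prompt: str):
        # stand-in for a real LLM call; yields streamed chunks
        yield "Hello, "
        yield "alice@example.com"

    def guarded_completion(prompt: str) -> str:
        check_input(prompt)                      # before
        chunks = []
        for chunk in call_model(prompt):         # during
            check_stream(chunk)
            chunks.append(chunk)
        return check_output("".join(chunks))     # after

    print(guarded_completion("What is Alice's email?"))
    # -> Hello, [REDACTED]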

What problems does Guardrails solve?

Modern AI applications face challenges such as:

  • Prompt injection and jailbreak attempts
  • Leakage of secrets, PII, or internal data
  • Unsafe or policy-violating content
  • Uncontrolled tool or agent behavior
  • Lack of observability and auditability
  • Regulatory and compliance requirements

Guardrails addresses these issues with a modular, extensible enforcement engine designed for real-world production environments.

Core principles

Guardrails is built on the following principles:

Safety by design

Guardrails are enforced automatically, not manually, reducing human error and ensuring consistent behavior.

Modular architecture

Each guardrail is an independent unit that can be composed, configured, and reused across applications.
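As a sketch of what that composition could look like, assuming a hypothetical guardrail interface (none of these class or method names come from the actual SDK):

    # Illustrative sketch only: a guardrail as an independent, composable
    # unit. The Guardrail protocol and these classes are assumptions, not
    # the real Guardrails SDK.

    from typing import Protocol

    class Guardrail(Protocol):
        def apply(self, text: str) -> str: ...

    class RedactEmails:
        def apply(self, text: str) -> str:
            return text.replace("@", "[at]")  # toy redaction for illustration

    class MaxLength:
        def __init__(self, limit: int) -> None:
            self.limit = limit

        def apply(self, text: str) -> str:
            return text[: self.limit]

    class Pipeline:
        """Compose independent guardrails into one reusable unit."""

        def __init__(self, *rails: Guardrail) -> None:
            self.rails = rails

        def apply(self, text: str) -> str:
            for rail in self.rails:
                text = rail.apply(text)
            return text

    pipeline = Pipeline(RedactEmails(), MaxLength(200))
    print(pipeline.apply("contact: alice@example.com"))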

Runtime enforcement

Guardrails execute in real time and can block, warn, redact, or modify content as it flows through the system.
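
A sketch of those four outcomes as a single decision function follows; the Action enum and decide() are illustrative assumptions, not the platform's real decision API.

    # Sketch of the enforcement outcomes named above. The Action enum and
    # decide() are assumptions for illustration, not a documented API.

    from enum import Enum

    class Action(Enum):
        ALLOW = "allow"
        BLOCK = "block"
        WARN = "warn"
        REDACT = "redact"

    def decide(text: str) -> tuple[Action, str]:
        if "rm -rf /" in text:
            return Action.BLOCK, ""                  # refuse outright
        if "password" in text.lower():
            return Action.REDACT, text.replace("password", "[REDACTED]")
        if len(text) > 500:
            return Action.WARN, text                 # pass through, but flag
        return Action.ALLOW, text

    action, result = decide("my password is hunter2")
    print(action, result)   # Action.REDACT my [REDACTED] is hunter2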

Observability first

Every execution is tracked, enabling analytics, monitoring, and auditing.
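
For illustration, the kind of record such tracking might keep per execution could look like this; the field names are assumptions, not the actual analytics schema.

    # Illustrative only: a per-execution audit record. Field names are
    # assumptions, not the real Guardrails analytics schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ExecutionRecord:
        guardrail: str            # which guardrail ran
        action: str               # allow / block / warn / redact
        latency_ms: float
        tenant: str               # useful in multi-tenant deployments
        timestamp: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    audit_log: list[ExecutionRecord] = []
    audit_log.append(ExecutionRecord("pii-redactor", "redact", 4.2, "acme"))
    print(audit_log[0])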

Enterprise readiness

Designed for large-scale deployments, multi-tenant systems, and regulated industries.

Where Guardrails fits

Guardrails can be used in:

  • Chatbots and assistants
  • Agentic AI systems
  • Internal tools
  • SaaS platforms
  • Developer APIs
  • Regulated applications (healthcare, finance, education)

It integrates with your application via the surfaces below; a minimal REST sketch follows the list:

  • REST APIs
  • SDKs
  • Profiles and policies
  • Analytics pipelines
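
For instance, calling a REST enforcement endpoint from Python might look like the following. The URL, payload shape, and response fields are placeholder assumptions for illustration, not documented API.

    # Hypothetical REST integration sketch. The endpoint path, payload, and
    # response fields are illustrative assumptions, not documented API.

    import json
    import urllib.request

    payload = {
        "profile": "default",          # which profile/policy set to apply
        "input": "user message here",
    }
    req = urllib.request.Request(
        "https://guardrails.example.com/v1/enforce",   # placeholder URL
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp))         # e.g. {"action": "allow", ...}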

Next steps

  • Learn how Guardrails is structured → Architecture
  • Understand the core building blocks → Core Concepts
  • Get running quickly → Getting Started