
Guardrails AI

🛤️ What is Guardrails AI?

Guardrails AI is the leading open-source framework for defining and enforcing assurance of LLM applications. It offers:

✅ Framework for creating custom validations at an application level (see the validator sketch after this list)

✅ Orchestration of prompting → verification → re-prompting

✅ Library of commonly used validators for multiple use cases

✅ Specification language for communicating requirements to the LLM
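
As an illustration of application-level custom validation, below is a hedged sketch of a validator built on the register_validator / Validator API. The validator name, the block list, and the fix behavior are purely hypothetical, and the exact validate() signature varies between Guardrails releases, so treat this as a sketch rather than a reference implementation.

```python
# A hypothetical custom validator. The PassResult/FailResult API shown here
# exists in recent Guardrails releases, but older releases use a different
# validate() signature, so treat this as a sketch rather than a reference.
from guardrails.validators import (
    FailResult,
    PassResult,
    Validator,
    register_validator,
)


@register_validator(name="no-blocked-words", data_type="string")
class NoBlockedWords(Validator):
    """Fail (and optionally fix) outputs that contain blocked words."""

    BLOCKED = {"foo", "bar"}  # hypothetical application-specific block list

    def validate(self, value, metadata):
        lowered = value.lower()
        if any(word in lowered for word in self.BLOCKED):
            return FailResult(
                error_message="Output contains blocked words.",
                fix_value=" ".join(
                    w for w in value.split() if w.lower() not in self.BLOCKED
                ),
            )
        return PassResult()
```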

🚒 Under the hood

Guardrails provides an object definition called a Rail for enforcing a specification on an LLM output, and a lightweight wrapper called a Guard around LLM API calls to implement this spec.

  1. rail (Reliable AI Markup Language) files for specifying structure and type information, validators, and corrective actions over LLM outputs. The concept of a Rail has evolved beyond markup: Rails can now be defined in Pydantic or RAIL for structured outputs, or directly in Python for string outputs.
  2. Guard wraps around LLM API calls to structure, validate, and correct the outputs.
The flow is: create a RAIL spec → initialize a Guard from the spec → wrap the LLM API call with the Guard.
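
Putting the two pieces together, the sketch below defines a small RAIL spec as a string, initializes a Guard from it, and wraps an OpenAI completion call. The spec contents, prompt variables, and the two-tuple return value are illustrative assumptions; exact template syntax and return types differ across Guardrails and OpenAI client versions.

```python
# A minimal sketch of the RAIL -> Guard -> wrapped LLM call flow above.
# Assumes the legacy openai.Completion API and a configured API key; the
# spec, prompt variables, and return values are illustrative, not canonical.
import guardrails as gd
import openai

rail_spec = """
<rail version="0.1">
<output>
    <string name="summary" description="A one-sentence summary of the document." />
</output>
<prompt>
Summarize the following document in one sentence:

${document}

${gr.complete_json_suffix}
</prompt>
</rail>
"""

# 1. Initialize a Guard from the RAIL spec.
guard = gd.Guard.from_rail_string(rail_spec)

# 2. Wrap the LLM API call; Guardrails prompts, validates, and re-asks as needed.
raw_llm_output, validated_output = guard(
    openai.Completion.create,
    prompt_params={"document": "Guardrails validates and corrects LLM outputs."},
    engine="text-davinci-003",
    max_tokens=256,
    temperature=0.0,
)

print(validated_output)  # e.g. {"summary": "..."}
```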

Check out the Getting Started guide to learn how to use Guardrails.

📍 Roadmap

  • JavaScript SDK
  • Wider variety of language support (TypeScript, Go, etc.)
  • Informative logging
  • VSCode extension for .rail files
  • Next version of .rail format
  • Validator playground
  • Input Validation
  • Pydantic 2.0 support
  • Improved re-asking logic
  • Integration with LangChain
  • Support for more LLM providers