What is the Guardrails Hub?

The Guardrails Hub is a community-driven effort to create and share guardrails for common LLM validation use cases. You can browse guardrails built by others, use them in your own applications, and contribute your own to help the community. You can check out the Hub here.

The Hub allows you to mix and match guardrails and build your own “guard” that runs in your critical path in production, ensuring that whenever any of the risks covered by your guard occurs, an appropriate action is taken.
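
For instance, a guard can wrap a single hub validator and raise an exception when the corresponding risk is detected. The snippet below is a minimal sketch using the Python `guardrails` package; the `ToxicLanguage` validator and the hub install path shown in the comment are assumptions for illustration.

```python
# Assumed install step for the validator (run once):
#   guardrails hub install hub://guardrails/toxic_language

from guardrails import Guard
from guardrails.hub import ToxicLanguage

# Build a guard that raises an exception whenever toxic language is detected
# in the text passed to it.
guard = Guard().use(ToxicLanguage, on_fail="exception")

# Validate LLM output before returning it to the user; clean text passes through.
guard.validate("Thanks for reaching out! We're happy to help with your order.")
```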

What are Guardrails?

Guardrails are ML models or rules that validate the output of a language model. They ensure that the model's output is safe, accurate, and meets the user's requirements. Guardrails can check for issues such as bias, toxicity, and other risks that arise when using a language model.

Each guardrail validates the presence of a specific type of risk, ranging from unsafe code and hallucinations to regulation violations, company disrepute, toxicity, or an unsatisfactory user experience. Some examples of guardrails on the Hub are listed below, followed by a sketch of how several can be combined:

  • Anti-hallucination guardrails
  • No mentions of competitors (for an organization)
  • No toxic language
  • Accurate summarization
  • No PII leakage
  • No invalid or unsafe generated code (for code-gen use cases)
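
To combine several of these checks into a single guard, multiple hub validators can be chained together. The sketch below assumes the `CompetitorCheck` and `ToxicLanguage` validators have already been installed from the Hub; the competitor name is a placeholder.

```python
from guardrails import Guard
from guardrails.hub import CompetitorCheck, ToxicLanguage

# Chain multiple hub validators into one guard; each validator runs on the
# LLM output, and any failure triggers its configured on_fail action.
guard = Guard().use_many(
    CompetitorCheck(competitors=["Acme Corp"], on_fail="exception"),  # hypothetical competitor list
    ToxicLanguage(on_fail="exception"),
)

# Output that mentions a listed competitor or contains toxic language
# raises an exception instead of reaching the user.
guard.validate("Our product ships with 24/7 support.")
```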