Use Guardrails with any LLM

Guardrails' Guard wrappers provide a simple way to add Guardrails to your LLM API calls, and they are designed to work with any LLM API.

There are three ways to use Guardrails with an LLM API:

  1. Natively-supported LLMs: Guardrails provides out-of-the-box wrappers for OpenAI, Cohere, Anthropic, and HuggingFace. If you're using any of these APIs, check out the documentation in this section.
  2. LLMs supported through LiteLLM: Guardrails provides an easy integration with LiteLLM, a lightweight abstraction over LLM APIs that supports more than 100 LLMs. If you're using an LLM that isn't natively supported by Guardrails, you can use LiteLLM to integrate it with Guardrails. Check out the documentation in this section.
  3. Build a custom LLM wrapper: If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. Check out the documentation in this section.
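
All of the examples below use the ProfanityFree validator from the Guardrails Hub. If you haven't installed it yet, it can typically be installed with the Guardrails CLI:

guardrails hub install hub://guardrails/profanity_free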

Natively-supported LLMs

Guardrails provides native support for a select few LLM providers (OpenAI, Cohere, Anthropic, and HuggingFace) as well as Manifest. If you're using any of these, you can use Guardrails' out-of-the-box wrappers to add Guardrails to your LLM API calls.

import openai
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Wrap openai API call
validated_response = guard(
    openai.chat.completions.create,
    prompt="Can you generate a list of 10 things that are not food?",
    model="gpt-3.5-turbo",
    max_tokens=100,
    temperature=0.0,
)
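
In recent Guardrails releases, the guarded call above returns a validation outcome object rather than a plain string. A minimal sketch of inspecting it, assuming the validation_passed, validated_output, and raw_llm_output attributes:

# Inspect the outcome of the guarded call
print(validated_response.validation_passed)  # True if all validators passed
print(validated_response.validated_output)   # The validated (and possibly fixed) output
print(validated_response.raw_llm_output)     # The raw output returned by the LLM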

LLMs supported via LiteLLM

LiteLLM is a lightweight wrapper that unifies the interface for more than 100 LLMs. Guardrails natively supports only a handful of LLMs, but you can pair it with LiteLLM to work with any LLM that LiteLLM supports. You can read more about the LLMs supported by LiteLLM in the LiteLLM documentation.

In order to use Guardrails with any of the LLMs supported through LiteLLM, you need to do the following:

  1. Call the Guard.__call__ method with litellm.completion as the first argument.
  2. Pass any additional LiteLLM arguments as keyword arguments to the Guard.__call__ method.

Some examples of using Guardrails with LiteLLM are shown below.

Use Guardrails with Ollama

import litellm
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Call the Guard to wrap the LLM API call
validated_response = guard(
    litellm.completion,
    model="ollama/llama2",
    max_tokens=500,
    api_base="http://localhost:11434",
    msg_history=[{"role": "user", "content": "hello"}],
)

Use Guardrails with Azure's OpenAI endpoint

import os

import litellm
from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Call the Guard to wrap the Azure OpenAI call
validated_response = guard(
    litellm.completion,
    model="azure/<your deployment name>",
    max_tokens=500,
    api_base=os.environ.get("AZURE_OPENAI_API_BASE"),
    api_version="2023-05-15",
    api_key=os.environ.get("AZURE_OPENAI_API_KEY"),
    msg_history=[{"role": "user", "content": "hello"}],
)
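
The Azure example reads its endpoint and key from environment variables. Here is a small, optional sketch (not part of the Guardrails API) that fails fast if they are missing:

# Optional sanity check before making the guarded Azure call
import os

for var in ("AZURE_OPENAI_API_BASE", "AZURE_OPENAI_API_KEY"):
    if not os.environ.get(var):
        raise EnvironmentError(f"Set {var} before running the Azure example.")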

Build a custom LLM wrapper

If you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. To do so, create a function that accepts a prompt as a string, plus any other arguments you want to pass to the LLM API as keyword arguments. The function should return the output of the LLM API as a string.

from typing import Optional

from guardrails import Guard
from guardrails.hub import ProfanityFree

# Create a Guard
guard = Guard().use(ProfanityFree())

# Function that takes the prompt as a string and returns the LLM output as a string
def my_llm_api(
    prompt: Optional[str] = None,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs,
) -> str:
    """Custom LLM API wrapper.

    At least one of prompt, instruction or msg_history should be provided.

    Args:
        prompt (str): The prompt to be passed to the LLM API
        instruction (str): The instruction to be passed to the LLM API
        msg_history (list[dict]): The message history to be passed to the LLM API
        **kwargs: Any additional arguments to be passed to the LLM API

    Returns:
        str: The output of the LLM API
    """
    # Call your LLM API here (replace some_llm with your own client call)
    llm_output = some_llm(prompt, instruction, msg_history, **kwargs)

    return llm_output

# Wrap your LLM API call
validated_response = guard(
    my_llm_api,
    prompt="Can you generate a list of 10 things that are not food?",
    **kwargs,  # any additional keyword arguments to forward to my_llm_api
)
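
For concreteness, here is a minimal sketch of a custom wrapper that calls a hypothetical local HTTP endpoint. The URL, payload shape, and response field are illustrative assumptions, not part of Guardrails; the only contract Guardrails relies on is that the function returns the LLM output as a string.

from typing import Optional

import requests

from guardrails import Guard
from guardrails.hub import ProfanityFree

def my_local_llm(
    prompt: Optional[str] = None,
    instruction: Optional[str] = None,
    msg_history: Optional[list[dict]] = None,
    **kwargs,
) -> str:
    """Hypothetical wrapper around a local HTTP text-generation endpoint."""
    # The endpoint URL and JSON schema below are assumptions; adapt them to your service.
    response = requests.post(
        "http://localhost:8080/generate",
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    # Assume the service responds with {"text": "..."}.
    return response.json()["text"]

# Create a Guard and wrap the custom LLM function
guard = Guard().use(ProfanityFree())

validated_response = guard(
    my_local_llm,
    prompt="Can you generate a list of 10 things that are not food?",
)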