History and Logs

Call

class Call(ICall, ArbitraryModel)

A Call represents a single execution of a Guard. One Call is created each time the user invokes the Guard.__call__, Guard.parse, or Guard.validate method.

Attributes:

  • iterations Stack[Iteration] - A stack of iterations: one for the initial validation round and one for each reask that occurs during the Call.
  • inputs CallInputs - The inputs as passed in to Guard.__call__, Guard.parse, or Guard.validate.
  • exception Optional[Exception] - The exception that interrupted the Guard execution.
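The Call/Iteration relationship can be sketched with plain mock classes (hypothetical stand-ins, not the real guardrails types): one Call per Guard invocation, holding one Iteration for the initial validation round plus one per reask.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical mocks illustrating the data model described above.
@dataclass
class MockIteration:
    index: int  # position of this iteration within the Call

@dataclass
class MockCall:
    iterations: List[MockIteration] = field(default_factory=list)

call = MockCall()
call.iterations.append(MockIteration(index=0))  # initial validation round
call.iterations.append(MockIteration(index=1))  # first reask
call.iterations.append(MockIteration(index=2))  # second reask

print(len(call.iterations))  # 3
```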

prompt

@property
def prompt() -> Optional[str]

The prompt as provided by the user when initializing or calling the Guard.

prompt_params

@property
def prompt_params() -> Optional[Dict]

The prompt parameters as provided by the user when initializing or calling the Guard.

compiled_prompt

@property
def compiled_prompt() -> Optional[str]

The initial compiled prompt that was passed to the LLM on the first call.

reask_prompts

@property
def reask_prompts() -> Stack[Optional[str]]

The compiled prompts used during reasks.

Does not include the initial prompt.
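As an illustrative sketch (plain lists rather than the real Stack type): if a Call made one initial LLM request plus two reasks, the compiled prompts split as follows.

```python
# Hypothetical prompt strings for one initial request and two reasks.
all_prompts = ["initial prompt", "reask prompt 1", "reask prompt 2"]

compiled_prompt = all_prompts[0]  # corresponds to Call.compiled_prompt
reask_prompts = all_prompts[1:]   # corresponds to Call.reask_prompts

print(reask_prompts)  # ['reask prompt 1', 'reask prompt 2']
```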

instructions

@property
def instructions() -> Optional[str]

The instructions as provided by the user when initializing or calling the Guard.

compiled_instructions

@property
def compiled_instructions() -> Optional[str]

The initial compiled instructions that were passed to the LLM on the first call.

reask_instructions

@property
def reask_instructions() -> Stack[str]

The compiled instructions used during reasks.

Does not include the initial instructions.

logs

@property
def logs() -> Stack[str]

Returns all logs from all iterations as a stack.

tokens_consumed

@property
def tokens_consumed() -> Optional[int]

Returns the total number of tokens consumed during all iterations of this call.

prompt_tokens_consumed

@property
def prompt_tokens_consumed() -> Optional[int]

Returns the total number of prompt tokens consumed during all iterations of this call.

completion_tokens_consumed

@property
def completion_tokens_consumed() -> Optional[int]

Returns the total number of completion tokens consumed during all iterations of this call.
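The three token properties aggregate per-iteration counts, which can be sketched like so (plain dicts stand in for the real Iteration objects; the numbers are made up):

```python
# Hypothetical per-iteration token usage for an initial round and one reask.
iterations = [
    {"prompt_tokens": 120, "completion_tokens": 45},  # initial round
    {"prompt_tokens": 150, "completion_tokens": 30},  # reask
]

prompt_tokens_consumed = sum(i["prompt_tokens"] for i in iterations)
completion_tokens_consumed = sum(i["completion_tokens"] for i in iterations)
tokens_consumed = prompt_tokens_consumed + completion_tokens_consumed

print(tokens_consumed)  # 345
```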

raw_outputs

@property
def raw_outputs() -> Stack[str]

The exact outputs from all LLM calls.

parsed_outputs

@property
def parsed_outputs() -> Stack[Union[str, List, Dict]]

The outputs from the LLM after undergoing parsing but before validation.

validation_response

@property
def validation_response() -> Optional[Union[str, List, Dict, ReAsk]]

The aggregated responses from the validation process across all iterations within the current call.

This value could contain ReAsks.

fixed_output

@property
def fixed_output() -> Optional[Union[str, List, Dict]]

The cumulative output from the validation process across all iterations of this call, with any automatic fixes applied.

Could still contain ReAsks if a fix was not available.

guarded_output

@property
def guarded_output() -> Optional[Union[str, List, Dict]]

The complete validated output after all stages of validation are completed.

This property contains the aggregate validated output after all validation stages have been completed. Some values in the validated output may be "fixed" values that were corrected during validation.

This will only have a value if the Guard is in a passing state OR if the on-fail action is "noop".
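The gating rule above can be sketched as a small function (illustrative only, not the real implementation; the names and the default on-fail action are assumptions):

```python
from typing import Any, Optional

def guarded_output(status: str, merged_output: Any,
                   on_fail_action: str = "exception") -> Optional[Any]:
    # Expose the merged output only for a passing run, or when the
    # on-fail action is a no-op.
    if status == "pass" or on_fail_action == "noop":
        return merged_output
    return None

print(guarded_output("pass", {"name": "Ada"}))  # {'name': 'Ada'}
print(guarded_output("fail", {"name": "Ada"}))  # None
```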

reasks

@property
def reasks() -> Stack[ReAsk]

Reasks generated during validation that could not be automatically fixed.

These would be incorporated into the prompt for the next LLM call if additional reasks were granted.

validator_logs

@property
def validator_logs() -> Stack[ValidatorLogs]

The results of each individual validation performed on the LLM responses during all iterations.

error

@property
def error() -> Optional[str]

The error message from any exception that was raised and interrupted the run.

failed_validations

@property
def failed_validations() -> Stack[ValidatorLogs]

The validator logs for any validations that failed during the entirety of the run.

status

@property
def status() -> str

Returns the cumulative status of the run based on the validity of the final merged output.

tree

@property
def tree() -> Tree

Returns a tree representation of this Call and its iterations, suitable for pretty-printing the run's logs.

Iteration

class Iteration(IIteration, ArbitraryModel)

An Iteration represents a single iteration of the validation loop, including a single call to the LLM if applicable.

Attributes:

  • id str - The unique identifier for the iteration.
  • call_id str - The unique identifier for the Call that this iteration is a part of.
  • index int - The index of this iteration within the Call.
  • inputs Inputs - The inputs for the validation loop.
  • outputs Outputs - The outputs from the validation loop.

logs

@property
def logs() -> Stack[str]

Returns the logs from this iteration as a stack.

tokens_consumed

@property
def tokens_consumed() -> Optional[int]

Returns the total number of tokens consumed during this iteration.

prompt_tokens_consumed

@property
def prompt_tokens_consumed() -> Optional[int]

Returns the number of prompt/input tokens consumed during this iteration.

completion_tokens_consumed

@property
def completion_tokens_consumed() -> Optional[int]

Returns the number of completion/output tokens consumed during this iteration.

raw_output

@property
def raw_output() -> Optional[str]

The exact output from the LLM.

parsed_output

@property
def parsed_output() -> Optional[Union[str, List, Dict]]

The output from the LLM after undergoing parsing but before validation.

validation_response

@property
def validation_response() -> Optional[Union[ReAsk, str, List, Dict]]

The response from a single stage of validation.

The validation response is the output of a single stage of validation and may be a combination of valid output and reasks. Note that a Guard may run validation multiple times if reasks occur. To access the final output after all validation stages are complete, see Call.guarded_output.

guarded_output

@property
def guarded_output() -> Optional[Union[str, List, Dict]]

Any valid values after undergoing validation.

Some values in the validated output may be "fixed" values that were corrected during validation. This property may be a partial structure if field-level reasks occur.
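Field-level reasks can be sketched as follows (the marker dicts are hypothetical placeholders, not the real ReAsk class): fields that passed validation stay in the iteration's guarded output, while failing fields are pulled out as reasks.

```python
# Hypothetical validation result: one passing field, one failing field.
validated = {
    "name": "Ada Lovelace",                              # passed validation
    "age": {"reask": "age must be a positive integer"},  # failed -> reask marker
}

def is_reask(v):
    return isinstance(v, dict) and "reask" in v

guarded = {k: v for k, v in validated.items() if not is_reask(v)}
reasks = [v["reask"] for v in validated.values() if is_reask(v)]

print(guarded)      # {'name': 'Ada Lovelace'} -- a partial structure
print(len(reasks))  # 1
```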

reasks

@property
def reasks() -> Sequence[ReAsk]

Reasks generated during validation.

These would be incorporated into the prompt for the next LLM call.

validator_logs

@property
def validator_logs() -> List[ValidatorLogs]

The results of each individual validation performed on the LLM response during this iteration.

error

@property
def error() -> Optional[str]

The error message from any exception that was raised and interrupted this iteration.

exception

@property
def exception() -> Optional[Exception]

The exception that interrupted this iteration.

failed_validations

@property
def failed_validations() -> List[ValidatorLogs]

The validator logs for any validations that failed during this iteration.

error_spans_in_output

@property
def error_spans_in_output() -> List[ErrorSpan]

The error spans from the LLM response.

These indices are relative to the complete LLM output.

status

@property
def status() -> str

Representation of the end state of this iteration.

One of: pass, fail, error, not run.

Inputs

class Inputs(IInputs, ArbitraryModel)

Inputs represent the input data that is passed into the validation loop.

Attributes:

  • llm_api Optional[PromptCallableBase] - The constructed class for calling the LLM.
  • llm_output Optional[str] - The string output from an external LLM call provided by the user via Guard.parse.
  • instructions Optional[Instructions] - The constructed Instructions class for chat model calls.
  • prompt Optional[Prompt] - The constructed Prompt class.
  • msg_history Optional[List[Dict]] - The message history provided by the user for chat model calls.
  • prompt_params Optional[Dict] - The parameters provided by the user that will be formatted into the final LLM prompt.
  • num_reasks Optional[int] - The total number of reasks allowed; user provided or defaulted.
  • metadata Optional[Dict[str, Any]] - The metadata provided by the user to be used during validation.
  • full_schema_reask Optional[bool] - Whether reasks were performed across the entire schema or at the field level.
  • stream Optional[bool] - Whether or not streaming was used.
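How prompt_params are formatted into the final prompt can be sketched with the standard library's string.Template, which here stands in for guardrails' own prompt compilation step (the ${...} placeholder syntax and the prompt text are illustrative):

```python
from string import Template

# A prompt template with one ${...} placeholder, plus the user's params.
prompt = Template("Summarize the following document:\n${document}")
prompt_params = {"document": "Guardrails validates LLM outputs."}

compiled_prompt = prompt.substitute(prompt_params)
print(compiled_prompt)
```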

Outputs

class Outputs(IOutputs, ArbitraryModel)

Outputs represent the data that is output from the validation loop.

Attributes:

  • llm_response_info Optional[LLMResponse] - Information from the LLM response.
  • raw_output Optional[str] - The exact output from the LLM.
  • parsed_output Optional[Union[str, List, Dict]] - The output parsed from the LLM response as it was passed into validation.
  • validation_response Optional[Union[str, ReAsk, List, Dict]] - The response from the validation process.
  • guarded_output Optional[Union[str, List, Dict]] - Any valid values after undergoing validation. Some values may be "fixed" values that were corrected during validation. This property may be a partial structure if field level reasks occur.
  • reasks List[ReAsk] - Information from the validation process used to construct a ReAsk to the LLM on validation failure. Default [].
  • validator_logs List[ValidatorLogs] - The results of each individual validation. Default [].
  • error Optional[str] - The error message from any exception that was raised and interrupted the process.
  • exception Optional[Exception] - The exception that interrupted the process.

failed_validations

@property
def failed_validations() -> List[ValidatorLogs]

Returns the validator logs for any validation that failed.

error_spans_in_output

@property
def error_spans_in_output() -> List[ErrorSpan]

The error spans from the LLM response.

These indices are relative to the complete LLM output.

status

@property
def status() -> str

Representation of the end state of the validation run.

One of: pass, fail, error, not run.

CallInputs

class CallInputs(Inputs, ICallInputs, ArbitraryModel)

CallInputs represent the input data that is passed into the Guard by the user. It inherits from Inputs, with the overrides and additional attributes below.

Attributes:

  • llm_api Optional[Callable[[Any], Awaitable[Any]]] - The LLM function provided by the user during Guard.__call__ or Guard.parse.
  • prompt Optional[str] - The prompt string as provided by the user.
  • instructions Optional[str] - The instructions string as provided by the user.
  • args List[Any] - Additional arguments for the LLM as provided by the user. Default [].
  • kwargs Dict[str, Any] - Additional keyword arguments for the LLM as provided by the user. Default {}.
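How args and kwargs are forwarded to the user-provided llm_api can be sketched like this (my_llm is a hypothetical stand-in, not a real LLM client):

```python
def my_llm(prompt, *args, **kwargs):
    # Echo what was received so the forwarding is visible.
    return {"prompt": prompt, "args": args, "kwargs": kwargs}

# A plain dict standing in for a CallInputs instance.
call_inputs = {
    "llm_api": my_llm,
    "prompt": "Hello",
    "args": ["positional-extra"],
    "kwargs": {"temperature": 0.0},
}

response = call_inputs["llm_api"](
    call_inputs["prompt"],
    *call_inputs["args"],
    **call_inputs["kwargs"],
)
print(response["kwargs"])  # {'temperature': 0.0}
```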