Defend

Modules

Guard modules contribute prompt fragments to the guarding provider for the input or output direction; configure them under guards.input.modules and guards.output.modules.

Modules are small policy units implemented as system-prompt fragments. They are evaluated by your configured LLM provider (claude or openai) when that provider performs semantic guarding. The local defend path focuses on the pipeline and classifier; even so, list modules under guards.input.modules / guards.output.modules whenever your input or output provider supports them.
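As a minimal sketch, modules are listed per direction under the guards section. The module names below are illustrative examples, not a canonical list:

```yaml
# Illustrative config sketch; module names are examples only.
guards:
  input:
    provider: claude      # LLM provider that evaluates input modules
    modules:
      - injection
      - pii
  output:
    provider: claude      # output guarding also needs an LLM provider
    modules:
      - pii
```

The guard router filters this list by direction, so a module listed only under guards.input.modules is never sent to the output provider.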

Directions

  • input - applies on /v1/guard/input when the input provider stack uses modules.
  • output - applies on /v1/guard/output when output guarding is enabled.

The guard router filters modules by direction before calling the provider.

Optional context field

Every module accepts an optional context string in its YAML config. Trusted application owners can use it to append calibration guidance to the module fragment (see build_system_prompt in the codebase). Do not pass end-user controlled text into context.
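As a sketch of how context might be supplied, assuming a per-module mapping form for module entries (the exact schema is an assumption here, not confirmed by this page):

```yaml
# Hypothetical per-module config; 'context' is calibration guidance
# appended to the module's prompt fragment (see build_system_prompt).
# Only trusted application owners should set this value.
guards:
  input:
    modules:
      - name: injection
        context: >-
          Treat encoded payloads (base64, hex) as suspicious even when
          the surrounding text appears benign.
```

Because context is injected into the system prompt, end-user controlled text here would itself be a prompt-injection vector.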

Output guarding requires an LLM provider

POST /v1/guard/output requires guards.output.provider of claude or openai when output guarding is enabled. The defend provider does not drive module-based output evaluation.
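A minimal sketch of an output-guarding configuration under this constraint (module name illustrative):

```yaml
# Output guarding must name an LLM provider; the local defend
# provider alone will not run module-based output evaluation.
guards:
  output:
    provider: openai   # or claude
    modules:
      - pii
```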

Module reference

Use the sidebar sections (Security, Privacy, Safety, Policy, Quality, Reliability) or jump directly to a module page, for example injection.