Concepts
How Defend fits into your application, how the pipeline layers work, and how sessions tie input and output together.
Defend is a guardrail service: you call it at two boundaries, before user input reaches the LLM (the input guard) and before you expose model output to the user (the output guard). Configuration lives in defend.config.yaml; behavior combines heuristics, optional local ML when you choose the defend provider, session state, and optional LLM-based evaluation.
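The two-boundary pattern can be sketched as follows. This is a minimal illustration, not the real SDK: `DefendClient`, `check_input`, `check_output`, and the `Decision` shape are all assumed names for the sake of the example.

```python
# Hypothetical sketch of the two-boundary guard pattern.
# DefendClient, check_input, and check_output are illustrative names,
# NOT the actual Defend SDK surface.

class Decision:
    def __init__(self, action: str):
        self.action = action  # "pass", "flag", or "block"

class DefendClient:
    """Stand-in client; a real client would load defend.config.yaml."""

    def check_input(self, text: str, session_id: str) -> Decision:
        # Input boundary: runs before the prompt reaches the LLM.
        return Decision("block" if "ignore previous" in text.lower() else "pass")

    def check_output(self, text: str, session_id: str) -> Decision:
        # Output boundary: runs before model output is shown to the user.
        return Decision("pass")

def handle_turn(client, llm, user_text: str, session_id: str) -> str:
    """One request/response cycle with both guards in the loop."""
    if client.check_input(user_text, session_id).action == "block":
        return "Request refused."
    reply = llm(user_text)
    if client.check_output(reply, session_id).action == "block":
        return "Response withheld."
    return reply
```

The key point is placement: the input guard can stop a prompt before any model tokens are spent, and the output guard is the last step before the response leaves your application.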
Placement and processing
Where guards sit in your stack and what runs on the input path before a decision.
Architecture
Input guard, your LLM, and output guard in the request flow.
Pipeline
Normalization, regex, sessions, optional L2 intent (defend provider only), local classifier, and providers or modules.
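The stage ordering above can be sketched as a short-circuiting runner. The stage internals here are placeholders invented for illustration; only the ordering and the short-circuit behavior are the point.

```python
# Illustrative pipeline runner: stages execute in order and the first
# hard "block" short-circuits. Stage bodies are placeholder stubs,
# NOT Defend's actual implementation.

def normalize(text, ctx):         # e.g. collapse whitespace, strip homoglyphs
    return "pass"

def regex_rules(text, ctx):       # cheap pattern screening before ML runs
    return "block" if "BEGIN JAILBREAK" in text else "pass"

def session_check(text, ctx):     # consult accumulated multi-turn risk
    return "pass"

def local_classifier(text, ctx):  # optional local ML (defend provider only)
    return "pass"

STAGES = [normalize, regex_rules, session_check, local_classifier]

def run_pipeline(text, ctx=None):
    worst = "pass"
    for stage in STAGES:
        action = stage(text, ctx or {})
        if action == "block":
            return "block"   # short-circuit: later stages never run
        if action == "flag":
            worst = "flag"   # remember the worst non-blocking verdict
    return worst
```

Ordering cheap stages (normalization, regex) ahead of expensive ones (classifiers, LLM evaluation) is the usual rationale for a layered pipeline: most traffic is decided before the costly layers run.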
Conversation and decisions
Linking turns and interpreting what the API returns.
Sessions
session_id, TTL, and multi-turn risk accumulation.
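Multi-turn risk accumulation can be illustrated with a small store keyed by session_id. The field names, reset policy, and any thresholds here are assumptions for the sketch, not Defend internals.

```python
import time

# Illustrative session store: risk accumulates across turns that share a
# session_id and resets once the TTL elapses. Names and policy are
# assumptions, NOT Defend's actual internals.
class SessionStore:
    def __init__(self, ttl_seconds: float = 900.0):
        self.ttl = ttl_seconds
        self.sessions = {}  # session_id -> (last_seen, accumulated_risk)

    def add_risk(self, session_id: str, turn_risk: float, now=None) -> float:
        now = time.monotonic() if now is None else now
        last_seen, risk = self.sessions.get(session_id, (now, 0.0))
        if now - last_seen > self.ttl:
            risk = 0.0  # session expired: accumulation starts fresh
        risk += turn_risk
        self.sessions[session_id] = (now, risk)
        return risk
```

The payoff of accumulation is that a mildly risky turn passes on its own, but a sequence of such turns under one session_id can cross a blocking threshold that no single turn would reach.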
Actions and providers
The pass / flag / block actions, and the defend provider versus claude / openai.
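A common way to wire the three actions into application behavior is: forward on pass, forward but record on flag, withhold on block. The response shape is an assumption here; only the three action names come from the page above.

```python
import logging

# Sketch of mapping the three decision actions to application behavior.
# The exact shape Defend returns is an assumption in this example.
def apply_decision(action: str, text: str, logger=None):
    logger = logger or logging.getLogger("defend")
    if action == "pass":
        return text                              # forward unchanged
    if action == "flag":
        logger.warning("flagged content: %r", text)
        return text                              # forward, but record for review
    if action == "block":
        return None                              # caller substitutes a refusal
    raise ValueError(f"unknown action: {action}")
```

Treating flag as "allow plus audit trail" rather than a soft block is a design choice; the point is that the application, not the guard, decides what each action means for the user.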
Evidence and boundaries
Benchmark context and what Defend does not promise.