Choose how PsiGuard polices your models.
Start with lightweight monitoring, step up to Autopilot when you’re ready, and keep an eye on Sentinel, the reserved tier that watches entire threads like a hawk.
Example prompt
Raw model answer (without PsiGuard)
PsiGuard response
PsiGuard can replace unstable answers, attach a warning, or refuse to answer. You control the policy with a simple API call.
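As a rough sketch of what that policy call could carry, here is a hypothetical JSON body (the field names and values are illustrative, not PsiGuard’s real API):

```python
import json

def policy_payload(action_on_high_risk: str = "refuse", warn: bool = True) -> str:
    """Build the JSON body for a hypothetical policy-update call.
    Field names here are assumptions, not the real PsiGuard schema."""
    body = {
        "policy": {
            "high_risk": action_on_high_risk,  # "refuse" | "rewrite" | "pass"
            "attach_warning": warn,            # attach a warning to rewritten answers
        }
    }
    return json.dumps(body)
```

A single call like this would switch the guard between replacing, warning, and refusing.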
Hallucination verdict
High risk: confident but wrong
The answer sounds sure of itself while ignoring basic facts.
What PsiGuard is seeing
- Fact alignment: Low
- Confidence vs. reality: Very high
- Prompt pressure: Strong
- Tone: Persuasive, conspiratorial
- Recommended action: Rewrite or refuse
What PsiGuard does in this case
- Spots that the answer sounds confident while the facts do not line up.
- Sees that the prompt is pushing for a dramatic, one-sided story.
- Tightens its risk threshold and asks the model for a grounded explanation instead.
- Returns a safe answer, or refuses, instead of shipping conspiracy fiction to users.
PsiGuard sits between you and the model
You send a prompt. Your model answers. PsiGuard reads that answer the way a careful human would and asks three questions: Does it match reality? Does it drift into hype? Does it sound more confident than it should?
1. Read the message
PsiGuard inspects the raw answer and tries to understand its intent and tone.
- Understands persuasion, hype, and pressure.
- Spots overconfident claims with weak grounding.
- Works with GPT APIs or your own model.
2. Score the risk
The guard estimates how believable the answer is versus how well supported it is.
- Understands when the prompt is steering things.
- Detects when the tone shifts toward hype or manipulation.
- Flags answers that stretch beyond the facts.
3. Make a call
Based on the risk, PsiGuard either passes the answer through, rewrites it, or refuses.
- Blocks hype and conspiratorial language.
- Chooses safer wording when it’s needed.
- Returns a clean response your users can trust.
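The three steps above can be sketched as a single guard function. Everything here is a toy stand-in for illustration: the hype markers, the scoring formula, and the thresholds are assumptions, not PsiGuard’s real logic.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    risk: float   # 0.0 (grounded) .. 1.0 (confident but wrong)
    action: str   # "pass" | "rewrite" | "refuse"
    reason: str

# Illustrative persuasion markers; a real guard would model tone, not match strings.
HYPE_MARKERS = ("undeniable", "everyone knows", "they don't want you to know")

def guard(answer: str, fact_alignment: float, confidence: float) -> Verdict:
    """Toy version of the read -> score -> decide flow.
    fact_alignment and confidence are assumed to come from upstream scoring."""
    # 1. Read the message: look for persuasion and hype.
    hype = any(marker in answer.lower() for marker in HYPE_MARKERS)
    # 2. Score the risk: confidence that outruns the facts is the danger signal.
    risk = min(1.0, max(0.0, confidence - fact_alignment) + (0.3 if hype else 0.0))
    # 3. Make a call.
    if risk >= 0.7:
        return Verdict(risk, "refuse", "confident but poorly grounded")
    if risk >= 0.3:
        return Verdict(risk, "rewrite", "needs a grounded restatement")
    return Verdict(risk, "pass", "answer matches the facts")
```

The key design point: risk is driven by the gap between confidence and grounding, so a hedged-but-shaky answer scores lower than a certain-but-wrong one.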
Pick how PsiGuard works for you
Start with a simple playground to show people what hallucinations look like. Move up to API metrics and then full Autopilot when you want the guard to step in on live traffic.
PsiGuard Analyze
Core hallucination metrics for teams that want visibility before control.
- 1 primary LLM provider connected.
- Per-message analysis, entropy, and risk score.
- ψ-state labels (stable, drift, tension, collapse variants).
- Up to 10,000 analyzed messages per month.
- Dashboard visualizations for per-response metrics.
Overage: $0.009 per additional analyzed message.
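To make the overage math concrete, here is the monthly cost beyond the included 10,000 analyzed messages (a straightforward reading of the numbers above, not an official billing formula):

```python
INCLUDED = 10_000        # analyzed messages included in the Analyze tier
OVERAGE_RATE = 0.009     # dollars per additional analyzed message

def analyze_overage(messages: int) -> float:
    """Overage cost in dollars for one month on the Analyze tier."""
    extra = max(0, messages - INCLUDED)
    return round(extra * OVERAGE_RATE, 2)
```

For example, 13,500 analyzed messages is 3,500 over the cap, or $31.50 in overage.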
PsiGuard Autopilot
Automatically refuse, rewrite, or pass outputs based on real-time risk.
- Everything in Analyze.
- Connect up to 3 LLM providers.
- Autopilot decisions (pass / review / refuse).
- Configurable risk thresholds & behaviors per route.
- Up to 50,000 guarded messages per month.
- Log of high-risk events for audit and debugging.
Overage: $0.006 per additional guarded message.
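Per-route thresholds might look like the following sketch. The route names, keys, and cutoff values are all hypothetical, chosen only to show how pass / review / refuse could vary by route:

```python
# Hypothetical per-route Autopilot configuration.
ROUTES = {
    "/support/chat":   {"refuse_above": 0.80, "review_above": 0.50},
    "/internal/notes": {"refuse_above": 0.95, "review_above": 0.70},
}

def decide(route: str, risk: float) -> str:
    """Return the Autopilot decision (pass / review / refuse) for one message."""
    thresholds = ROUTES[route]
    if risk > thresholds["refuse_above"]:
        return "refuse"
    if risk > thresholds["review_above"]:
        return "review"
    return "pass"
```

A customer-facing route gets strict cutoffs; an internal route tolerates more risk before the guard steps in.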
PsiGuard Sentinel
Thread-level stability, temporal drift analysis, and system-wide oversight for high-stakes environments.
- Designed for finance, healthcare, legal, and other regulated systems.
- Full thread-based κΛE analysis & state transitions.
- Drift, entropy & memory-coupling trends across conversations.
- Multi-tenant, multi-model observability for complex deployments.
Reserved for large-scale and regulated deployments.