peregrine/demo/config/llm.yaml
pyr0ball bc7e3c8952 feat: DEMO_MODE — isolated public menagerie demo instance
Adds a fully neutered public demo for menagerie.circuitforge.tech/peregrine
that shows the Peregrine UI without exposing any personal data or real LLM inference.

scripts/llm_router.py:
  - Blocks all inference when the DEMO_MODE env var is set (1/true/yes)
  - Raises RuntimeError with a user-friendly "public demo" message
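
The guard described above might look roughly like this — function names here are illustrative, not taken verbatim from scripts/llm_router.py:

```python
import os

# Values of DEMO_MODE that enable the public-demo lockout, per the commit message.
_TRUTHY = {"1", "true", "yes"}


def demo_mode_enabled() -> bool:
    """True when the DEMO_MODE env var is set to 1/true/yes (case-insensitive)."""
    return os.environ.get("DEMO_MODE", "").strip().lower() in _TRUTHY


def guard_inference() -> None:
    """Raise before any backend is tried when running as the public demo."""
    if demo_mode_enabled():
        raise RuntimeError(
            "This is a public demo of Peregrine — LLM inference is disabled."
        )
```

Checking the flag once per request (rather than once at import) keeps the behavior correct even if the environment changes under a live process.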

app/app.py:
  - IS_DEMO constant from DEMO_MODE env var
  - Wizard gate bypassed in demo mode (demo/config/user.yaml pre-seeds a fake profile)
  - Demo banner in sidebar: explains read-only status + links to circuitforge.tech
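
A minimal sketch of the app-side gate — `IS_DEMO` matches the constant named above, but `wizard_required` is a hypothetical helper standing in for however app/app.py actually gates the wizard:

```python
import os

# IS_DEMO constant derived from the DEMO_MODE env var, as described above.
IS_DEMO = os.environ.get("DEMO_MODE", "").strip().lower() in ("1", "true", "yes")


def wizard_required(profile: dict) -> bool:
    """Whether to show the first-run setup wizard for this profile."""
    # In demo mode the wizard gate is bypassed entirely:
    # demo/config/user.yaml pre-seeds a profile with wizard_complete=true.
    if IS_DEMO:
        return False
    return not profile.get("wizard_complete", False)
```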

compose.menagerie.yml (new):
  - Separate Docker Compose project (peregrine-demo) on host port 8504
  - Mounts demo/config/ and demo/data/ — isolated from personal instance
  - DEMO_MODE=true, no API keys, no /docs mount
  - Project name: peregrine-demo (run alongside personal instance)
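
The compose file described above might look roughly like this — service name, image, container-side port, and mount paths are assumptions for illustration, not copied from compose.menagerie.yml:

```yaml
# Sketch of compose.menagerie.yml — a separate Compose project so the demo
# can run alongside the personal instance.
name: peregrine-demo
services:
  peregrine:
    build: .
    ports:
      - "8504:8501"        # host 8504 → app port inside the container (assumed)
    environment:
      DEMO_MODE: "true"    # no API keys are passed in
    volumes:
      - ./demo/config:/app/config   # isolated from the personal instance
      - ./demo/data:/app/data
      # intentionally no /docs mount in the demo instance
```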

demo/config/user.yaml:
  - Generic "Demo User" profile, wizard_complete=true, no real personal info

demo/config/llm.yaml:
  - All backends disabled (belt-and-suspenders alongside DEMO_MODE block)

demo/data/.gitkeep:
  - staging.db is auto-created on first run, gitignored via demo/data/*.db

.gitignore: add demo/data/*.db

Caddy routes menagerie.circuitforge.tech/peregrine* → 8504 (demo instance).
Personal Peregrine remains on 8502, unchanged.
2026-03-02 11:22:38 -08:00


# Demo LLM config — all backends disabled.
# DEMO_MODE=true in the environment blocks the router before any backend is tried,
# so these values are never actually used. Kept for schema completeness.
backends:
  anthropic:
    api_key_env: ANTHROPIC_API_KEY
    enabled: false
    model: claude-sonnet-4-6
    supports_images: true
    type: anthropic
  claude_code:
    api_key: any
    base_url: http://localhost:3009/v1
    enabled: false
    model: claude-code-terminal
    supports_images: true
    type: openai_compat
  github_copilot:
    api_key: any
    base_url: http://localhost:3010/v1
    enabled: false
    model: gpt-4o
    supports_images: false
    type: openai_compat
  ollama:
    api_key: ollama
    base_url: http://localhost:11434/v1
    enabled: false
    model: llama3.2:3b
    supports_images: false
    type: openai_compat
  ollama_research:
    api_key: ollama
    base_url: http://localhost:11434/v1
    enabled: false
    model: llama3.2:3b
    supports_images: false
    type: openai_compat
  vision_service:
    base_url: http://localhost:8002
    enabled: false
    supports_images: true
    type: vision_service
  vllm:
    api_key: ''
    base_url: http://localhost:8000/v1
    enabled: false
    model: __auto__
    supports_images: false
    type: openai_compat
  vllm_research:
    api_key: ''
    base_url: http://localhost:8000/v1
    enabled: false
    model: __auto__
    supports_images: false
    type: openai_compat
fallback_order:
  - ollama
  - vllm
  - anthropic
research_fallback_order:
  - vllm_research
  - ollama_research
  - anthropic
vision_fallback_order:
  - vision_service
  - anthropic