feat: DEMO_MODE — isolated public menagerie demo instance

Adds a fully neutered public demo for menagerie.circuitforge.tech/peregrine
that shows the Peregrine UI without exposing any personal data or real LLM inference.

scripts/llm_router.py:
  - Blocks all inference when the DEMO_MODE env var is set (1/true/yes)
  - Raises RuntimeError with a user-friendly "public demo" message
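The gate above hinges on a single env-var check. A minimal standalone sketch of that truthy-value parsing (`is_demo_mode` is an illustrative helper name, not part of the codebase):

```python
import os

def is_demo_mode() -> bool:
    # Mirrors the truthy-value check used by both app.py and llm_router.py:
    # only "1", "true", or "yes" (case-insensitive) enable demo mode.
    return os.environ.get("DEMO_MODE", "").lower() in ("1", "true", "yes")

os.environ["DEMO_MODE"] = "TRUE"   # comparison is case-insensitive
print(is_demo_mode())              # → True
os.environ["DEMO_MODE"] = "0"      # not in the allow-list, so falsy
print(is_demo_mode())              # → False
```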

app/app.py:
  - IS_DEMO constant from DEMO_MODE env var
  - Wizard gate bypassed in demo mode (demo/config/user.yaml pre-seeds a fake profile)
  - Demo banner in sidebar: explains read-only status + links to circuitforge.tech

compose.menagerie.yml (new):
  - Separate Docker Compose project (peregrine-demo) on host port 8504
  - Mounts demo/config/ and demo/data/ — isolated from personal instance
  - DEMO_MODE=true, no API keys, no /docs mount
  - Project name: peregrine-demo (run alongside personal instance)

demo/config/user.yaml:
  - Generic "Demo User" profile, wizard_complete=true, no real personal info

demo/config/llm.yaml:
  - All backends disabled (belt-and-suspenders alongside DEMO_MODE block)

demo/data/.gitkeep:
  - staging.db is auto-created on first run, gitignored via demo/data/*.db

.gitignore: add demo/data/*.db

Caddy routes menagerie.circuitforge.tech/peregrine* → 8504 (demo instance).
Personal Peregrine remains on 8502, unchanged.
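The Caddy side of that routing is not part of this diff; a hypothetical Caddyfile sketch of the rule described above (since Streamlit itself serves under /peregrine via STREAMLIT_SERVER_BASE_URL_PATH, plain `handle` is used rather than `handle_path`, so the prefix is passed through unstripped):

```caddyfile
menagerie.circuitforge.tech {
    # Demo instance only; other paths on this host are unaffected
    handle /peregrine* {
        reverse_proxy 127.0.0.1:8504
    }
}
```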
pyr0ball, 2026-03-02 11:22:38 -08:00
commit bc7e3c8952 (parent 044b25e838)
7 changed files with 189 additions and 1 deletion

.gitignore (vendored)

@@ -42,3 +42,5 @@ data/email_compare_sample.jsonl
config/label_tool.yaml
config/server.yaml
demo/data/*.db

app/app.py

@@ -8,6 +8,7 @@ Run: streamlit run app/app.py
bash scripts/manage-ui.sh start
"""
import logging
import os
import subprocess
import sys
from pathlib import Path
@@ -16,6 +17,8 @@ sys.path.insert(0, str(Path(__file__).parent.parent))
logging.basicConfig(level=logging.WARNING, format="%(name)s %(levelname)s: %(message)s")
IS_DEMO = os.environ.get("DEMO_MODE", "").lower() in ("1", "true", "yes")
import streamlit as st
from scripts.db import DEFAULT_DB, init_db, get_active_tasks
import sqlite3
@@ -76,7 +79,7 @@ except Exception:
from scripts.user_profile import UserProfile as _UserProfile
_USER_YAML = Path(__file__).parent.parent / "config" / "user.yaml"
-_show_wizard = (
+_show_wizard = not IS_DEMO and (
not _UserProfile.exists(_USER_YAML)
or not _UserProfile(_USER_YAML).wizard_complete
)
@@ -151,6 +154,13 @@ def _get_version() -> str:
return "dev"
with st.sidebar:
if IS_DEMO:
st.info(
"**Public demo** — read-only sample data. "
"AI features and data saves are disabled.\n\n"
"[Get your own instance →](https://circuitforge.tech/software/peregrine)",
icon="🔒",
)
_task_indicator()
st.divider()
st.caption(f"Peregrine {_get_version()}")

compose.menagerie.yml (new file)

@@ -0,0 +1,52 @@
# compose.menagerie.yml — Public demo stack for menagerie.circuitforge.tech/peregrine
#
# Runs a fully isolated, neutered Peregrine instance:
# - DEMO_MODE=true: blocks all LLM inference in llm_router.py
# - demo/config/: pre-seeded demo user profile, all backends disabled
# - demo/data/: isolated SQLite DB (no personal job data)
# - No personal documents mounted
# - Port 8504 (separate from the personal instance on 8502)
#
# Usage:
# docker compose -f compose.menagerie.yml --project-name peregrine-demo up -d
# docker compose -f compose.menagerie.yml --project-name peregrine-demo down
#
# Caddy menagerie.circuitforge.tech/peregrine* → host port 8504
services:
app:
build: .
ports:
- "8504:8501"
volumes:
- ./demo/config:/app/config
- ./demo/data:/app/data
# No /docs mount — demo has no personal documents
environment:
- DEMO_MODE=true
- STAGING_DB=/app/data/staging.db
- DOCS_DIR=/tmp/demo-docs
- STREAMLIT_SERVER_BASE_URL_PATH=peregrine
- PYTHONUNBUFFERED=1
- PYTHONLOGGING=WARNING
# No API keys — inference is blocked by DEMO_MODE before any key is needed
depends_on:
searxng:
condition: service_healthy
extra_hosts:
- "host.docker.internal:host-gateway"
restart: unless-stopped
searxng:
image: searxng/searxng:latest
volumes:
- ./docker/searxng:/etc/searxng:ro
healthcheck:
test: ["CMD", "wget", "-q", "--spider", "http://localhost:8080/"]
interval: 10s
timeout: 5s
retries: 3
restart: unless-stopped
# No host port published — internal only; demo app uses it for job description enrichment
# (non-AI scraping is allowed; only LLM inference is blocked)

demo/config/llm.yaml (new file)

@@ -0,0 +1,68 @@
# Demo LLM config — all backends disabled.
# DEMO_MODE=true in the environment blocks the router before any backend is tried,
# so these values are never actually used. Kept for schema completeness.
backends:
anthropic:
api_key_env: ANTHROPIC_API_KEY
enabled: false
model: claude-sonnet-4-6
supports_images: true
type: anthropic
claude_code:
api_key: any
base_url: http://localhost:3009/v1
enabled: false
model: claude-code-terminal
supports_images: true
type: openai_compat
github_copilot:
api_key: any
base_url: http://localhost:3010/v1
enabled: false
model: gpt-4o
supports_images: false
type: openai_compat
ollama:
api_key: ollama
base_url: http://localhost:11434/v1
enabled: false
model: llama3.2:3b
supports_images: false
type: openai_compat
ollama_research:
api_key: ollama
base_url: http://localhost:11434/v1
enabled: false
model: llama3.2:3b
supports_images: false
type: openai_compat
vision_service:
base_url: http://localhost:8002
enabled: false
supports_images: true
type: vision_service
vllm:
api_key: ''
base_url: http://localhost:8000/v1
enabled: false
model: __auto__
supports_images: false
type: openai_compat
vllm_research:
api_key: ''
base_url: http://localhost:8000/v1
enabled: false
model: __auto__
supports_images: false
type: openai_compat
fallback_order:
- ollama
- vllm
- anthropic
research_fallback_order:
- vllm_research
- ollama_research
- anthropic
vision_fallback_order:
- vision_service
- anthropic

demo/config/user.yaml (new file)

@@ -0,0 +1,51 @@
# Demo user profile — pre-seeded for the public menagerie demo.
# No real personal information. All AI features are disabled (DEMO_MODE=true).
name: "Demo User"
email: "demo@circuitforge.tech"
phone: ""
linkedin: ""
career_summary: >
Experienced software engineer with a background in full-stack development,
cloud infrastructure, and data pipelines. Passionate about building tools
that help people navigate complex systems.
nda_companies: []
mission_preferences:
music: ""
animal_welfare: ""
education: ""
social_impact: "Want my work to reach people who need it most."
health: ""
candidate_voice: "Clear, direct, and human. Focuses on impact over jargon."
candidate_accessibility_focus: false
candidate_lgbtq_focus: false
tier: free
dev_tier_override: null
wizard_complete: true
wizard_step: 0
dismissed_banners: []
docs_dir: "/docs"
ollama_models_dir: "~/models/ollama"
vllm_models_dir: "~/models/vllm"
inference_profile: "remote"
services:
streamlit_port: 8501
ollama_host: localhost
ollama_port: 11434
ollama_ssl: false
ollama_ssl_verify: true
vllm_host: localhost
vllm_port: 8000
vllm_ssl: false
vllm_ssl_verify: true
searxng_host: searxng
searxng_port: 8080
searxng_ssl: false
searxng_ssl_verify: true

demo/data/.gitkeep (new, empty file)

scripts/llm_router.py

@@ -49,6 +49,11 @@ class LLMRouter:
are only tried when images is provided.
Raises RuntimeError if all backends are exhausted.
"""
if os.environ.get("DEMO_MODE", "").lower() in ("1", "true", "yes"):
raise RuntimeError(
"AI inference is disabled in the public demo. "
"Run your own instance to use AI features."
)
order = fallback_order if fallback_order is not None else self.config["fallback_order"]
for name in order:
backend = self.config["backends"][name]