peregrine/config/llm.yaml
pyr0ball 92b13c32ff feat: expanded first-run wizard — complete implementation
13-task implementation covering:
- UserProfile wizard fields (wizard_complete, wizard_step, tier, dev_tier_override,
  dismissed_banners, effective_tier) + params column in background_tasks
- Tier system: FEATURES gate, can_use(), tier_label() (app/wizard/tiers.py)
- Six pure validate() step modules (hardware, tier, identity, resume, inference, search)
- Resume parser: PDF (pdfplumber) + DOCX (python-docx) extraction + LLM structuring
- Integration base class + auto-discovery registry (scripts/integrations/)
- 13 integration drivers (Notion, Google Sheets, Airtable, Google Drive, Dropbox,
  OneDrive, MEGA, Nextcloud, Google Calendar, Apple Calendar, Slack, Discord,
  Home Assistant) + config/integrations/*.yaml.example files
- wizard_generate task type with 8 LLM generation sections + iterative refinement
  (previous_result + feedback support)
- step_integrations module: validate(), get_available(), is_connected()
- Wizard orchestrator rewrite (0_Setup.py): 7 steps, crash recovery, LLM polling
- app.py gate: checks wizard_complete flag in addition to file existence
- Home page: 13 dismissible contextual setup banners (wizard_complete-gated)
- Settings: Developer tab — tier override selectbox + wizard reset button

219 tests passing.
2026-02-25 10:54:24 -08:00


backends:
  anthropic:
    api_key_env: ANTHROPIC_API_KEY
    enabled: false
    model: claude-sonnet-4-6
    supports_images: true
    type: anthropic
  claude_code:
    api_key: any
    base_url: http://localhost:3009/v1
    enabled: false
    model: claude-code-terminal
    supports_images: true
    type: openai_compat
  github_copilot:
    api_key: any
    base_url: http://localhost:3010/v1
    enabled: false
    model: gpt-4o
    supports_images: false
    type: openai_compat
  ollama:
    api_key: ollama
    base_url: http://localhost:11434/v1
    enabled: true
    model: meghan-cover-writer:latest
    supports_images: false
    type: openai_compat
  ollama_research:
    api_key: ollama
    base_url: http://localhost:11434/v1
    enabled: true
    model: llama3.1:8b
    supports_images: false
    type: openai_compat
  vision_service:
    base_url: http://localhost:8002
    enabled: true
    supports_images: true
    type: vision_service
  vllm:
    api_key: ''
    base_url: http://localhost:8000/v1
    enabled: true
    model: __auto__
    supports_images: false
    type: openai_compat
fallback_order:
  - ollama
  - claude_code
  - vllm
  - github_copilot
  - anthropic
research_fallback_order:
  - claude_code
  - vllm
  - ollama_research
  - github_copilot
  - anthropic
vision_fallback_order:
  - vision_service
  - claude_code
  - anthropic
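The three fallback lists (`fallback_order`, `research_fallback_order`, `vision_fallback_order`) all follow the same pattern: try each named backend in order and use the first one whose `enabled` flag is true. A minimal sketch of how a consumer of this file might resolve a chain; the `pick_backend` helper and the trimmed config dict are illustrative assumptions, not code from the repo (the dict stands in for `yaml.safe_load` of the file above):

```python
# Trimmed stand-in for the parsed llm.yaml: two backends, one disabled.
config = {
    "backends": {
        "claude_code": {"enabled": False, "base_url": "http://localhost:3009/v1"},
        "ollama": {"enabled": True, "base_url": "http://localhost:11434/v1"},
    },
    "fallback_order": ["claude_code", "ollama"],
}


def pick_backend(cfg, order_key="fallback_order"):
    """Return (name, settings) of the first enabled backend in the chain.

    Hypothetical helper: walks the order list named by `order_key` and
    skips backends that are missing or have enabled: false.
    """
    for name in cfg.get(order_key, []):
        settings = cfg["backends"].get(name, {})
        if settings.get("enabled"):
            return name, settings
    raise RuntimeError(f"no enabled backend in {order_key}")


name, settings = pick_backend(config)
# claude_code is disabled, so the chain falls through to ollama.
```

With the full file above, `fallback_order` would resolve to `ollama` for the same reason: it is the first entry in the list that is both defined and enabled.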