LGBTQIA+ inclusion section in research briefs

- user_profile.py: add candidate_lgbtq_focus bool accessor
- user.yaml.example: add candidate_lgbtq_focus flag (default false)
- company_research.py: gate new LGBTQIA+ section behind flag; section count now dynamic (7 base + 1 per opt-in section, max 9)
- 2_Settings.py: add "Research Brief Preferences" expander with checkboxes for both accessibility and LGBTQIA+ focus flags; mission_preferences now round-trips through save (no silent drop)

Phase 2 fixes:
- manage-vllm.sh: MODEL_DIR and VLLM_BIN now read from env vars (VLLM_MODELS_DIR, VLLM_BIN) with portable defaults
- search_profiles.yaml: replace personal CS/TAM/Bay Area profiles with a documented generic starter profile

Phase 3 fix:
- llm.yaml: rename alex-cover-writer:latest → llama3.2:3b with inline comment for users to substitute their fine-tuned model; fix model-exclusion comment

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
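A minimal sketch of the flag-gated section assembly described above. All names here are illustrative assumptions, not the actual company_research.py API: the base-section list, the function name, and the accessibility flag key are stand-ins; only the 7-base / max-9 arithmetic and the candidate_lgbtq_focus flag come from the commit message.

```python
# Hypothetical sketch: dynamic research-brief section count,
# 7 base sections + 1 per opt-in flag, capped at 9 by construction.
BASE_SECTIONS = [f"section_{i}" for i in range(1, 8)]  # 7 base sections

def build_brief_sections(profile: dict) -> list[str]:
    sections = list(BASE_SECTIONS)
    # Each opt-in flag adds exactly one section (flag names are assumptions).
    if profile.get("candidate_accessibility_focus", False):
        sections.append("accessibility")
    if profile.get("candidate_lgbtq_focus", False):
        sections.append("lgbtqia_inclusion")
    return sections
```

Gating on `profile.get(..., False)` keeps the flags opt-in with a safe default, matching the `default false` noted for user.yaml.example.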
66 lines · 1.6 KiB · YAML
backends:
  anthropic:
    api_key_env: ANTHROPIC_API_KEY
    enabled: false
    model: claude-sonnet-4-6
    type: anthropic
    supports_images: true
  claude_code:
    api_key: any
    base_url: http://localhost:3009/v1
    enabled: false
    model: claude-code-terminal
    type: openai_compat
    supports_images: true
  github_copilot:
    api_key: any
    base_url: http://localhost:3010/v1
    enabled: false
    model: gpt-4o
    type: openai_compat
    supports_images: false
  ollama:
    api_key: ollama
    base_url: http://localhost:11434/v1
    enabled: true
    model: llama3.2:3b  # replace with your fine-tuned cover letter model if you have one
    type: openai_compat
    supports_images: false
  ollama_research:
    api_key: ollama
    base_url: http://localhost:11434/v1
    enabled: true
    model: llama3.1:8b
    type: openai_compat
    supports_images: false
  vllm:
    api_key: ''
    base_url: http://localhost:8000/v1
    enabled: true
    model: __auto__
    type: openai_compat
    supports_images: false
  vision_service:
    base_url: http://localhost:8002
    enabled: false
    type: vision_service
    supports_images: true

fallback_order:
  - ollama
  - claude_code
  - vllm
  - github_copilot
  - anthropic

research_fallback_order:
  - claude_code
  - vllm
  - ollama_research
  - github_copilot
  - anthropic

vision_fallback_order:
  - vision_service
  - claude_code
  - anthropic

# Note: 'ollama' intentionally excluded from research order — research
# must never use the cover letter model, and this also avoids evicting
# the writer from GPU memory while a cover letter task is in flight.