Commit graph

10 commits

b06d596d4c feat(vue): open Vue SPA to all tiers; fix cloud nav and feedback button
- Lower vue_ui_beta gate to "free" so all licensed users can access the
  new UI without a paid subscription
- Remove "Paid tier" wording from the Try New UI banner
- Fix Vue SPA navigation in cloud/demo deployments: add VITE_BASE_PATH
  build arg so Vite sets the correct subpath base, and pass
  import.meta.env.BASE_URL to createWebHistory() so router links
  emit /peregrine/... paths that Caddy can match
- Fix feedback button missing on cloud instance by passing
  FORGEJO_API_TOKEN through compose.cloud.yml
- Remove vLLM container from compose.yml (vLLM dropped from stack;
  cf-research service in cfcore covers the use case)
- Fix cloud config path in Apply page (use get_config_dir() so per-user
  cloud data roots resolve correctly for user.yaml and resume YAML)
- Refactor generate_cover_letter._build_system_context and
  _build_mission_notes to accept explicit profile arg (enables
  per-user cover letter generation in cloud multi-tenant mode)
- Add API proxy block to nginx.conf (Vue web container can now call
  /api/ directly without Vite dev proxy)
- Update .env.example: remove vLLM vars, add research model + tuning
  vars for external vLLM deployments
- Update llm.yaml: switch vllm base_url to host.docker.internal
  (vLLM now runs outside Docker stack)

Closes #63 (feedback button)
Related: #8 (Vue SPA), #50–#62 (parity milestone)
2026-04-02 17:41:35 -07:00
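
For illustration, a minimal sketch of the explicit-profile refactor
mentioned above, assuming a hypothetical Profile shape and prompt text;
the real helpers in generate_cover_letter may differ:

    from dataclasses import dataclass

    @dataclass
    class Profile:
        name: str
        mission_notes: str

    def _build_system_context(profile: Profile) -> str:
        # Taking the profile as an explicit argument (instead of reading
        # module-level state) is what lets each cloud tenant pass its own.
        return f"You are drafting a cover letter for {profile.name}."

    def _build_mission_notes(profile: Profile) -> str:
        return profile.mission_notes
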
bc80922d61 chore(llm): swap model_candidates order — Qwen2.5-3B first, Phi-4-mini fallback
Phi-4-mini's cached modeling_phi3.py imports SlidingWindowCache, which
was removed in transformers 5.x. Qwen2.5-3B uses the built-in qwen2
architecture and works cleanly. Reorder so Qwen is tried first (see the
sketch below).
2026-04-02 16:36:38 -07:00
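
A sketch of how an ordered model_candidates list degrades gracefully;
the candidate ids match the two commits here, but the retry loop itself
is an assumption, not the project's actual loader:

    from transformers import AutoModelForCausalLM

    CANDIDATES = [
        "Qwen/Qwen2.5-3B-Instruct",       # built-in qwen2 arch, tried first
        "microsoft/Phi-4-mini-instruct",  # cached modeling_phi3.py breaks on 5.x
    ]

    def load_first_working(candidates=CANDIDATES):
        for name in candidates:
            try:
                return AutoModelForCausalLM.from_pretrained(name)
            except Exception as exc:  # e.g. the SlidingWindowCache ImportError
                print(f"{name}: {exc!r}; trying next candidate")
        raise RuntimeError("no model candidate loaded")
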
11fb3a07b4 chore(llm): switch vllm model_candidates from Ouro to Phi-4-mini + Qwen2.5-3B
Ouro models are incompatible with the transformers 5.x bundled in the
cf env. Phi-4-mini-instruct is tried first (stronger benchmarks,
7.2GB); Qwen2.5-3B-Instruct serves as the VRAM-constrained fallback
(5.8GB).
2026-04-02 15:34:59 -07:00
7c9dcd2620 config(llm): add cf_orch block to vllm backend
2026-04-02 12:20:41 -07:00
f9a329fb57 feat: add vllm_research backend and update research_fallback_order
2026-02-27 00:09:00 -08:00
3518d63ec2 feat: smart service adoption in preflight — use external services instead of conflicting
preflight.py now detects when a managed service (ollama, vllm, vision,
searxng) is already running on its configured port and adopts it rather
than reassigning or conflicting:

- Generates compose.override.yml disabling Docker containers for adopted
  services (profiles: [_external_] — a profile never passed via --profile)
- Rewrites config/llm.yaml base_url entries to host.docker.internal:<port>
  so the app container can reach host-side services through Docker's
  host-gateway mapping
- compose.yml: adds extra_hosts host.docker.internal:host-gateway to the
  app service (required on Linux; no-op on macOS Docker Desktop)
- .gitignore: excludes compose.override.yml (auto-generated, host-specific)

Only streamlit is non-adoptable and continues to reassign on conflict.
2026-02-25 19:23:02 -08:00
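
A minimal sketch of the adoption flow this commit describes; the file
names and the _external_ profile come from the message above, while the
helper names and internals are assumptions:

    import socket
    import yaml  # PyYAML

    def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            return s.connect_ex((host, port)) == 0

    def adopt_external(services: dict) -> dict:
        """services maps service name -> configured host port."""
        adopted = {n: p for n, p in services.items() if port_in_use(p)}
        # Park adopted services on a profile preflight never passes via
        # --profile, so their containers are never started.
        override = {
            "services": {n: {"profiles": ["_external_"]} for n in adopted}
        }
        with open("compose.override.yml", "w") as fh:
            yaml.safe_dump(override, fh)
        # Rewritten base_url values; extra_hosts host-gateway is what makes
        # host.docker.internal resolve inside the app container on Linux.
        return {
            n: f"http://host.docker.internal:{p}" for n, p in adopted.items()
        }

    # e.g. adopt_external({"ollama": 11434, "vllm": 8000, "searxng": 8080})
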
7620a2ab8d fix: repair beta installer path for Docker-first deployment
- llm.yaml + example: replace localhost URLs with Docker service names
  (ollama:11434, vllm:8000, vision:8002); replace personal model names
  (alex-cover-writer, llama3.1:8b) with llama3.2:3b
- user.yaml.example: update service hosts to Docker names (ollama, vllm,
  searxng) and searxng port from 8888 (host-mapped) to 8080 (internal)
- wizard step 5: fix hardcoded localhost defaults — wizard runs inside
  Docker, so service name defaults are required for connection tests to pass
- scrapers/companyScraper.py: bundle scraper so Dockerfile COPY succeeds
- setup.sh: remove host Ollama install (conflicts with Docker Ollama on
  port 11434); Docker entrypoint handles model download automatically
- README + setup.sh banner: add Circuit Forge mission statement
2026-02-25 16:03:10 -08:00
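
Why the wizard's connection tests need service-name defaults, as a
small sketch; the defaults follow this commit, but the env-var names
and helper are assumptions:

    import os

    # Inside the compose network, "localhost" is the wizard container
    # itself, so host defaults must be the Docker service names.
    DEFAULTS = {
        "ollama": "http://ollama:11434",
        "vllm": "http://vllm:8000",
        "searxng": "http://searxng:8080",  # internal port, not the 8888 host mapping
    }

    def service_url(name: str) -> str:
        return os.environ.get(f"{name.upper()}_URL", DEFAULTS[name])
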
bc56b50696 feat: expanded first-run wizard — complete implementation
13-task implementation covering:
- UserProfile wizard fields (wizard_complete, wizard_step, tier, dev_tier_override,
  dismissed_banners, effective_tier) + params column in background_tasks
- Tier system: FEATURES gate, can_use(), tier_label() (app/wizard/tiers.py)
- Six pure validate() step modules (hardware, tier, identity, resume, inference, search)
- Resume parser: PDF (pdfplumber) + DOCX (python-docx) extraction + LLM structuring
- Integration base class + auto-discovery registry (scripts/integrations/)
- 13 integration drivers (Notion, Google Sheets, Airtable, Google Drive, Dropbox,
  OneDrive, MEGA, Nextcloud, Google Calendar, Apple Calendar, Slack, Discord,
  Home Assistant) + config/integrations/*.yaml.example files
- wizard_generate task type with 8 LLM generation sections + iterative refinement
  (previous_result + feedback support)
- step_integrations module: validate(), get_available(), is_connected()
- Wizard orchestrator rewrite (0_Setup.py): 7 steps, crash recovery, LLM polling
- app.py gate: checks wizard_complete flag in addition to file existence
- Home page: 13 dismissible contextual setup banners (wizard_complete-gated)
- Settings: Developer tab — tier override selectbox + wizard reset button

219 tests passing.
2026-02-25 10:54:24 -08:00
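
A sketch of the FEATURES gate and can_use() idea from
app/wizard/tiers.py; the tier ordering and feature keys below are
illustrative assumptions, not the shipped values:

    TIERS = ["free", "plus", "pro"]  # assumed ordering, low to high
    FEATURES = {
        "vue_ui_beta": "free",       # lowered to "free" in b06d596d4c
        "integrations": "plus",
        "bulk_generation": "pro",
    }

    def can_use(feature: str, tier: str) -> bool:
        return TIERS.index(tier) >= TIERS.index(FEATURES[feature])

    def tier_label(tier: str) -> str:
        return tier.capitalize()

    assert can_use("vue_ui_beta", "free")
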
2d1c48e7af feat: LGBTQIA+ focus + Phase 2/3 audit fixes
LGBTQIA+ inclusion section in research briefs:
- user_profile.py: add candidate_lgbtq_focus bool accessor
- user.yaml.example: add candidate_lgbtq_focus flag (default false)
- company_research.py: gate new LGBTQIA+ section behind flag; section
  count now dynamic (7 base + 1 per opt-in section, max 9)
- 2_Settings.py: add "Research Brief Preferences" expander with
  checkboxes for both accessibility and LGBTQIA+ focus flags;
  mission_preferences now round-trips through save (no silent drop)

Phase 2 fixes:
- manage-vllm.sh: MODEL_DIR and VLLM_BIN now read from env vars
  (VLLM_MODELS_DIR, VLLM_BIN) with portable defaults
- search_profiles.yaml: replace personal CS/TAM/Bay Area profiles
  with a documented generic starter profile

Phase 3 fix:
- llm.yaml: rename alex-cover-writer:latest → llama3.2:3b with
  inline comment for users to substitute their fine-tuned model;
  fix model-exclusion comment
2026-02-24 20:02:03 -08:00
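
The dynamic section count as a one-liner sketch: 7 base sections plus
one per opt-in flag, capped at 9. candidate_lgbtq_focus is named in
this commit; the accessibility flag name and the function itself are
assumptions:

    def brief_section_count(profile: dict) -> int:
        opt_ins = ("candidate_accessibility_focus",  # assumed flag name
                   "candidate_lgbtq_focus")
        return min(7 + sum(bool(profile.get(f)) for f in opt_ins), 9)

    assert brief_section_count({}) == 7
    assert brief_section_count({"candidate_lgbtq_focus": True}) == 8
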
1dc1ca89d7 chore: seed Peregrine from personal job-seeker (pre-generalization)
App: Peregrine
Company: Circuit Forge LLC
Source: github.com/pyr0ball/job-seeker (personal fork, not linked)
2026-02-24 18:25:39 -08:00