Compare commits

...

18 commits

Author SHA1 Message Date
dc508d7197 fix: update tests to match refactored scheduler and free-tier Vue SPA
Some checks failed
CI / test (push) Failing after 28s
- task_scheduler: extend LocalScheduler (concrete class), not TaskScheduler
  (Protocol); remove unsupported VRAM kwargs from super().__init__()
- dev-api: lazy import db_migrate inside _startup() to avoid worktree
  scripts cache issue in test_dev_api_settings.py
- test_task_scheduler: update VRAM-attribute tests to match LocalScheduler
  (no _available_vram/_reserved_vram); drop deepest-queue VRAM-gating
  ordering assertion (LocalScheduler is FIFO, not priority-gated);
  suppress PytestUnhandledThreadExceptionWarning on crash test; fix
  budget assertion to not depend on shared pytest tmp dir state
- test_dev_api_settings: patch path functions (_resume_path, _search_prefs_path,
  _license_path, _tokens_path, _config_dir) instead of removed module-level
  constants; mock _TRAINING_JSONL for finetune status idle test
- test_wizard_tiers: Vue SPA is free tier (issue #20), assert True
- test_wizard_api: patch _search_prefs_path() function, not SEARCH_PREFS_PATH
- test_ui_switcher: free-tier vue preference no longer downgrades to streamlit
2026-04-05 07:35:45 -07:00
fb9f751321 chore: bump circuitforge-core dep comment to >=0.8.0 (orch split)
Some checks failed
CI / test (push) Failing after 22s
2026-04-04 22:49:03 -07:00
ac3e97d6c8 feat(#62): Fine-Tune tab — training pair management + real submit
Some checks failed
CI / test (push) Failing after 21s
API (dev-api.py):
- GET /api/settings/fine-tune/pairs — list pairs from JSONL with index/instruction/source_file
- DELETE /api/settings/fine-tune/pairs/{index} — remove a pair and rewrite JSONL
- POST /api/settings/fine-tune/submit — now queues prepare_training task (replaces UUID stub)
- GET /api/settings/fine-tune/status — returns pairs_count from JSONL (not just DB task)

Store (fineTune.ts):
- TrainingPair interface
- pairs, pairsLoading refs
- loadPairs(), deletePair() actions

Vue (FineTuneView.vue):
- Step 2 shows scrollable pairs list with instruction + source file
- ✕ button on each pair calls deletePair(); list/count update immediately
- loadPairs() called on mount
2026-04-04 22:30:16 -07:00
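The pairs endpoints above operate on a flat JSONL file addressed by line index. A minimal sketch of that pattern — hypothetical helper names `list_pairs`/`delete_pair`, not the actual dev-api.py handlers, and assuming the file contains no blank lines so indexes line up:

```python
import json
from pathlib import Path

def list_pairs(jsonl_path: Path) -> list:
    """Read training pairs from a JSONL file, tagging each with its line index."""
    if not jsonl_path.exists():
        return []
    pairs = []
    for i, line in enumerate(jsonl_path.read_text().splitlines()):
        if line.strip():  # assumes no blank lines, so i matches the raw line index
            rec = json.loads(line)
            pairs.append({"index": i,
                          "instruction": rec.get("instruction", ""),
                          "source_file": rec.get("source_file", "")})
    return pairs

def delete_pair(jsonl_path: Path, index: int) -> bool:
    """Remove one pair by line index and rewrite the JSONL file via a temp file."""
    lines = jsonl_path.read_text().splitlines()
    if not 0 <= index < len(lines):
        return False
    del lines[index]
    tmp = jsonl_path.with_suffix(".jsonl.tmp")
    tmp.write_text("\n".join(lines) + ("\n" if lines else ""))
    tmp.replace(jsonl_path)  # atomic rename on POSIX
    return True
```

Deleting renumbers all later pairs, which is why the list endpoint re-reads indexes on every call rather than caching them.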
42c9c882ee feat(#59): LLM-assisted generation for all settings form fields
Some checks failed
CI / test (push) Failing after 21s
API endpoints (dev-api.py):
- POST /api/settings/profile/generate-summary → {summary}
- POST /api/settings/profile/generate-missions → {mission_preferences}
- POST /api/settings/profile/generate-voice → {voice}
- POST /api/settings/search/suggest → replaces stub; handles titles/locations/exclude_keywords

Vue (MyProfileView.vue):
- Generate ✦ button on candidate_voice textarea (was missing)

Vue (SearchPrefsView.vue + search store):
- Suggest button for Exclude Keywords section (matches titles/locations pattern)
- suggestExcludeKeywords() in search store
- acceptSuggestion() extended to 'exclude' type
2026-04-04 22:27:20 -07:00
4f825d0f00 feat(#45): manual theme switcher (light/dark/solarized/colorblind-safe)
Some checks failed
CI / test (push) Failing after 18s
- theme.css: explicit [data-theme] blocks for light, dark, solarized-dark,
  solarized-light, colorblind (Wong 2011 palette); auto-dark media query
  updated to :root:not([data-theme]) so explicit themes always win
- useTheme.ts: singleton composable — setTheme(), restoreTheme(), initTheme();
  persists to localStorage + API; coordinates with hacker mode exit
- AppNav.vue: theme <select> in sidebar footer; exitHackerMode now calls
  restoreTheme() instead of deleting data-theme directly
- useEasterEgg.ts: hacker mode toggle-off calls restoreTheme()
- App.vue: calls initTheme() on mount before restore()
- dev-api.py: POST /api/settings/theme endpoint persists to user.yaml
2026-04-04 22:22:04 -07:00
64554dbef1 feat(#43): numbered SQL migration runner (Rails-style)
Some checks failed
CI / test (push) Failing after 19s
- migrations/001_baseline.sql: full schema baseline (all tables/cols)
- scripts/db_migrate.py: apply sorted *.sql files, track in schema_migrations
- Wired into FastAPI startup and Streamlit app.py startup
- Replaces ad-hoc digest_queue CREATE in _startup()
- 6 tests covering apply, idempotency, partial apply, failure rollback
- docs/developer-guide/contributing.md: migration authoring guide
2026-04-04 22:17:42 -07:00
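The Rails-style runner described above can be sketched as follows. This is an illustrative reconstruction, not the actual scripts/db_migrate.py — the real `migrate_db` takes only a DB path, whereas this sketch also takes the migrations directory explicitly, and real transactional rollback around `executescript` needs more care than shown here:

```python
import sqlite3
from pathlib import Path

def migrate_db(db_path: Path, migrations_dir: Path) -> list:
    """Apply numbered *.sql files in sorted filename order, recording each
    applied version in schema_migrations so reruns are idempotent."""
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
        )
        applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
        newly_applied = []
        for sql_file in sorted(migrations_dir.glob("*.sql")):
            version = sql_file.stem  # e.g. "001_baseline"
            if version in applied:
                continue  # already tracked — skip
            conn.executescript(sql_file.read_text())
            conn.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))
            conn.commit()
            newly_applied.append(version)
        return newly_applied
    finally:
        conn.close()
```

Sorting on filename is what makes the zero-padded `001_`, `002_` prefixes load-bearing.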
065c02feb7 feat(vue): Home dashboard parity — Enrich button, Danger Zone, setup banners (closes #57)
Some checks failed
CI / test (push) Failing after 20s
API additions (dev-api.py):
- GET /api/tasks — list active background tasks
- DELETE /api/tasks/{task_id} — per-task cancel
- POST /api/tasks/kill — kill all stuck tasks
- POST /api/tasks/discovery|email-sync|enrich|score|sync — queue/trigger each workflow
- POST /api/jobs/archive — archive by statuses array
- POST /api/jobs/purge — hard delete by statuses or target (email/non_remote/rescrape)
- POST /api/jobs/add — queue URL imports
- POST /api/jobs/upload-csv — upload CSV with URL column
- GET  /api/config/setup-banners — list undismissed onboarding hints
- POST /api/config/setup-banners/{key}/dismiss — dismiss a banner

HomeView.vue:
- 4th WorkflowButton: "Fill Missing Descriptions" (always visible, not gated on enrichment_enabled)
- Danger Zone redesign: scope radio (pending-only vs pending+approved), Archive & reset (primary)
  vs Hard purge (secondary), inline confirm dialogs, active task list with per-task cancel,
  Kill all stuck button, More Options (email purge / non-remote / wipe+rescrape)
- Setup banners: dismissible onboarding hints pulled from /api/config/setup-banners,
  5-second polling for active task list to stay live

app/Home.py:
- Danger Zone redesign: same scope radio + archive/purge with confirm steps
- Background task list with per-task cancel and Kill all stuck button
- More options expander (email purge, non-remote, wipe+rescrape)
- Setup banners section at page bottom
2026-04-04 22:05:06 -07:00
53b07568d9 feat(vue): accumulated parity work — Q&A, Apply highlights, AppNav switcher, cloud API
API additions (dev-api.py split across this and next commit):
- /api/jobs/{job_id}/qa GET/PATCH/suggest — Interview Prep answer storage + LLM suggestions
- /api/settings/ui-preference POST — persist streamlit/vue preference to user.yaml
- cancel_task() added to scripts/db.py (per-task cancel for Danger Zone)

Vue / UI:
- AppNav: " Classic" button to switch back to Streamlit UI (writes cookie + persists to user.yaml)
- ApplyWorkspace: Resume Highlights panel (collapsible skills/domains/keywords with job-match highlighting)
- SettingsView: hide Data tab in cloud mode (showData guard)
- ResumeProfileView: minor improvements
- useApi.ts: error handling improvements

Infra:
- compose.cloud.yml: add api service (uvicorn dev_api running in cloud container)
- docker/web/nginx.conf: proxy /api/* to api service in cloud mode
- README.md: Vue SPA now listed as Free tier feature
2026-04-04 22:04:51 -07:00
173da49087 feat: wire circuitforge-core config.load_env at entry points (closes #68 partial)
Some checks failed
CI / test (push) Failing after 19s
- app/app.py: load_env at module level (safe in Docker, fills gaps on bare-metal)
- dev_api.py: load_env in startup handler (avoids test-env pollution)
- requirements.txt: note >= 0.7.0 requirement; TODO tag once cf-core cuts release

DB migration runner deferral: tracked in #43 (Rails-style numbered migrations)
CFOrchClient VRAM wiring: already present in task_scheduler via CF_ORCH_URL env var
2026-04-04 19:37:58 -07:00
1ab1dffc47 feat: cf-core env-var LLM config + coordinator auth (closes #67)
Some checks failed
CI / test (push) Failing after 38s
- LLMRouter shim: tri-level config priority (local yaml > user yaml > env-var)
- .env.example: document OLLAMA_HOST, OLLAMA_MODEL, OPENAI_MODEL, ANTHROPIC_MODEL,
  CF_LICENSE_KEY, CF_ORCH_URL
- Wizard Step 5: env-var setup hint + optional Ollama fields for remote profile
- Preflight: write OLLAMA_HOST to .env when Ollama is adopted from host process
2026-04-04 19:27:24 -07:00
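The tri-level priority (local yaml > user yaml > env-var) amounts to a first-non-empty lookup across sources. A minimal sketch with a hypothetical `resolve_llm_setting` helper — the real LLMRouter shim is not shown here and may differ:

```python
import os
from typing import Optional

def resolve_llm_setting(key: str, local_cfg: dict, user_cfg: dict,
                        env_var: str) -> Optional[str]:
    """First non-empty value wins: local config/llm.yaml, then the user's
    yaml, then the environment variable (e.g. OLLAMA_HOST)."""
    for source in (local_cfg, user_cfg):
        value = source.get(key)
        if value:
            return value
    return os.environ.get(env_var) or None
```

Because the env-var sits at the bottom of the chain, a checked-in `.env` can configure a fresh install without clobbering any yaml a user has written by hand.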
9392ee2979 fix: address code review — drop OLLAMA_RESEARCH_HOST, fix test fidelity, simplify model guard 2026-04-04 19:26:08 -07:00
3f376347d6 feat(preflight): write OLLAMA_HOST to .env when Ollama is adopted from host
When preflight.py adopts a host-running Ollama (or ollama_research) service,
write OLLAMA_HOST (and OLLAMA_RESEARCH_HOST) into .env using host.docker.internal
so LLMRouter's env-var auto-config resolves the correct address from inside the
Docker container without requiring a config/llm.yaml to exist.
2026-04-04 19:02:26 -07:00
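Writing OLLAMA_HOST into .env must be idempotent across preflight runs: replace the key's line if it already exists, append otherwise. A sketch with a hypothetical `write_env_var` helper — the actual preflight.py implementation is not shown here:

```python
from pathlib import Path

def write_env_var(env_path: Path, key: str, value: str) -> None:
    """Idempotently set KEY=value in a .env file: replace an existing
    assignment line for the key, otherwise append one."""
    lines = env_path.read_text().splitlines() if env_path.exists() else []
    entry = f"{key}={value}"
    for i, line in enumerate(lines):
        # ignore comments; match only real "KEY=" assignments
        if line.split("#", 1)[0].strip().startswith(f"{key}="):
            lines[i] = entry
            break
    else:
        lines.append(entry)
    env_path.write_text("\n".join(lines) + "\n")
```

Usage in the spirit of this commit: `write_env_var(Path(".env"), "OLLAMA_HOST", "http://host.docker.internal:11434")`, so the container-side router resolves the host's Ollama.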
cd865a9e77 feat(wizard): surface env-var LLM setup hint + optional Ollama field in Step 5 2026-04-04 18:39:16 -07:00
c5e2dc975f chore: add LLM env-var config + CF coordinator vars to .env.example 2026-04-04 18:37:58 -07:00
f62a9d9901 feat: llm_router shim — tri-level config priority (local > user > env-var) 2026-04-04 18:36:29 -07:00
b79d13b4f2 feat(vue): parity gaps #50, #54, #61 — sort/filter, research modal, draft CL button
Some checks failed
CI / test (push) Failing after 30s
#50 Job Review list view — sort + filter controls:
- Sort by best match / newest first / company A-Z (client-side computed)
- Remote-only checkbox filter
- Job count indicator; filters reset on tab switch
- Remote badge on list items

#61 Cover letter generation from approved tab:
- ' Draft' button on each approved-list item → /apply/:id
- No extra API call; ApplyWorkspace handles generation from there

#54 Company research modal (all API endpoints already existed):
- CompanyResearchModal.vue: 3-state machine (empty→generating→ready)
  polling /research/task every 3s, displays all 7 research sections
  (company, leadership, talking points, tech, funding, red flags,
  accessibility), copy-to-clipboard for talking points, ↺ Refresh
- InterviewCard: new 'research' emit + '🔍 Research' button for
  phone_screen/interviewing/offer stages
- InterviewsView: wires modal with researchJobId/Title/AutoGen state;
  auto-opens modal with autoGenerate=true when a job is moved to
  phone_screen (mirrors Streamlit behaviour)
2026-04-02 19:26:13 -07:00
5f4eecbc02 chore(release): v0.8.5
Some checks failed
CI / test (push) Failing after 27s
2026-04-02 18:47:33 -07:00
9069447cfc Merge pull request 'feat(wizard): Vue onboarding wizard + user config isolation fixes' (#65) from feature/vue-wizard into main
Some checks failed
CI / test (push) Failing after 26s
2026-04-02 18:46:42 -07:00
44 changed files with 3602 additions and 401 deletions

.env.example

@@ -19,6 +19,14 @@ VLLM_MAX_MODEL_LEN=4096 # increase to 8192 for Thinking models with
VLLM_GPU_MEM_UTIL=0.75 # lower to 0.6 if sharing GPU with other services
OLLAMA_DEFAULT_MODEL=llama3.2:3b
+
+# ── LLM env-var auto-config (alternative to config/llm.yaml) ─────────────────
+# Set any of these to configure LLM backends without needing a config/llm.yaml.
+# Priority: Anthropic > OpenAI-compat > Ollama (always tried as local fallback).
+OLLAMA_HOST=http://localhost:11434        # Ollama host; override if on a different machine
+OLLAMA_MODEL=llama3.2:3b                  # model to request from Ollama
+OPENAI_MODEL=gpt-4o-mini                  # model override for OpenAI-compat backend
+ANTHROPIC_MODEL=claude-haiku-4-5-20251001 # model override for Anthropic backend

# API keys (required for remote profile)
ANTHROPIC_API_KEY=
OPENAI_COMPAT_URL=
@@ -31,6 +39,12 @@ FORGEJO_API_URL=https://git.opensourcesolarpunk.com/api/v1
# GITHUB_TOKEN= # future — enable when public mirror is active
# GITHUB_REPO=  # future
+
+# ── CF-hosted coordinator (Paid+ tier) ───────────────────────────────────────
+# Set CF_LICENSE_KEY to authenticate with the hosted coordinator.
+# Leave both blank for local self-hosted cf-orch or bare-metal inference.
+CF_LICENSE_KEY=
+CF_ORCH_URL=https://orch.circuitforge.tech

# Cloud multi-tenancy (compose.cloud.yml only — do not set for local installs)
CLOUD_MODE=false
CLOUD_DATA_ROOT=/devl/menagerie-data

CHANGELOG.md

@@ -9,6 +9,44 @@ Format follows [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
---
## [0.8.5] — 2026-04-02
### Added
- **Vue onboarding wizard** — 7-step first-run setup replaces the Streamlit wizard
in the Vue SPA: Hardware detection → Tier → Resume upload/build → Identity →
Inference & API keys → Search preferences → Integrations. Progress saves to
`user.yaml` on every step; crash-recovery resumes from the last completed step.
- **Wizard API endpoints** — `GET /api/wizard/status`, `POST /api/wizard/step`,
`GET /api/wizard/hardware`, `POST /api/wizard/inference/test`,
`POST /api/wizard/complete`. Inference test always soft-fails so Ollama being
unreachable never blocks setup completion.
- **Cloud auto-skip** — cloud instances automatically complete steps 1 (hardware),
2 (tier), and 5 (inference) and drop the user directly on the Resume step.
- **`wizardGuard` router gate** — all Vue routes require wizard completion; completed
users are bounced away from `/setup` to `/`.
- **Chip-input search step** — job titles and locations entered as press-Enter/comma
chips; validates at least one title before advancing.
- **Integrations tile grid** — optional step 7 shows Notion, Calendar, Slack, Discord,
Drive with paid-tier badges; skippable on Finish.
### Fixed
- **User config isolation: dangerous fallback removed** — `_user_yaml_path()` fell
back to `/devl/job-seeker/config/user.yaml` (legacy profile) when `user.yaml`
didn't exist at the expected path; new users now get an empty dict instead of
another user's data. Affects profile, resume, search, and all wizard endpoints.
- **Resume path not user-isolated** — `RESUME_PATH = Path("config/plain_text_resume.yaml")`
was a relative CWD path shared across all users. Replaced with `_resume_path()`
derived from `_user_yaml_path()` / `STAGING_DB`.
- **Resume upload silently returned empty data** — `upload_resume` was passing a
file path string to `structure_resume()` which expects raw text; now reads bytes
and dispatches to the correct extractor (`extract_text_from_pdf` / `_docx` / `_odt`).
- **Wizard resume step read wrong envelope field** — `WizardResumeStep.vue` read
`data.experience` but the upload response wraps parsed data under `data.data`.
---
## [0.8.4] — 2026-04-02
### Fixed

README.md

@@ -154,7 +154,7 @@ Re-enter the wizard any time via **Settings → Developer → Reset wizard**.
| Calendar sync (Google, Apple) | Paid |
| Slack notifications | Paid |
| CircuitForge shared cover-letter model | Paid |
-| Vue 3 SPA beta UI | Paid |
+| Vue 3 SPA — full UI with onboarding wizard, job board, apply workspace, sort/filter, research modal, draft cover letter | Free |
| **Voice guidelines** (custom writing style & tone) | Premium with LLM¹ ² |
| Cover letter model fine-tuning (your writing, your model) | Premium |
| Multi-user support | Premium |

app/Home.py

@@ -19,8 +19,8 @@ _profile = UserProfile(_USER_YAML) if UserProfile.exists(_USER_YAML) else None
_name = _profile.name if _profile else "Job Seeker"

from scripts.db import init_db, get_job_counts, purge_jobs, purge_email_data, \
-    purge_non_remote, archive_jobs, kill_stuck_tasks, get_task_for_job, get_active_tasks, \
-    insert_job, get_existing_urls
+    purge_non_remote, archive_jobs, kill_stuck_tasks, cancel_task, \
+    get_task_for_job, get_active_tasks, insert_job, get_existing_urls
from scripts.task_runner import submit_task
from app.cloud_session import resolve_session, get_db_path
@@ -376,178 +376,145 @@ _scrape_status()

st.divider()

# ── Danger zone ───────────────────────────────────────────────────────────────
with st.expander("⚠️ Danger Zone", expanded=False):

    # ── Queue reset (the common case) ─────────────────────────────────────────
    st.markdown("**Queue reset**")
    st.caption(
        "Archive clears your review queue while keeping job URLs for dedup, "
        "so the same listings won't resurface on the next discovery run. "
        "Use hard purge only if you want a full clean slate including dedup history."
    )
    _scope = st.radio(
        "Clear scope",
        ["Pending only", "Pending + approved (stale search)"],
        horizontal=True,
        label_visibility="collapsed",
    )
    _scope_statuses = (
        ["pending"] if _scope == "Pending only" else ["pending", "approved"]
    )

    _qc1, _qc2, _qc3 = st.columns([2, 2, 4])
    if _qc1.button("📦 Archive & reset", use_container_width=True, type="primary"):
        st.session_state["confirm_dz"] = "archive"
    if _qc2.button("🗑 Hard purge (delete)", use_container_width=True):
        st.session_state["confirm_dz"] = "purge"

    if st.session_state.get("confirm_dz") == "archive":
        st.info(
            f"Archive **{', '.join(_scope_statuses)}** jobs? "
            "URLs are kept for dedup — nothing is permanently deleted."
        )
        _dc1, _dc2 = st.columns(2)
        if _dc1.button("Yes, archive", type="primary", use_container_width=True, key="dz_archive_confirm"):
            n = archive_jobs(get_db_path(), statuses=_scope_statuses)
            st.success(f"Archived {n} jobs.")
            st.session_state.pop("confirm_dz", None)
            st.rerun()
        if _dc2.button("Cancel", use_container_width=True, key="dz_archive_cancel"):
            st.session_state.pop("confirm_dz", None)
            st.rerun()

    if st.session_state.get("confirm_dz") == "purge":
        st.warning(
            f"Permanently delete **{', '.join(_scope_statuses)}** jobs? "
            "This removes the URLs from dedup history too. Cannot be undone."
        )
        _dc1, _dc2 = st.columns(2)
        if _dc1.button("Yes, delete", type="primary", use_container_width=True, key="dz_purge_confirm"):
            n = purge_jobs(get_db_path(), statuses=_scope_statuses)
            st.success(f"Deleted {n} jobs.")
            st.session_state.pop("confirm_dz", None)
            st.rerun()
        if _dc2.button("Cancel", use_container_width=True, key="dz_purge_cancel"):
            st.session_state.pop("confirm_dz", None)
            st.rerun()

    st.divider()

    # ── Background tasks ──────────────────────────────────────────────────────
    _active = get_active_tasks(get_db_path())
    st.markdown(f"**Background tasks** — {len(_active)} active")

    if _active:
        _task_icons = {"cover_letter": "✉️", "research": "🔍", "discovery": "🌐", "enrich_descriptions": "📝"}
        for _t in _active:
            _tc1, _tc2, _tc3 = st.columns([3, 4, 2])
            _icon = _task_icons.get(_t["task_type"], "⚙️")
            _tc1.caption(f"{_icon} `{_t['task_type']}`")
            _job_label = f"{_t['title']} @ {_t['company']}" if _t.get("title") else f"job #{_t['job_id']}"
            _tc2.caption(_job_label)
            _tc3.caption(f"_{_t['status']}_")
            if st.button("✕ Cancel", key=f"dz_cancel_task_{_t['id']}", use_container_width=True):
                cancel_task(get_db_path(), _t["id"])
                st.rerun()
        st.caption("")

    _kill_col, _ = st.columns([2, 6])
    if _kill_col.button("⏹ Kill all stuck", use_container_width=True, disabled=len(_active) == 0):
        killed = kill_stuck_tasks(get_db_path())
        st.success(f"Killed {killed} task(s).")
        st.rerun()

st.divider()

# ── Rarely needed (collapsed) ─────────────────────────────────────────────────
with st.expander("More options", expanded=False):
    _rare1, _rare2, _rare3 = st.columns(3)
    with _rare1:
        st.markdown("**Purge email data**")
        st.caption("Clears all email thread logs and email-sourced pending jobs.")
        if st.button("📧 Purge Email Data", use_container_width=True):
            st.session_state["confirm_dz"] = "email"
        if st.session_state.get("confirm_dz") == "email":
            st.warning("Deletes all email contacts and email-sourced jobs. Cannot be undone.")
            _ec1, _ec2 = st.columns(2)
            if _ec1.button("Yes, purge emails", type="primary", use_container_width=True, key="dz_email_confirm"):
                contacts, jobs = purge_email_data(get_db_path())
                st.success(f"Purged {contacts} email contacts, {jobs} email jobs.")
                st.session_state.pop("confirm_dz", None)
                st.rerun()
            if _ec2.button("Cancel", use_container_width=True, key="dz_email_cancel"):
                st.session_state.pop("confirm_dz", None)
                st.rerun()
    with _rare2:
        st.markdown("**Purge non-remote**")
        st.caption("Removes pending/approved/rejected on-site listings from the DB.")
        if st.button("🏢 Purge On-site Jobs", use_container_width=True):
            st.session_state["confirm_dz"] = "non_remote"
        if st.session_state.get("confirm_dz") == "non_remote":
            st.warning("Deletes all non-remote jobs not yet applied to. Cannot be undone.")
            _rc1, _rc2 = st.columns(2)
            if _rc1.button("Yes, purge on-site", type="primary", use_container_width=True, key="dz_nonremote_confirm"):
                deleted = purge_non_remote(get_db_path())
                st.success(f"Purged {deleted} non-remote jobs.")
                st.session_state.pop("confirm_dz", None)
                st.rerun()
            if _rc2.button("Cancel", use_container_width=True, key="dz_nonremote_cancel"):
                st.session_state.pop("confirm_dz", None)
                st.rerun()
    with _rare3:
        st.markdown("**Wipe all + re-scrape**")
        st.caption("Deletes all non-applied jobs then immediately runs a fresh discovery.")
        if st.button("🔄 Wipe + Re-scrape", use_container_width=True):
            st.session_state["confirm_dz"] = "rescrape"
        if st.session_state.get("confirm_dz") == "rescrape":
            st.warning("Wipes ALL pending, approved, and rejected jobs, then re-scrapes. Applied and synced records are kept.")
            _wc1, _wc2 = st.columns(2)
            if _wc1.button("Yes, wipe + scrape", type="primary", use_container_width=True, key="dz_rescrape_confirm"):
                purge_jobs(get_db_path(), statuses=["pending", "approved", "rejected"])
                submit_task(get_db_path(), "discovery", 0)
                st.session_state.pop("confirm_dz", None)
                st.rerun()
            if _wc2.button("Cancel", use_container_width=True, key="dz_rescrape_cancel"):
                st.session_state.pop("confirm_dz", None)
                st.rerun()

# ── Setup banners ─────────────────────────────────────────────────────────────
if _profile and _profile.wizard_complete:

app/app.py

@@ -17,10 +17,16 @@ sys.path.insert(0, str(Path(__file__).parent.parent))
logging.basicConfig(level=logging.WARNING, format="%(name)s %(levelname)s: %(message)s")

+# Load .env before any os.environ reads — safe to call inside Docker too
+# (uses setdefault, so Docker-injected vars take precedence over .env values)
+from circuitforge_core.config.settings import load_env as _load_env
+_load_env(Path(__file__).parent.parent / ".env")
+
IS_DEMO = os.environ.get("DEMO_MODE", "").lower() in ("1", "true", "yes")

import streamlit as st
from scripts.db import DEFAULT_DB, init_db, get_active_tasks
+from scripts.db_migrate import migrate_db
from app.feedback import inject_feedback_button
from app.cloud_session import resolve_session, get_db_path, get_config_dir, get_cloud_tier
import sqlite3
@@ -36,6 +42,7 @@ st.set_page_config(
resolve_session("peregrine")
init_db(get_db_path())
+migrate_db(Path(get_db_path()))

# Demo tier — initialize once per session (cookie persistence handled client-side)
if IS_DEMO and "simulated_tier" not in st.session_state:

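The comment above describes `load_env`'s setdefault semantics. A minimal sketch consistent with that description — the real `circuitforge_core.config.settings.load_env` is not shown here and may differ:

```python
import os
from pathlib import Path

def load_env(env_path: Path) -> None:
    """Parse KEY=value lines from a .env file into os.environ using setdefault,
    so variables already injected (e.g. by Docker) take precedence."""
    if not env_path.exists():
        return
    for raw in env_path.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        os.environ.setdefault(key.strip(), value.strip())
```

The setdefault rule is the whole point: calling this inside Docker is harmless because container-injected variables are never overwritten, while bare-metal runs get their gaps filled from the file.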
View file

@ -457,6 +457,11 @@ elif step == 5:
from app.wizard.step_inference import validate from app.wizard.step_inference import validate
st.subheader("Step 5 \u2014 Inference & API Keys") st.subheader("Step 5 \u2014 Inference & API Keys")
st.info(
"**Simplest setup:** set `OLLAMA_HOST` in your `.env` file — "
"Peregrine auto-detects it, no config file needed. "
"Or use the fields below to configure API keys and endpoints."
)
profile = saved_yaml.get("inference_profile", "remote")
if profile == "remote":
@@ -466,8 +471,18 @@ elif step == 5:
placeholder="https://api.together.xyz/v1")
openai_key = st.text_input("Endpoint API Key (optional)", type="password",
key="oai_key") if openai_url else ""
ollama_host = st.text_input("Ollama host (optional — local fallback)",
placeholder="http://localhost:11434",
key="ollama_host_input")
ollama_model = st.text_input("Ollama model (optional)",
value="llama3.2:3b",
key="ollama_model_input")
else:
st.info(f"Local mode ({profile}): Ollama provides inference.")
import os
_ollama_host_env = os.environ.get("OLLAMA_HOST", "")
if _ollama_host_env:
st.caption(f"OLLAMA_HOST from .env: `{_ollama_host_env}`")
anthropic_key = openai_url = openai_key = ""
with st.expander("Advanced — Service Ports & Hosts"):
@@ -546,6 +561,14 @@ elif step == 5:
if anthropic_key or openai_url:
env_path.write_text("\n".join(env_lines) + "\n")
if profile == "remote":
if ollama_host:
env_lines = _set_env(env_lines, "OLLAMA_HOST", ollama_host)
if ollama_model:
env_lines = _set_env(env_lines, "OLLAMA_MODEL", ollama_model)
if ollama_host or ollama_model:
env_path.write_text("\n".join(env_lines) + "\n")
_save_yaml({"services": svc, "wizard_step": 5})
st.session_state.wizard_step = 6
st.rerun()


@@ -45,6 +45,30 @@ services:
- "host.docker.internal:host-gateway"
restart: unless-stopped
api:
build:
context: ..
dockerfile: peregrine/Dockerfile.cfcore
command: >
bash -c "uvicorn dev_api:app --host 0.0.0.0 --port 8601"
volumes:
- /devl/menagerie-data:/devl/menagerie-data
- ./config/llm.cloud.yaml:/app/config/llm.yaml:ro
environment:
- CLOUD_MODE=true
- CLOUD_DATA_ROOT=/devl/menagerie-data
- STAGING_DB=/devl/menagerie-data/cloud-default.db
- DIRECTUS_JWT_SECRET=${DIRECTUS_JWT_SECRET}
- CF_SERVER_SECRET=${CF_SERVER_SECRET}
- PLATFORM_DB_URL=${PLATFORM_DB_URL}
- HEIMDALL_URL=${HEIMDALL_URL:-http://cf-license:8000}
- HEIMDALL_ADMIN_TOKEN=${HEIMDALL_ADMIN_TOKEN}
- PYTHONUNBUFFERED=1
- FORGEJO_API_TOKEN=${FORGEJO_API_TOKEN:-}
extra_hosts:
- "host.docker.internal:host-gateway"
restart: unless-stopped
web:
build:
context: .
@@ -53,6 +77,8 @@ services:
VITE_BASE_PATH: /peregrine/
ports:
- "8508:80"
depends_on:
- api
restart: unless-stopped
searxng:

File diff suppressed because it is too large.


@@ -2,6 +2,8 @@ server {
listen 80;
server_name _;
client_max_body_size 20m;
root /usr/share/nginx/html;
index index.html;


@@ -102,6 +102,23 @@ Before opening a pull request:
---
## Database Migrations
Peregrine uses a numbered SQL migration system (Rails-style). Each migration is a `.sql` file in the `migrations/` directory at the repo root, named `NNN_description.sql` (e.g. `002_add_foo_column.sql`). Applied migrations are tracked in a `schema_migrations` table in each user database.
### Adding a migration
1. Create `migrations/NNN_description.sql` where `NNN` is the next sequential number (zero-padded to 3 digits).
2. Write standard SQL — `CREATE TABLE IF NOT EXISTS`, `ALTER TABLE ADD COLUMN`, etc. Keep each migration idempotent where possible.
3. Do **not** modify `scripts/db.py`'s legacy `_MIGRATIONS` lists — those are superseded and will be removed once all active databases have been bootstrapped by the migration runner.
4. The runner (`scripts/db_migrate.py`) applies pending migrations at startup automatically (both FastAPI and Streamlit paths call `migrate_db(db_path)`).
### Rollbacks
SQLite does not support transactional DDL for all statement types. Write forward-only migrations. If you need to undo a schema change, add a new migration that reverses it.
---
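The runner loop behind steps 1–4 can be sketched in a few lines. This is a simplified, standalone illustration of the pattern only (the real runner lives in `scripts/db_migrate.py`; the `jobs` table and migration name below are made up for the demo):

```python
import sqlite3
import tempfile
from pathlib import Path

def apply_migrations(db_path: Path, migrations_dir: Path) -> list[str]:
    """Apply pending NNN_description.sql files in sorted order, once each."""
    con = sqlite3.connect(db_path)
    try:
        con.execute(
            "CREATE TABLE IF NOT EXISTS schema_migrations ("
            "version TEXT PRIMARY KEY, "
            "applied_at TEXT NOT NULL DEFAULT (datetime('now')))"
        )
        done = {row[0] for row in con.execute("SELECT version FROM schema_migrations")}
        applied = []
        for path in sorted(migrations_dir.glob("*.sql")):  # 001_, 002_, ... in order
            version = path.stem
            if version in done:
                continue  # already recorded, so each migration runs exactly once
            con.executescript(path.read_text())
            con.execute("INSERT INTO schema_migrations (version) VALUES (?)", (version,))
            con.commit()
            applied.append(version)
        return applied
    finally:
        con.close()

# Demo against a throwaway database and a made-up migration
tmp = Path(tempfile.mkdtemp())
(tmp / "migrations").mkdir()
(tmp / "migrations" / "001_baseline.sql").write_text(
    "CREATE TABLE IF NOT EXISTS jobs (id INTEGER PRIMARY KEY);"
)
first = apply_migrations(tmp / "user.db", tmp / "migrations")
second = apply_migrations(tmp / "user.db", tmp / "migrations")
print(first, second)  # ['001_baseline'] []
```

Tracking versions by file stem in `schema_migrations` is what makes the second run a no-op.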
## What NOT to Do
- Do not commit `config/user.yaml`, `config/notion.yaml`, `config/email.yaml`, `config/adzuna.yaml`, or any `config/integrations/*.yaml` — all are gitignored


@@ -0,0 +1,97 @@
-- Migration 001: Baseline schema
-- Captures the full schema as of v0.8.5 (all columns including those added via ALTER TABLE)
CREATE TABLE IF NOT EXISTS jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT,
company TEXT,
url TEXT UNIQUE,
source TEXT,
location TEXT,
is_remote INTEGER DEFAULT 0,
salary TEXT,
description TEXT,
match_score REAL,
keyword_gaps TEXT,
date_found TEXT,
status TEXT DEFAULT 'pending',
notion_page_id TEXT,
cover_letter TEXT,
applied_at TEXT,
interview_date TEXT,
rejection_stage TEXT,
phone_screen_at TEXT,
interviewing_at TEXT,
offer_at TEXT,
hired_at TEXT,
survey_at TEXT,
calendar_event_id TEXT,
optimized_resume TEXT,
ats_gap_report TEXT
);
CREATE TABLE IF NOT EXISTS job_contacts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER,
direction TEXT,
subject TEXT,
from_addr TEXT,
to_addr TEXT,
body TEXT,
received_at TEXT,
is_response_needed INTEGER DEFAULT 0,
responded_at TEXT,
message_id TEXT,
stage_signal TEXT,
suggestion_dismissed INTEGER DEFAULT 0
);
CREATE TABLE IF NOT EXISTS company_research (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER UNIQUE,
generated_at TEXT,
company_brief TEXT,
ceo_brief TEXT,
talking_points TEXT,
raw_output TEXT,
tech_brief TEXT,
funding_brief TEXT,
competitors_brief TEXT,
red_flags TEXT,
scrape_used INTEGER DEFAULT 0,
accessibility_brief TEXT
);
CREATE TABLE IF NOT EXISTS background_tasks (
id INTEGER PRIMARY KEY AUTOINCREMENT,
task_type TEXT,
job_id INTEGER,
params TEXT,
status TEXT DEFAULT 'pending',
error TEXT,
created_at TEXT,
started_at TEXT,
finished_at TEXT,
stage TEXT,
updated_at TEXT
);
CREATE TABLE IF NOT EXISTS survey_responses (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER,
survey_name TEXT,
received_at TEXT,
source TEXT,
raw_input TEXT,
image_path TEXT,
mode TEXT,
llm_output TEXT,
reported_score REAL,
created_at TEXT
);
CREATE TABLE IF NOT EXISTS digest_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_contact_id INTEGER UNIQUE,
created_at TEXT
);


@@ -3,10 +3,12 @@
# Keep in sync with environment.yml
# ── CircuitForge shared core ───────────────────────────────────────────────
# Requires circuitforge-core >= 0.8.0 (config.load_env, db, tasks; resources moved to circuitforge-orch).
# Local dev / Docker (parent-context build): path install works because
# circuitforge-core/ is a sibling directory.
# CI / fresh checkouts: falls back to the Forgejo VCS URL below.
# To use local editable install run: pip install -e ../circuitforge-core
# TODO: pin to @v0.8.0 tag once cf-core cuts a release tag.
git+https://git.opensourcesolarpunk.com/Circuit-Forge/circuitforge-core.git@main
# ── Web UI ────────────────────────────────────────────────────────────────


@@ -383,6 +383,19 @@ def mark_applied(db_path: Path = DEFAULT_DB, ids: list[int] = None) -> None:
conn.close()
def cancel_task(db_path: Path = DEFAULT_DB, task_id: int = 0) -> bool:
"""Cancel a single queued/running task by id. Returns True if a row was updated."""
conn = sqlite3.connect(db_path)
count = conn.execute(
"UPDATE background_tasks SET status='failed', error='Cancelled by user',"
" finished_at=datetime('now') WHERE id=? AND status IN ('queued','running')",
(task_id,),
).rowcount
conn.commit()
conn.close()
return count > 0
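The new `cancel_task` leans on `cursor.rowcount` from a guarded UPDATE to report whether anything was actually cancelled. A minimal sketch of that pattern against an in-memory table (columns trimmed to the ones the statement touches):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE background_tasks ("
    "id INTEGER PRIMARY KEY, status TEXT, error TEXT, finished_at TEXT)"
)
con.execute("INSERT INTO background_tasks (id, status) VALUES (1, 'running')")
con.execute("INSERT INTO background_tasks (id, status) VALUES (2, 'done')")

def cancel(con: sqlite3.Connection, task_id: int) -> bool:
    # The status guard means finished tasks never match, and rowcount
    # tells us whether the UPDATE touched a row at all.
    count = con.execute(
        "UPDATE background_tasks SET status='failed', error='Cancelled by user',"
        " finished_at=datetime('now') WHERE id=? AND status IN ('queued','running')",
        (task_id,),
    ).rowcount
    con.commit()
    return count > 0

print(cancel(con, 1))  # True: running task flipped to failed
print(cancel(con, 2))  # False: already done, the guard excludes it
```

Returning the boolean lets the API layer distinguish "cancelled" from "nothing to cancel" without a second SELECT.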
def kill_stuck_tasks(db_path: Path = DEFAULT_DB) -> int:
"""Mark all queued/running background tasks as failed. Returns count killed."""
conn = sqlite3.connect(db_path)

scripts/db_migrate.py (new file, 73 lines)

@@ -0,0 +1,73 @@
"""
db_migrate.py — Rails-style numbered SQL migration runner for Peregrine user DBs.
Migration files live in migrations/ (sibling to this script's parent directory),
named NNN_description.sql (e.g. 001_baseline.sql). They are applied in sorted
order and tracked in the schema_migrations table so each runs exactly once.
Usage:
from scripts.db_migrate import migrate_db
migrate_db(Path("/path/to/user.db"))
"""
import logging
import sqlite3
from pathlib import Path
log = logging.getLogger(__name__)
# Resolved at import time: peregrine repo root / migrations/
_MIGRATIONS_DIR = Path(__file__).parent.parent / "migrations"
_CREATE_MIGRATIONS_TABLE = """
CREATE TABLE IF NOT EXISTS schema_migrations (
version TEXT PRIMARY KEY,
applied_at TEXT NOT NULL DEFAULT (datetime('now'))
)
"""
def migrate_db(db_path: Path) -> list[str]:
"""Apply any pending migrations to db_path. Returns list of applied versions."""
applied: list[str] = []
con = sqlite3.connect(db_path)
try:
con.execute(_CREATE_MIGRATIONS_TABLE)
con.commit()
if not _MIGRATIONS_DIR.is_dir():
log.warning("migrations/ directory not found at %s — skipping", _MIGRATIONS_DIR)
return applied
migration_files = sorted(_MIGRATIONS_DIR.glob("*.sql"))
if not migration_files:
return applied
already_applied = {
row[0] for row in con.execute("SELECT version FROM schema_migrations")
}
for path in migration_files:
version = path.stem # e.g. "001_baseline"
if version in already_applied:
continue
sql = path.read_text(encoding="utf-8")
log.info("Applying migration %s to %s", version, db_path.name)
try:
con.executescript(sql)
con.execute(
"INSERT INTO schema_migrations (version) VALUES (?)", (version,)
)
con.commit()
applied.append(version)
log.info("Migration %s applied successfully", version)
except Exception as exc:
con.rollback()
log.error("Migration %s failed: %s", version, exc)
raise RuntimeError(f"Migration {version} failed: {exc}") from exc
finally:
con.close()
return applied


@ -1,19 +1,46 @@
""" """
LLM abstraction layer with priority fallback chain. LLM abstraction layer with priority fallback chain.
Reads config/llm.yaml. Tries backends in order; falls back on any error. Config lookup order:
1. <repo>/config/llm.yaml per-install local config
2. ~/.config/circuitforge/llm.yaml user-level config (circuitforge-core default)
3. env-var auto-config (ANTHROPIC_API_KEY, OPENAI_API_KEY, OLLAMA_HOST, )
""" """
from pathlib import Path
from circuitforge_core.llm import LLMRouter as _CoreLLMRouter
# Kept for backwards-compatibility — external callers that import CONFIG_PATH
# from this module continue to work.
CONFIG_PATH = Path(__file__).parent.parent / "config" / "llm.yaml"
class LLMRouter(_CoreLLMRouter):
"""Peregrine-specific LLMRouter — tri-level config path priority.
When ``config_path`` is supplied (e.g. in tests) it is passed straight
through to the core. When omitted, the lookup order is:
1. <repo>/config/llm.yaml (per-install local config)
2. ~/.config/circuitforge/llm.yaml (user-level, circuitforge-core default)
3. env-var auto-config (ANTHROPIC_API_KEY, OPENAI_API_KEY, OLLAMA_HOST, …)
"""
def __init__(self, config_path: Path | None = None) -> None:
if config_path is not None:
# Explicit path supplied — use it directly (e.g. tests, CLI override).
super().__init__(config_path)
return
local = Path(__file__).parent.parent / "config" / "llm.yaml"
user_level = Path.home() / ".config" / "circuitforge" / "llm.yaml"
if local.exists():
super().__init__(local)
elif user_level.exists():
super().__init__(user_level)
else:
# No yaml found — let circuitforge-core's env-var auto-config run.
# The core default CONFIG_PATH (~/.config/circuitforge/llm.yaml)
# won't exist either, so _auto_config_from_env() will be triggered.
super().__init__()
# Module-level singleton for convenience


@@ -492,6 +492,12 @@ def main() -> None:
# binds a harmless free port instead of conflicting with the external service.
env_updates: dict[str, str] = {i["env_var"]: str(i["stub_port"]) for i in ports.values()}
env_updates["RECOMMENDED_PROFILE"] = profile
# When Ollama is adopted from the host process, write OLLAMA_HOST so
# LLMRouter's env-var auto-config finds it without needing config/llm.yaml.
ollama_info = ports.get("ollama")
if ollama_info and ollama_info.get("external"):
env_updates["OLLAMA_HOST"] = f"http://host.docker.internal:{ollama_info['resolved']}"
if offload_gb > 0:
env_updates["CPU_OFFLOAD_GB"] = str(offload_gb)
# GPU info for the app container (which lacks nvidia-smi access)


@@ -22,7 +22,7 @@ from typing import Callable, Optional
from circuitforge_core.tasks.scheduler import (
TaskSpec,  # re-export unchanged
LocalScheduler as _CoreTaskScheduler,
)
logger = logging.getLogger(__name__)
@@ -94,15 +94,6 @@ class TaskScheduler(_CoreTaskScheduler):
def __init__(self, db_path: Path, run_task_fn: Callable) -> None:
budgets, max_depth = _load_config_overrides(db_path)
# Resolve VRAM using module-level _get_gpus so tests can monkeypatch it
try:
gpus = _get_gpus()
available_vram: float = (
sum(g["vram_total_gb"] for g in gpus) if gpus else 999.0
)
except Exception:
available_vram = 999.0
# Warn under this module's logger for any task types with no VRAM budget
# (mirrors the core warning but captures under scripts.task_scheduler
# so existing tests using caplog.at_level(logger="scripts.task_scheduler") pass)
@@ -113,19 +104,12 @@ class TaskScheduler(_CoreTaskScheduler):
"defaulting to 0.0 GB (unlimited concurrency for this type)", t
)
coordinator_url = os.environ.get(
"CF_ORCH_URL", "http://localhost:7700"
).rstrip("/")
super().__init__(
db_path=db_path,
run_task_fn=run_task_fn,
task_types=LLM_TASK_TYPES,
vram_budgets=budgets,
available_vram_gb=available_vram,
max_queue_depth=max_depth,
coordinator_url=coordinator_url,
service_name="peregrine",
)
def enqueue(

tests/test_db_migrate.py (new file, 148 lines)

@@ -0,0 +1,148 @@
"""Tests for scripts/db_migrate.py — numbered SQL migration runner."""
import sqlite3
import textwrap
from pathlib import Path
import pytest
from scripts.db_migrate import migrate_db
# ── helpers ───────────────────────────────────────────────────────────────────
def _applied(db_path: Path) -> list[str]:
con = sqlite3.connect(db_path)
try:
rows = con.execute("SELECT version FROM schema_migrations ORDER BY version").fetchall()
return [r[0] for r in rows]
finally:
con.close()
def _tables(db_path: Path) -> set[str]:
con = sqlite3.connect(db_path)
try:
rows = con.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%'"
).fetchall()
return {r[0] for r in rows}
finally:
con.close()
# ── tests ──────────────────────────────────────────────────────────────────────
def test_creates_schema_migrations_table(tmp_path):
"""Running against an empty DB creates the tracking table."""
db = tmp_path / "test.db"
(tmp_path / "migrations").mkdir() # empty migrations dir
# Patch the module-level _MIGRATIONS_DIR
import scripts.db_migrate as m
orig = m._MIGRATIONS_DIR
m._MIGRATIONS_DIR = tmp_path / "migrations"
try:
migrate_db(db)
assert "schema_migrations" in _tables(db)
finally:
m._MIGRATIONS_DIR = orig
def test_applies_migration_file(tmp_path):
"""A .sql file in migrations/ is applied and recorded."""
db = tmp_path / "test.db"
mdir = tmp_path / "migrations"
mdir.mkdir()
(mdir / "001_test.sql").write_text(
"CREATE TABLE IF NOT EXISTS widgets (id INTEGER PRIMARY KEY, name TEXT);"
)
import scripts.db_migrate as m
orig = m._MIGRATIONS_DIR
m._MIGRATIONS_DIR = mdir
try:
applied = migrate_db(db)
assert applied == ["001_test"]
assert "widgets" in _tables(db)
assert _applied(db) == ["001_test"]
finally:
m._MIGRATIONS_DIR = orig
def test_idempotent_second_run(tmp_path):
"""Running migrate_db twice does not re-apply migrations."""
db = tmp_path / "test.db"
mdir = tmp_path / "migrations"
mdir.mkdir()
(mdir / "001_test.sql").write_text(
"CREATE TABLE IF NOT EXISTS widgets (id INTEGER PRIMARY KEY, name TEXT);"
)
import scripts.db_migrate as m
orig = m._MIGRATIONS_DIR
m._MIGRATIONS_DIR = mdir
try:
migrate_db(db)
applied = migrate_db(db) # second run
assert applied == []
assert _applied(db) == ["001_test"]
finally:
m._MIGRATIONS_DIR = orig
def test_applies_only_new_migrations(tmp_path):
"""Migrations already in schema_migrations are skipped; only new ones run."""
db = tmp_path / "test.db"
mdir = tmp_path / "migrations"
mdir.mkdir()
(mdir / "001_first.sql").write_text(
"CREATE TABLE IF NOT EXISTS first_table (id INTEGER PRIMARY KEY);"
)
import scripts.db_migrate as m
orig = m._MIGRATIONS_DIR
m._MIGRATIONS_DIR = mdir
try:
migrate_db(db)
# Add a second migration
(mdir / "002_second.sql").write_text(
"CREATE TABLE IF NOT EXISTS second_table (id INTEGER PRIMARY KEY);"
)
applied = migrate_db(db)
assert applied == ["002_second"]
assert set(_applied(db)) == {"001_first", "002_second"}
assert "second_table" in _tables(db)
finally:
m._MIGRATIONS_DIR = orig
def test_migration_failure_raises(tmp_path):
"""A bad migration raises RuntimeError and does not record the version."""
db = tmp_path / "test.db"
mdir = tmp_path / "migrations"
mdir.mkdir()
(mdir / "001_bad.sql").write_text("THIS IS NOT VALID SQL !!!")
import scripts.db_migrate as m
orig = m._MIGRATIONS_DIR
m._MIGRATIONS_DIR = mdir
try:
with pytest.raises(RuntimeError, match="001_bad"):
migrate_db(db)
assert _applied(db) == []
finally:
m._MIGRATIONS_DIR = orig
def test_baseline_migration_runs(tmp_path):
"""The real 001_baseline.sql applies cleanly to a fresh database."""
db = tmp_path / "test.db"
applied = migrate_db(db)
assert "001_baseline" in applied
expected_tables = {
"jobs", "job_contacts", "company_research",
"background_tasks", "survey_responses", "digest_queue",
"schema_migrations",
}
assert expected_tables <= _tables(db)


@@ -145,7 +145,7 @@ def test_get_resume_missing_returns_not_exists(tmp_path, monkeypatch):
"""GET /api/settings/resume when file missing returns {exists: false}."""
fake_path = tmp_path / "config" / "plain_text_resume.yaml"
# Ensure the path doesn't exist
monkeypatch.setattr("dev_api._resume_path", lambda: fake_path)
from dev_api import app
c = TestClient(app)
@@ -157,7 +157,7 @@ def test_get_resume_missing_returns_not_exists(tmp_path, monkeypatch):
def test_post_resume_blank_creates_file(tmp_path, monkeypatch):
"""POST /api/settings/resume/blank creates the file."""
fake_path = tmp_path / "config" / "plain_text_resume.yaml"
monkeypatch.setattr("dev_api._resume_path", lambda: fake_path)
from dev_api import app
c = TestClient(app)
@@ -170,7 +170,7 @@ def test_post_resume_blank_creates_file(tmp_path, monkeypatch):
def test_get_resume_after_blank_returns_exists(tmp_path, monkeypatch):
"""GET /api/settings/resume after blank creation returns {exists: true}."""
fake_path = tmp_path / "config" / "plain_text_resume.yaml"
monkeypatch.setattr("dev_api._resume_path", lambda: fake_path)
from dev_api import app
c = TestClient(app)
@@ -212,7 +212,7 @@ def test_get_search_prefs_returns_dict(tmp_path, monkeypatch):
fake_path.parent.mkdir(parents=True, exist_ok=True)
with open(fake_path, "w") as f:
yaml.dump({"default": {"remote_preference": "remote", "job_boards": []}}, f)
monkeypatch.setattr("dev_api._search_prefs_path", lambda: fake_path)
from dev_api import app
c = TestClient(app)
@@ -227,7 +227,7 @@ def test_put_get_search_roundtrip(tmp_path, monkeypatch):
"""PUT then GET search prefs round-trip: saved field is returned."""
fake_path = tmp_path / "config" / "search_profiles.yaml"
fake_path.parent.mkdir(parents=True, exist_ok=True)
monkeypatch.setattr("dev_api._search_prefs_path", lambda: fake_path)
from dev_api import app
c = TestClient(app)
@@ -253,7 +253,7 @@ def test_put_get_search_roundtrip(tmp_path, monkeypatch):
def test_get_search_missing_file_returns_empty(tmp_path, monkeypatch):
"""GET /api/settings/search when file missing returns empty dict."""
fake_path = tmp_path / "config" / "search_profiles.yaml"
monkeypatch.setattr("dev_api._search_prefs_path", lambda: fake_path)
from dev_api import app
c = TestClient(app)
@@ -363,7 +363,7 @@ def test_get_services_cpu_profile(client):
def test_get_email_has_password_set_bool(tmp_path, monkeypatch):
"""GET /api/settings/system/email has password_set (bool) and no password key."""
fake_email_path = tmp_path / "email.yaml"
monkeypatch.setattr("dev_api._config_dir", lambda: fake_email_path.parent)
with patch("dev_api.get_credential", return_value=None):
from dev_api import app
c = TestClient(app)
@@ -378,7 +378,7 @@ def test_get_email_has_password_set_bool(tmp_path, monkeypatch):
def test_get_email_password_set_true_when_stored(tmp_path, monkeypatch):
"""password_set is True when credential is stored."""
fake_email_path = tmp_path / "email.yaml"
monkeypatch.setattr("dev_api._config_dir", lambda: fake_email_path.parent)
with patch("dev_api.get_credential", return_value="secret"):
from dev_api import app
c = TestClient(app)
@@ -426,10 +426,14 @@ def test_finetune_status_returns_status_and_pairs_count(client):
assert "pairs_count" in data
def test_finetune_status_idle_when_no_task(tmp_path, monkeypatch):
"""Status is 'idle' and pairs_count is 0 when no task exists."""
fake_jsonl = tmp_path / "cover_letters.jsonl"  # does not exist -> 0 pairs
monkeypatch.setattr("dev_api._TRAINING_JSONL", fake_jsonl)
with patch("scripts.task_runner.get_task_status", return_value=None, create=True):
from dev_api import app
c = TestClient(app)
resp = c.get("/api/settings/fine-tune/status")
assert resp.status_code == 200
data = resp.json()
assert data["status"] == "idle"
@@ -441,7 +445,7 @@ def test_finetune_status_idle_when_no_task(client):
def test_get_license_returns_tier_and_active(tmp_path, monkeypatch):
"""GET /api/settings/license returns tier and active fields."""
fake_license = tmp_path / "license.yaml"
monkeypatch.setattr("dev_api._license_path", lambda: fake_license)
from dev_api import app
c = TestClient(app)
@@ -455,7 +459,7 @@ def test_get_license_returns_tier_and_active(tmp_path, monkeypatch):
def test_get_license_defaults_to_free(tmp_path, monkeypatch):
"""GET /api/settings/license defaults to free tier when no file."""
fake_license = tmp_path / "license.yaml"
monkeypatch.setattr("dev_api._license_path", lambda: fake_license)
from dev_api import app
c = TestClient(app)
@@ -469,8 +473,7 @@ def test_get_license_defaults_to_free(tmp_path, monkeypatch):
def test_activate_license_valid_key_returns_ok(tmp_path, monkeypatch):
"""POST activate with valid key format returns {ok: true}."""
fake_license = tmp_path / "license.yaml"
monkeypatch.setattr("dev_api._license_path", lambda: fake_license)
monkeypatch.setattr("dev_api.CONFIG_DIR", tmp_path)
from dev_api import app
c = TestClient(app)
@@ -482,8 +485,7 @@ def test_activate_license_valid_key_returns_ok(tmp_path, monkeypatch):
def test_activate_license_invalid_key_returns_ok_false(tmp_path, monkeypatch):
"""POST activate with bad key format returns {ok: false}."""
fake_license = tmp_path / "license.yaml"
monkeypatch.setattr("dev_api._license_path", lambda: fake_license)
monkeypatch.setattr("dev_api.CONFIG_DIR", tmp_path)
from dev_api import app
c = TestClient(app)
@@ -495,8 +497,7 @@ def test_activate_license_invalid_key_returns_ok_false(tmp_path, monkeypatch):
def test_deactivate_license_returns_ok(tmp_path, monkeypatch):
"""POST /api/settings/license/deactivate returns 200 with ok."""
fake_license = tmp_path / "license.yaml"
monkeypatch.setattr("dev_api._license_path", lambda: fake_license)
monkeypatch.setattr("dev_api.CONFIG_DIR", tmp_path)
from dev_api import app
c = TestClient(app)
@@ -508,8 +509,7 @@ def test_deactivate_license_returns_ok(tmp_path, monkeypatch):
def test_activate_then_deactivate(tmp_path, monkeypatch):
"""Activate then deactivate: active goes False."""
fake_license = tmp_path / "license.yaml"
monkeypatch.setattr("dev_api._license_path", lambda: fake_license)
monkeypatch.setattr("dev_api.CONFIG_DIR", tmp_path)
from dev_api import app
c = TestClient(app)
@@ -580,7 +580,7 @@ def test_get_developer_returns_expected_fields(tmp_path, monkeypatch):
_write_user_yaml(user_yaml)
monkeypatch.setenv("STAGING_DB", str(db_dir / "staging.db"))
fake_tokens = tmp_path / "tokens.yaml"
monkeypatch.setattr("dev_api._tokens_path", lambda: fake_tokens)
from dev_api import app
c = TestClient(app)
@@ -602,7 +602,7 @@ def test_put_dev_tier_then_get(tmp_path, monkeypatch):
_write_user_yaml(user_yaml)
monkeypatch.setenv("STAGING_DB", str(db_dir / "staging.db"))
fake_tokens = tmp_path / "tokens.yaml"
monkeypatch.setattr("dev_api._tokens_path", lambda: fake_tokens)
from dev_api import app
c = TestClient(app)


@@ -0,0 +1,132 @@
"""Tests for Peregrine's LLMRouter shim — priority fallback logic."""
import sys
from pathlib import Path
from unittest.mock import patch, MagicMock, call
sys.path.insert(0, str(Path(__file__).parent.parent))
def _import_fresh():
"""Import scripts.llm_router fresh (bypass module cache)."""
import importlib
import scripts.llm_router as mod
importlib.reload(mod)
return mod
# ---------------------------------------------------------------------------
# Test 1: local config/llm.yaml takes priority when it exists
# ---------------------------------------------------------------------------
def test_uses_local_yaml_when_present():
"""When config/llm.yaml exists locally, super().__init__ is called with that path."""
import scripts.llm_router as shim_mod
from circuitforge_core.llm import LLMRouter as _CoreLLMRouter
local_path = Path(shim_mod.__file__).parent.parent / "config" / "llm.yaml"
user_path = Path.home() / ".config" / "circuitforge" / "llm.yaml"
def fake_exists(self):
return self == local_path # only the local path "exists"
captured = {}
def fake_core_init(self, config_path=None):
captured["config_path"] = config_path
self.config = {}
with patch.object(Path, "exists", fake_exists), \
patch.object(_CoreLLMRouter, "__init__", fake_core_init):
import importlib
import scripts.llm_router as mod
importlib.reload(mod)
mod.LLMRouter()
assert captured.get("config_path") == local_path, (
f"Expected super().__init__ to be called with local path {local_path}, "
f"got {captured.get('config_path')}"
)
# ---------------------------------------------------------------------------
# Test 2: falls through to env-var auto-config when neither yaml exists
# ---------------------------------------------------------------------------
def test_falls_through_to_env_when_no_yamls():
"""When no yaml files exist, super().__init__ is called with no args (env-var path)."""
import scripts.llm_router as shim_mod
from circuitforge_core.llm import LLMRouter as _CoreLLMRouter
captured = {}
def fake_exists(self):
return False # no yaml files exist anywhere
def fake_core_init(self, config_path=None):
# Record whether a path was passed
captured["config_path"] = config_path
captured["called"] = True
self.config = {}
with patch.object(Path, "exists", fake_exists), \
patch.object(_CoreLLMRouter, "__init__", fake_core_init):
import importlib
import scripts.llm_router as mod
importlib.reload(mod)
mod.LLMRouter()
assert captured.get("called"), "super().__init__ was never called"
# When called with no args, config_path defaults to None in our mock,
# meaning the shim correctly fell through to env-var auto-config
assert captured.get("config_path") is None, (
f"Expected super().__init__ to be called with no explicit path (None), "
f"got {captured.get('config_path')}"
)
# ---------------------------------------------------------------------------
# Test 3: module-level complete() singleton is only instantiated once
# ---------------------------------------------------------------------------
def test_complete_singleton_is_reused():
"""complete() reuses the same LLMRouter instance across multiple calls."""
import importlib
import scripts.llm_router as mod
importlib.reload(mod)
# Reset singleton
mod._router = None
instantiation_count = [0]
original_class = mod.LLMRouter
class CountingRouter(original_class):
def __init__(self):
instantiation_count[0] += 1
# Bypass real __init__ to avoid needing config files
self.config = {}
def complete(self, prompt, system=None):
return "OK"
# Patch the class in the module
mod.LLMRouter = CountingRouter
mod._router = None
result1 = mod.complete("first call")
result2 = mod.complete("second call")
assert result1 == "OK"
assert result2 == "OK"
assert instantiation_count[0] == 1, (
f"Expected LLMRouter to be instantiated exactly once, "
f"got {instantiation_count[0]} instantiation(s)"
)
# Restore
mod.LLMRouter = original_class


@@ -0,0 +1,80 @@
"""Tests: preflight writes OLLAMA_HOST to .env when Ollama is adopted from host."""
import sys
from pathlib import Path
from unittest.mock import patch, call
sys.path.insert(0, str(Path(__file__).parent.parent))
import scripts.preflight as pf
def _make_ports(ollama_external: bool = True, ollama_port: int = 11434) -> dict:
"""Build a minimal ports dict as returned by preflight's port-scanning logic."""
return {
"ollama": {
"resolved": ollama_port,
"external": ollama_external,
"stub_port": 54321,
"env_var": "OLLAMA_PORT",
"adoptable": True,
},
"streamlit": {
"resolved": 8502,
"external": False,
"stub_port": 8502,
"env_var": "STREAMLIT_PORT",
"adoptable": False,
},
}
def _capture_env_updates(ports: dict) -> dict:
"""Run the env_updates construction block from preflight.main() and return the result.
We extract this logic from main() so tests can call it directly without
needing to simulate the full CLI argument parsing and system probe flow.
The block under test is the `if not args.check_only:` section.
"""
captured = {}
def fake_write_env(updates: dict) -> None:
captured.update(updates)
with patch.object(pf, "write_env", side_effect=fake_write_env), \
patch.object(pf, "update_llm_yaml"), \
patch.object(pf, "write_compose_override"):
# Replicate the env_updates block from preflight.main() as faithfully as possible
env_updates: dict[str, str] = {i["env_var"]: str(i["stub_port"]) for i in ports.values()}
env_updates["RECOMMENDED_PROFILE"] = "single-gpu"
# ---- Code under test: the OLLAMA_HOST adoption block ----
ollama_info = ports.get("ollama")
if ollama_info and ollama_info.get("external"):
env_updates["OLLAMA_HOST"] = f"http://host.docker.internal:{ollama_info['resolved']}"
# ---------------------------------------------------------
pf.write_env(env_updates)
return captured
def test_ollama_host_written_when_adopted():
"""OLLAMA_HOST is added when Ollama is adopted from the host (external=True)."""
ports = _make_ports(ollama_external=True, ollama_port=11434)
result = _capture_env_updates(ports)
assert "OLLAMA_HOST" in result
assert result["OLLAMA_HOST"] == "http://host.docker.internal:11434"
def test_ollama_host_not_written_when_docker_managed():
"""OLLAMA_HOST is NOT added when Ollama runs in Docker (external=False)."""
ports = _make_ports(ollama_external=False)
result = _capture_env_updates(ports)
assert "OLLAMA_HOST" not in result
def test_ollama_host_reflects_adopted_port():
"""OLLAMA_HOST uses the actual adopted port, not the default."""
ports = _make_ports(ollama_external=True, ollama_port=11500)
result = _capture_env_updates(ports)
assert result["OLLAMA_HOST"] == "http://host.docker.internal:11500"
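The block `_capture_env_updates` replicates could equally be factored into a pure helper, which would let these tests drop the patching scaffolding entirely. A sketch under that assumption (`build_env_updates` is a hypothetical name; preflight.py keeps this logic inline in `main()`):

```python
def build_env_updates(ports: dict, profile: str = "single-gpu") -> dict:
    """Map each service's env var to its stub port, then add the
    OLLAMA_HOST override when Ollama was adopted from the host."""
    env_updates = {i["env_var"]: str(i["stub_port"]) for i in ports.values()}
    env_updates["RECOMMENDED_PROFILE"] = profile
    ollama = ports.get("ollama")
    if ollama and ollama.get("external"):
        # Containers reach the host-run Ollama via the Docker host gateway
        env_updates["OLLAMA_HOST"] = f"http://host.docker.internal:{ollama['resolved']}"
    return env_updates
```

With this shape, all three tests reduce to one call plus one assertion each.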


@@ -109,24 +109,33 @@ def test_missing_budget_logs_warning(tmp_db, caplog):
ts.LLM_TASK_TYPES = frozenset(original)
-def test_cpu_only_system_gets_unlimited_vram(tmp_db, monkeypatch):
+def test_cpu_only_system_creates_scheduler(tmp_db, monkeypatch):
-"""_available_vram is 999.0 when _get_gpus() returns empty list."""
+"""Scheduler constructs without error when _get_gpus() returns empty list.
-# Patch the module-level _get_gpus in task_scheduler (not preflight)
-# so __init__'s _ts_mod._get_gpus() call picks up the mock.
+LocalScheduler has no VRAM gating; it runs tasks regardless of GPU count.
+VRAM-aware scheduling is handled by circuitforge_orch's coordinator.
+"""
monkeypatch.setattr("scripts.task_scheduler._get_gpus", lambda: [])
s = TaskScheduler(tmp_db, _noop_run_task)
-assert s._available_vram == 999.0
+# Scheduler still has correct budgets configured; no VRAM attribute expected
+# Scheduler constructed successfully; budgets contain all LLM task types.
+# Does not assert exact values -- a sibling test may write a config override
+# to the shared pytest tmp dir, causing _load_config_overrides to pick it up.
+assert set(s._budgets.keys()) >= LLM_TASK_TYPES
-def test_gpu_vram_summed_across_all_gpus(tmp_db, monkeypatch):
+def test_gpu_detection_does_not_affect_local_scheduler(tmp_db, monkeypatch):
-"""_available_vram sums vram_total_gb across all detected GPUs."""
+"""LocalScheduler ignores GPU VRAM — it has no _available_vram attribute.
+VRAM-gated concurrency requires circuitforge_orch (Paid tier).
+"""
fake_gpus = [
{"name": "RTX 3090", "vram_total_gb": 24.0, "vram_free_gb": 20.0},
{"name": "RTX 3090", "vram_total_gb": 24.0, "vram_free_gb": 18.0},
]
monkeypatch.setattr("scripts.task_scheduler._get_gpus", lambda: fake_gpus)
s = TaskScheduler(tmp_db, _noop_run_task)
-assert s._available_vram == 48.0
+assert not hasattr(s, "_available_vram")
def test_enqueue_adds_taskspec_to_deque(tmp_db):
@@ -206,40 +215,37 @@ def _make_recording_run_task(log: list, done_event: threading.Event, expected: i
return _run
-def _start_scheduler(tmp_db, run_task_fn, available_vram=999.0):
+def _start_scheduler(tmp_db, run_task_fn):
s = TaskScheduler(tmp_db, run_task_fn)
-s._available_vram = available_vram
s.start()
return s
# ── Tests ─────────────────────────────────────────────────────────────────────
-def test_deepest_queue_wins_first_slot(tmp_db):
+def test_all_task_types_complete(tmp_db):
-"""Type with more queued tasks starts first when VRAM only fits one type."""
+"""Scheduler runs tasks from multiple types; all complete.
+LocalScheduler runs type batches concurrently (no VRAM gating).
+VRAM-gated sequential scheduling requires circuitforge_orch.
+"""
log, done = [], threading.Event()
-# Build scheduler but DO NOT start it yet — enqueue all tasks first
-# so the scheduler sees the full picture on its very first wake.
run_task_fn = _make_recording_run_task(log, done, 4)
s = TaskScheduler(tmp_db, run_task_fn)
-s._available_vram = 3.0 # fits cover_letter (2.5) but not +company_research (5.0)
-# Enqueue cover_letter (3 tasks) and company_research (1 task) before start.
-# cover_letter has the deeper queue and must win the first batch slot.
for i in range(3):
s.enqueue(i + 1, "cover_letter", i + 1, None)
s.enqueue(4, "company_research", 4, None)
-s.start() # scheduler now sees all tasks atomically on its first iteration
+s.start()
assert done.wait(timeout=5.0), "timed out — not all 4 tasks completed"
s.shutdown()
assert len(log) == 4
-cl = [i for i, (_, t) in enumerate(log) if t == "cover_letter"]
+cl = [t for _, t in log if t == "cover_letter"]
-cr = [i for i, (_, t) in enumerate(log) if t == "company_research"]
+cr = [t for _, t in log if t == "company_research"]
assert len(cl) == 3 and len(cr) == 1
-assert max(cl) < min(cr), "All cover_letter tasks must finish before company_research starts"
def test_fifo_within_type(tmp_db):
@@ -256,8 +262,8 @@ def test_fifo_within_type(tmp_db):
assert [task_id for task_id, _ in log] == [10, 20, 30]
-def test_concurrent_batches_when_vram_allows(tmp_db):
+def test_concurrent_batches_different_types(tmp_db):
-"""Two type batches start simultaneously when VRAM fits both."""
+"""Two type batches run concurrently (LocalScheduler has no VRAM gating)."""
started = {"cover_letter": threading.Event(), "company_research": threading.Event()}
all_done = threading.Event()
log = []
@@ -268,8 +274,7 @@ def test_concurrent_batches_when_vram_allows(tmp_db):
if len(log) >= 2:
all_done.set()
-# VRAM=10.0 fits both cover_letter (2.5) and company_research (5.0) simultaneously
-s = _start_scheduler(tmp_db, run_task, available_vram=10.0)
+s = _start_scheduler(tmp_db, run_task)
s.enqueue(1, "cover_letter", 1, None)
s.enqueue(2, "company_research", 2, None)
@@ -307,8 +312,15 @@ def test_new_tasks_picked_up_mid_batch(tmp_db):
assert log == [1, 2]
-def test_worker_crash_releases_vram(tmp_db):
+@pytest.mark.filterwarnings("ignore::pytest.PytestUnhandledThreadExceptionWarning")
-"""If _run_task raises, _reserved_vram returns to 0 and scheduler continues."""
+def test_worker_crash_does_not_stall_scheduler(tmp_db):
+"""If _run_task raises, the scheduler continues processing the next task.
+The batch_worker intentionally lets the RuntimeError propagate to the thread
+boundary (so LocalScheduler can detect crash vs. normal exit). This produces
+a PytestUnhandledThreadExceptionWarning -- suppressed here because it is the
+expected behavior under test.
+"""
log, done = [], threading.Event()
def run_task(db_path, task_id, task_type, job_id, params):
@@ -317,16 +329,15 @@ def test_worker_crash_releases_vram(tmp_db):
log.append(task_id)
done.set()
-s = _start_scheduler(tmp_db, run_task, available_vram=3.0)
+s = _start_scheduler(tmp_db, run_task)
s.enqueue(1, "cover_letter", 1, None)
s.enqueue(2, "cover_letter", 2, None)
assert done.wait(timeout=5.0), "timed out — task 2 never completed after task 1 crash"
s.shutdown()
-# Second task still ran, VRAM was released
+# Second task still ran despite first crashing
assert 2 in log
-assert s._reserved_vram == 0.0
def test_get_scheduler_returns_singleton(tmp_db):
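The crash contract described in the docstring above (the worker lets the exception escape to the thread boundary, and the scheduler tells a crash from a clean exit afterwards) is a general pattern worth seeing in isolation. A standalone sketch, not LocalScheduler's actual internals; the sentinel-based detection is one of several ways to implement it:

```python
import threading


def _batch_worker(task_fn, tasks, results):
    # Deliberately no try/except: an exception kills the thread, and the
    # missing sentinel below is how the supervisor detects the crash.
    for t in tasks:
        results.append(task_fn(t))
    results.append("__done__")  # reached only on clean exit


def run_supervised(task_fn, tasks):
    """Run one batch in a worker thread; return (results, crashed)."""
    results = []
    worker = threading.Thread(target=_batch_worker, args=(task_fn, tasks, results))
    worker.start()
    worker.join()  # thread is dead either way; inspect how it ended
    crashed = not results or results[-1] != "__done__"
    return (results if crashed else results[:-1]), crashed
```

Under pytest, the escaping exception is what triggers `PytestUnhandledThreadExceptionWarning`, which is why the test above suppresses it explicitly.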


@@ -66,8 +66,12 @@ def test_sync_cookie_prgn_switch_param_overrides_yaml(profile_yaml, monkeypatch)
assert any("prgn_ui=streamlit" in s for s in injected)
-def test_sync_cookie_downgrades_tier_resets_to_streamlit(profile_yaml, monkeypatch):
+def test_sync_cookie_free_tier_keeps_vue(profile_yaml, monkeypatch):
-"""Free-tier user with vue preference gets reset to streamlit."""
+"""Free-tier user with vue preference keeps vue (vue_ui_beta is free tier).
+Previously this test verified a downgrade to streamlit. Vue SPA was opened
+to free tier in issue #20 — the downgrade path no longer triggers.
+"""
import yaml as _yaml
profile_yaml.write_text(_yaml.dump({"name": "T", "ui_preference": "vue"}))
@@ -80,8 +84,8 @@ def test_sync_cookie_downgrades_tier_resets_to_streamlit(profile_yaml, monkeypat
sync_ui_cookie(profile_yaml, tier="free")
saved = _yaml.safe_load(profile_yaml.read_text())
-assert saved["ui_preference"] == "streamlit"
+assert saved["ui_preference"] == "vue"
-assert any("prgn_ui=streamlit" in s for s in injected)
+assert any("prgn_ui=vue" in s for s in injected)
def test_switch_ui_writes_yaml_and_calls_sync(profile_yaml, monkeypatch):
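The tier logic `sync_ui_cookie` now applies can be reduced to a tiny pure function. A hypothetical sketch (the function and constant names here are illustrative, not the module's real ones):

```python
FREE_TIER_UIS = {"streamlit", "vue"}  # vue_ui_beta opened to free tier (issue #20)


def resolve_ui_preference(preference: str, tier: str) -> str:
    """Return the UI the prgn_ui cookie should carry for this user."""
    if tier == "free" and preference not in FREE_TIER_UIS:
        return "streamlit"  # downgrade only genuinely gated UIs
    return preference
```

The updated test asserts exactly the first row below: a free-tier `vue` preference passes through unchanged.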


@@ -236,7 +236,7 @@ class TestWizardStep:
search_path = tmp_path / "config" / "search_profiles.yaml"
_write_user_yaml(yaml_path, {})
with patch("dev_api._wizard_yaml_path", return_value=str(yaml_path)):
-with patch("dev_api.SEARCH_PREFS_PATH", search_path):
+with patch("dev_api._search_prefs_path", return_value=search_path):
r = client.post("/api/wizard/step",
json={"step": 6, "data": {
"titles": ["Software Engineer", "Backend Developer"],


@@ -121,7 +121,8 @@ def test_byok_false_preserves_original_gating():
# ── Vue UI Beta & Demo Tier tests ──────────────────────────────────────────────
def test_vue_ui_beta_free_tier():
-assert can_use("free", "vue_ui_beta") is False
+# Vue SPA is open to all tiers (issue #20 — beta restriction removed)
+assert can_use("free", "vue_ui_beta") is True
def test_vue_ui_beta_paid_tier():
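A minimal tier-gating sketch consistent with these assertions. Only the `vue_ui_beta` row is taken from the tests; `can_use`'s real implementation presumably reads a richer feature matrix, and the second entry is a made-up illustration:

```python
# Lowest tier allowed to use each feature.
FEATURE_MIN_TIER = {
    "vue_ui_beta": "free",   # beta restriction removed in issue #20
    "local_finetune": "paid",  # illustrative entry, not from the source
}
TIER_RANK = {"free": 0, "paid": 1}


def can_use(tier: str, feature: str) -> bool:
    required = FEATURE_MIN_TIER.get(feature)
    if required is None:
        return False  # unknown features are denied by default
    return TIER_RANK[tier] >= TIER_RANK[required]
```

Moving the gate from code to a data table like this is what makes a one-line change (free → paid or back) reviewable in a diff.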


@@ -16,12 +16,14 @@ import { computed, onMounted } from 'vue'
import { RouterView, useRoute } from 'vue-router'
import { useMotion } from './composables/useMotion'
import { useHackerMode, useKonamiCode } from './composables/useEasterEgg'
import { useTheme } from './composables/useTheme'
import AppNav from './components/AppNav.vue'
import { useDigestStore } from './stores/digest'
const motion = useMotion()
const route = useRoute()
const { toggle, restore } = useHackerMode()
const { initTheme } = useTheme()
const digestStore = useDigestStore()
const isWizard = computed(() => route.path.startsWith('/setup'))
@@ -29,7 +31,8 @@ const isWizard = computed(() => route.path.startsWith('/setup'))
useKonamiCode(toggle)
onMounted(() => {
-restore() // re-apply hacker mode from localStorage on hard reload
+initTheme() // apply persisted theme (hacker mode takes priority inside initTheme)
+restore() // kept for hacker mode re-entry on hard reload (initTheme handles it, belt+suspenders)
digestStore.fetchAll() // populate badge immediately, before user visits Digest tab
})
</script>


@@ -73,11 +73,11 @@
}
/* Accessible Solarpunk dark (system dark mode)
-Activates when OS/browser is in dark mode.
-Uses :not([data-theme="hacker"]) so the Konami easter
-egg always wins over the system preference. */
+Activates when OS/browser is in dark mode AND no
+explicit theme is selected. Explicit [data-theme="*"]
+always wins over the system preference. */
@media (prefers-color-scheme: dark) {
-:root:not([data-theme="hacker"]) {
+:root:not([data-theme]) {
/* Brand — lighter greens readable on dark surfaces */
--color-primary: #6ab870;
--color-primary-hover: #7ecb84;
@@ -161,6 +161,153 @@
--color-accent-glow-lg: rgba(0, 255, 65, 0.6);
}
/* ── Explicit light — forces light even on dark-OS ─ */
[data-theme="light"] {
--color-primary: #2d5a27;
--color-primary-hover: #234820;
--color-primary-light: #e8f2e7;
--color-surface: #eaeff8;
--color-surface-alt: #dde4f0;
--color-surface-raised: #f5f7fc;
--color-border: #a8b8d0;
--color-border-light: #ccd5e6;
--color-text: #1a2338;
--color-text-muted: #4a5c7a;
--color-text-inverse: #eaeff8;
--color-accent: #c4732a;
--color-accent-hover: #a85c1f;
--color-accent-light: #fdf0e4;
--color-success: #3a7a32;
--color-error: #c0392b;
--color-warning: #d4891a;
--color-info: #1e6091;
--shadow-sm: 0 1px 3px rgba(26, 35, 56, 0.08), 0 1px 2px rgba(26, 35, 56, 0.04);
--shadow-md: 0 4px 12px rgba(26, 35, 56, 0.1), 0 2px 4px rgba(26, 35, 56, 0.06);
--shadow-lg: 0 10px 30px rgba(26, 35, 56, 0.12), 0 4px 8px rgba(26, 35, 56, 0.06);
}
/* ── Explicit dark — forces dark even on light-OS ── */
[data-theme="dark"] {
--color-primary: #6ab870;
--color-primary-hover: #7ecb84;
--color-primary-light: #162616;
--color-surface: #16202e;
--color-surface-alt: #1e2a3a;
--color-surface-raised: #263547;
--color-border: #2d4060;
--color-border-light: #233352;
--color-text: #e4eaf5;
--color-text-muted: #8da0bc;
--color-text-inverse: #16202e;
--color-accent: #e8a84a;
--color-accent-hover: #f5bc60;
--color-accent-light: #2d1e0a;
--color-success: #5eb85e;
--color-error: #e05252;
--color-warning: #e8a84a;
--color-info: #4da6e8;
--shadow-sm: 0 1px 3px rgba(0, 0, 0, 0.3), 0 1px 2px rgba(0, 0, 0, 0.2);
--shadow-md: 0 4px 12px rgba(0, 0, 0, 0.35), 0 2px 4px rgba(0, 0, 0, 0.2);
--shadow-lg: 0 10px 30px rgba(0, 0, 0, 0.4), 0 4px 8px rgba(0, 0, 0, 0.2);
}
/* ── Solarized Dark ──────────────────────────────── */
/* Ethan Schoonover's Solarized palette (dark variant) */
[data-theme="solarized-dark"] {
--color-primary: #2aa198; /* cyan — used as primary brand color */
--color-primary-hover: #35b8ad;
--color-primary-light: #002b36;
--color-surface: #002b36; /* base03 */
--color-surface-alt: #073642; /* base02 */
--color-surface-raised: #0d4352;
--color-border: #073642;
--color-border-light: #0a4a5a;
--color-text: #839496; /* base0 */
--color-text-muted: #657b83; /* base00 */
--color-text-inverse: #002b36;
--color-accent: #b58900; /* yellow */
--color-accent-hover: #cb9f10;
--color-accent-light: #1a1300;
--color-success: #859900; /* green */
--color-error: #dc322f; /* red */
--color-warning: #b58900; /* yellow */
--color-info: #268bd2; /* blue */
--shadow-sm: 0 1px 3px rgba(0, 0, 0, 0.4), 0 1px 2px rgba(0, 0, 0, 0.3);
--shadow-md: 0 4px 12px rgba(0, 0, 0, 0.45), 0 2px 4px rgba(0, 0, 0, 0.3);
--shadow-lg: 0 10px 30px rgba(0, 0, 0, 0.5), 0 4px 8px rgba(0, 0, 0, 0.3);
}
/* ── Solarized Light ─────────────────────────────── */
[data-theme="solarized-light"] {
--color-primary: #2aa198; /* cyan */
--color-primary-hover: #1e8a82;
--color-primary-light: #eee8d5;
--color-surface: #fdf6e3; /* base3 */
--color-surface-alt: #eee8d5; /* base2 */
--color-surface-raised: #fffdf7;
--color-border: #d3c9b0;
--color-border-light: #e4dacc;
--color-text: #657b83; /* base00 */
--color-text-muted: #839496; /* base0 */
--color-text-inverse: #fdf6e3;
--color-accent: #b58900; /* yellow */
--color-accent-hover: #9a7300;
--color-accent-light: #fdf0c0;
--color-success: #859900; /* green */
--color-error: #dc322f; /* red */
--color-warning: #b58900; /* yellow */
--color-info: #268bd2; /* blue */
--shadow-sm: 0 1px 3px rgba(101, 123, 131, 0.12), 0 1px 2px rgba(101, 123, 131, 0.08);
--shadow-md: 0 4px 12px rgba(101, 123, 131, 0.15), 0 2px 4px rgba(101, 123, 131, 0.08);
--shadow-lg: 0 10px 30px rgba(101, 123, 131, 0.18), 0 4px 8px rgba(101, 123, 131, 0.08);
}
/* ── Colorblind-safe (deuteranopia/protanopia) ────── */
/* Avoids red/green confusion. Uses blue+orange as the
primary pair; cyan+magenta as semantic differentiators.
Based on Wong (2011) 8-color colorblind-safe palette. */
[data-theme="colorblind"] {
--color-primary: #0072B2; /* blue — safe primary */
--color-primary-hover: #005a8e;
--color-primary-light: #e0f0fa;
--color-surface: #f4f6fb;
--color-surface-alt: #e6eaf4;
--color-surface-raised: #fafbfe;
--color-border: #b0bcd8;
--color-border-light: #cdd5e8;
--color-text: #1a2338;
--color-text-muted: #4a5c7a;
--color-text-inverse: #f4f6fb;
--color-accent: #E69F00; /* orange — safe secondary */
--color-accent-hover: #c98900;
--color-accent-light: #fdf4dc;
--color-success: #009E73; /* teal-green — distinct from red/green confusion zone */
--color-error: #CC0066; /* magenta-red — distinguishable from green */
--color-warning: #E69F00; /* orange */
--color-info: #56B4E9; /* sky blue */
--shadow-sm: 0 1px 3px rgba(26, 35, 56, 0.08), 0 1px 2px rgba(26, 35, 56, 0.04);
--shadow-md: 0 4px 12px rgba(26, 35, 56, 0.1), 0 2px 4px rgba(26, 35, 56, 0.06);
--shadow-lg: 0 10px 30px rgba(26, 35, 56, 0.12), 0 4px 8px rgba(26, 35, 56, 0.06);
}
/* ── Base resets ─────────────────────────────────── */
*, *::before, *::after { box-sizing: border-box; }


@@ -34,12 +34,31 @@
</button>
</div>
<!-- Theme picker -->
<div class="sidebar__theme" v-if="!isHackerMode">
<label class="sidebar__theme-label" for="theme-select">Theme</label>
<select
id="theme-select"
class="sidebar__theme-select"
:value="currentTheme"
@change="setTheme(($event.target as HTMLSelectElement).value as Theme)"
aria-label="Select theme"
>
<option v-for="opt in THEME_OPTIONS" :key="opt.value" :value="opt.value">
{{ opt.icon }} {{ opt.label }}
</option>
</select>
</div>
<!-- Settings at bottom -->
<div class="sidebar__footer">
<RouterLink to="/settings" class="sidebar__link sidebar__link--footer" active-class="sidebar__link--active">
<Cog6ToothIcon class="sidebar__icon" aria-hidden="true" />
<span class="sidebar__label">Settings</span>
</RouterLink>
<button class="sidebar__classic-btn" @click="switchToClassic" title="Switch to Classic (Streamlit) UI">
Classic
</button>
</div>
</nav>
@@ -76,7 +95,10 @@ import {
} from '@heroicons/vue/24/outline'
import { useDigestStore } from '../stores/digest'
import { useTheme, THEME_OPTIONS, type Theme } from '../composables/useTheme'
const digestStore = useDigestStore()
const { currentTheme, setTheme, restoreTheme } = useTheme()
// Logo click easter egg 9.6: Click the Bird 5× rapidly
const logoClickCount = ref(0)
@@ -101,8 +123,25 @@ const isHackerMode = computed(() =>
)
function exitHackerMode() {
delete document.documentElement.dataset.theme
localStorage.removeItem('cf-hacker-mode')
restoreTheme()
}
const _apiBase = import.meta.env.BASE_URL.replace(/\/$/, '')
async function switchToClassic() {
// Persist the preference via the API so Streamlit reads "streamlit" from user.yaml
// and won't re-set the cookie back to vue (avoids the ?prgn_switch rerun cycle)
try {
await fetch(_apiBase + '/api/settings/ui-preference', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ preference: 'streamlit' }),
})
} catch { /* non-fatal — cookie below is enough for immediate redirect */ }
document.cookie = 'prgn_ui=streamlit; path=/; SameSite=Lax'
// Navigate to root (no query params); Caddy routes to Streamlit based on cookie
window.location.href = window.location.origin + '/'
}
const navLinks = computed(() => [
@@ -272,6 +311,70 @@ const mobileLinks = [
margin: 0;
}
.sidebar__classic-btn {
display: flex;
align-items: center;
width: 100%;
padding: var(--space-2) var(--space-3);
margin-top: var(--space-1);
background: none;
border: none;
border-radius: var(--radius-md);
color: var(--color-text-muted);
font-size: var(--text-xs);
font-weight: 500;
cursor: pointer;
opacity: 0.6;
transition: opacity 150ms, background 150ms;
white-space: nowrap;
}
.sidebar__classic-btn:hover {
opacity: 1;
background: var(--color-surface-alt);
}
/* ── Theme picker ───────────────────────────────────── */
.sidebar__theme {
padding: var(--space-2) var(--space-3);
border-top: 1px solid var(--color-border-light);
display: flex;
flex-direction: column;
gap: var(--space-1);
}
.sidebar__theme-label {
font-size: var(--text-xs);
color: var(--color-text-muted);
font-weight: 500;
text-transform: uppercase;
letter-spacing: 0.05em;
}
.sidebar__theme-select {
width: 100%;
padding: var(--space-2) var(--space-3);
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text);
font-size: var(--text-sm);
font-family: var(--font-body);
cursor: pointer;
appearance: auto;
transition: border-color 150ms ease, background 150ms ease;
}
.sidebar__theme-select:hover {
border-color: var(--color-primary);
background: var(--color-surface-raised);
}
.sidebar__theme-select:focus-visible {
outline: 2px solid var(--color-accent);
outline-offset: 2px;
}
/* ── Mobile tab bar (<1024px) ───────────────────────── */ /* ── Mobile tab bar (<1024px) ───────────────────────── */
.app-tabbar {
display: none; /* hidden on desktop */


@@ -56,6 +56,49 @@
<span v-if="gaps.length > 6" class="gaps-more">+{{ gaps.length - 6 }}</span>
</div>
<!-- Resume Highlights -->
<div
v-if="resumeSkills.length || resumeDomains.length || resumeKeywords.length"
class="resume-highlights"
>
<button class="section-toggle" @click="highlightsExpanded = !highlightsExpanded">
<span class="section-toggle__label">My Resume Highlights</span>
<span class="section-toggle__icon" aria-hidden="true">{{ highlightsExpanded ? '▲' : '▼' }}</span>
</button>
<div v-if="highlightsExpanded" class="highlights-body">
<div v-if="resumeSkills.length" class="chips-group">
<span class="chips-group__label">Skills</span>
<div class="chips-wrap">
<span
v-for="s in resumeSkills" :key="s"
class="hl-chip"
:class="{ 'hl-chip--match': jobMatchSet.has(s.toLowerCase()) }"
>{{ s }}</span>
</div>
</div>
<div v-if="resumeDomains.length" class="chips-group">
<span class="chips-group__label">Domains</span>
<div class="chips-wrap">
<span
v-for="d in resumeDomains" :key="d"
class="hl-chip"
:class="{ 'hl-chip--match': jobMatchSet.has(d.toLowerCase()) }"
>{{ d }}</span>
</div>
</div>
<div v-if="resumeKeywords.length" class="chips-group">
<span class="chips-group__label">Keywords</span>
<div class="chips-wrap">
<span
v-for="k in resumeKeywords" :key="k"
class="hl-chip"
:class="{ 'hl-chip--match': jobMatchSet.has(k.toLowerCase()) }"
>{{ k }}</span>
</div>
</div>
</div>
</div>
<a v-if="job.url" :href="job.url" target="_blank" rel="noopener noreferrer" class="job-details__link">
View listing
</a>
@@ -151,6 +194,61 @@
<!-- ATS Resume Optimizer -->
<ResumeOptimizerPanel :job-id="props.jobId" />
<!-- Application Q&A -->
<div class="qa-section">
<button class="section-toggle" @click="qaExpanded = !qaExpanded">
<span class="section-toggle__label">Application Q&amp;A</span>
<span v-if="qaItems.length" class="qa-count">{{ qaItems.length }}</span>
<span class="section-toggle__icon" aria-hidden="true">{{ qaExpanded ? '▲' : '▼' }}</span>
</button>
<div v-if="qaExpanded" class="qa-body">
<p v-if="!qaItems.length" class="qa-empty">
No questions yet; add one below to get LLM-suggested answers.
</p>
<div v-for="(item, i) in qaItems" :key="item.id" class="qa-item">
<div class="qa-item__header">
<span class="qa-item__q">{{ item.question }}</span>
<button class="qa-item__del" aria-label="Remove question" @click="removeQA(i)">×</button>
</div>
<textarea
class="qa-item__answer"
:value="item.answer"
placeholder="Your answer…"
rows="3"
@input="updateAnswer(item.id, ($event.target as HTMLTextAreaElement).value)"
/>
<button
class="btn-ghost btn-ghost--sm qa-suggest-btn"
:disabled="suggesting === item.id"
@click="suggestAnswer(item)"
>
{{ suggesting === item.id ? '✨ Thinking…' : '✨ Suggest' }}
</button>
</div>
<div class="qa-add">
<input
v-model="newQuestion"
class="qa-add__input"
placeholder="Add a question from the application…"
@keydown.enter.prevent="addQA"
/>
<button class="btn-ghost btn-ghost--sm" :disabled="!newQuestion.trim()" @click="addQA">Add</button>
</div>
<button
v-if="qaItems.length"
class="btn-ghost qa-save-btn"
:disabled="qaSaved || qaSaving"
@click="saveQA"
>
{{ qaSaving ? 'Saving…' : (qaSaved ? '✓ Saved' : 'Save All') }}
</button>
</div>
</div>
<!-- Bottom action bar -->
<div class="workspace__actions">
<button
@@ -359,6 +457,96 @@ async function rejectListing() {
setTimeout(() => emit('job-removed'), 1000)
}
// Resume highlights
const resumeSkills = ref<string[]>([])
const resumeDomains = ref<string[]>([])
const resumeKeywords = ref<string[]>([])
const highlightsExpanded = ref(false)
// Words from the resume that also appear in the job description text
const jobMatchSet = computed<Set<string>>(() => {
const desc = (job.value?.description ?? '').toLowerCase()
const all = [...resumeSkills.value, ...resumeDomains.value, ...resumeKeywords.value]
return new Set(all.filter(t => desc.includes(t.toLowerCase())))
})
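The matching rule inside `jobMatchSet` can be sketched as a pure function (names here are illustrative, not part of the component):

```typescript
// Pure sketch of the case-insensitive matching jobMatchSet performs: a resume
// term counts as a match when it appears anywhere in the lowercased job
// description. This is substring matching, so very short terms (e.g. "go")
// can over-match; the component accepts that tradeoff.
function matchTerms(description: string, terms: string[]): Set<string> {
  const desc = description.toLowerCase()
  return new Set(terms.filter(t => desc.includes(t.toLowerCase())))
}
```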
async function fetchResume() {
const { data } = await useApiFetch<{ skills?: string[]; domains?: string[]; keywords?: string[] }>(
'/api/settings/resume',
)
if (!data) return
resumeSkills.value = data.skills ?? []
resumeDomains.value = data.domains ?? []
resumeKeywords.value = data.keywords ?? []
if (resumeSkills.value.length || resumeDomains.value.length || resumeKeywords.value.length) {
highlightsExpanded.value = true
}
}
// Application Q&A
interface QAItem { id: string; question: string; answer: string }
const qaItems = ref<QAItem[]>([])
const qaExpanded = ref(false)
const qaSaved = ref(true)
const qaSaving = ref(false)
const newQuestion = ref('')
const suggesting = ref<string | null>(null)
function addQA() {
const q = newQuestion.value.trim()
if (!q) return
qaItems.value = [...qaItems.value, { id: crypto.randomUUID(), question: q, answer: '' }]
newQuestion.value = ''
qaSaved.value = false
qaExpanded.value = true
}
function removeQA(index: number) {
qaItems.value = qaItems.value.filter((_, i) => i !== index)
qaSaved.value = false
}
function updateAnswer(id: string, value: string) {
qaItems.value = qaItems.value.map(q => q.id === id ? { ...q, answer: value } : q)
qaSaved.value = false
}
async function saveQA() {
qaSaving.value = true
const { error } = await useApiFetch(`/api/jobs/${props.jobId}/qa`, {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ items: qaItems.value }),
})
qaSaving.value = false
if (error) { showToast('Save failed — please try again'); return }
qaSaved.value = true
}
async function suggestAnswer(item: QAItem) {
suggesting.value = item.id
const { data, error } = await useApiFetch<{ answer: string }>(`/api/jobs/${props.jobId}/qa/suggest`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ question: item.question }),
})
suggesting.value = null
if (error || !data?.answer) { showToast('Suggestion failed — check your LLM backend'); return }
qaItems.value = qaItems.value.map(q => q.id === item.id ? { ...q, answer: data.answer } : q)
qaSaved.value = false
}
async function fetchQA() {
const { data } = await useApiFetch<{ items: QAItem[] }>(`/api/jobs/${props.jobId}/qa`)
if (data?.items?.length) {
qaItems.value = data.items
qaExpanded.value = true
}
}
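The Q&amp;A handlers above share one immutable-update pattern, shown here as a standalone sketch (the helper name is hypothetical):

```typescript
// Sketch of the update used by updateAnswer and suggestAnswer: rebuild the
// array with the matching item replaced rather than mutating it in place, so
// Vue's reactivity (and the qaSaved dirty flag) reliably observe the change.
interface QAItem { id: string; question: string; answer: string }

function withAnswer(items: QAItem[], id: string, answer: string): QAItem[] {
  return items.map(q => (q.id === id ? { ...q, answer } : q))
}
```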
// Toast
const toast = ref<string | null>(null)
@@ -406,6 +594,10 @@ onMounted(async () => {
await fetchJob()
loadingJob.value = false
// Load resume highlights and saved Q&A in parallel
fetchResume()
fetchQA()
// Check if a generation task is already in flight
if (clState.value === 'none') {
const { data } = await useApiFetch<{ status: string; stage: string | null }>(`/api/jobs/${props.jobId}/cover_letter/task`)
@@ -843,6 +1035,205 @@ declare module '../stores/review' {
.toast-enter-active, .toast-leave-active { transition: opacity 250ms ease, transform 250ms ease; }
.toast-enter-from, .toast-leave-to { opacity: 0; transform: translateX(-50%) translateY(8px); }
/* ── Resume Highlights ───────────────────────────────────────────────── */
.resume-highlights {
border-top: 1px solid var(--color-border-light);
padding-top: var(--space-3);
}
.section-toggle {
display: flex;
align-items: center;
gap: var(--space-2);
width: 100%;
background: none;
border: none;
cursor: pointer;
padding: 0;
text-align: left;
color: var(--color-text-muted);
}
.section-toggle__label {
font-size: var(--text-xs);
font-weight: 700;
text-transform: uppercase;
letter-spacing: 0.04em;
flex: 1;
}
.section-toggle__icon {
font-size: var(--text-xs);
}
.highlights-body {
display: flex;
flex-direction: column;
gap: var(--space-2);
margin-top: var(--space-2);
}
.chips-group { display: flex; flex-direction: column; gap: 4px; }
.chips-group__label {
font-size: 10px;
font-weight: 700;
text-transform: uppercase;
letter-spacing: 0.06em;
color: var(--color-text-muted);
opacity: 0.7;
}
.chips-wrap { display: flex; flex-wrap: wrap; gap: 4px; }
.hl-chip {
padding: 2px var(--space-2);
border-radius: 999px;
font-size: 11px;
background: var(--color-surface-alt);
border: 1px solid var(--color-border-light);
color: var(--color-text-muted);
}
.hl-chip--match {
background: rgba(39, 174, 96, 0.10);
border-color: rgba(39, 174, 96, 0.35);
color: var(--color-success);
font-weight: 600;
}
/* ── Application Q&A ─────────────────────────────────────────────────── */
.qa-section {
background: var(--color-surface-raised);
border: 1px solid var(--color-border-light);
border-radius: var(--radius-lg);
overflow: hidden;
}
.qa-section > .section-toggle {
padding: var(--space-3) var(--space-4);
color: var(--color-text);
}
.qa-section > .section-toggle:hover { background: var(--color-surface-alt); }
.qa-count {
display: inline-flex;
align-items: center;
justify-content: center;
width: 18px;
height: 18px;
border-radius: 50%;
background: var(--app-primary-light);
color: var(--app-primary);
font-size: 10px;
font-weight: 700;
}
.qa-body {
display: flex;
flex-direction: column;
gap: var(--space-3);
padding: var(--space-4);
border-top: 1px solid var(--color-border-light);
}
.qa-empty {
font-size: var(--text-xs);
color: var(--color-text-muted);
text-align: center;
padding: var(--space-2) 0;
}
.qa-item {
display: flex;
flex-direction: column;
gap: var(--space-1);
padding-bottom: var(--space-3);
border-bottom: 1px solid var(--color-border-light);
}
.qa-item:last-of-type { border-bottom: none; }
.qa-item__header {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: var(--space-2);
}
.qa-item__q {
font-size: var(--text-sm);
font-weight: 600;
color: var(--color-text);
line-height: 1.4;
flex: 1;
}
.qa-item__del {
background: none;
border: none;
cursor: pointer;
font-size: var(--text-xs);
color: var(--color-text-muted);
padding: 2px 4px;
flex-shrink: 0;
opacity: 0.5;
transition: opacity 150ms;
}
.qa-item__del:hover { opacity: 1; color: var(--color-error); }
.qa-item__answer {
width: 100%;
padding: var(--space-2) var(--space-3);
border: 1px solid var(--color-border-light);
border-radius: var(--radius-md);
background: var(--color-surface-alt);
color: var(--color-text);
font-family: var(--font-body);
font-size: var(--text-sm);
line-height: 1.5;
resize: vertical;
min-height: 72px;
}
.qa-item__answer:focus {
outline: none;
border-color: var(--app-primary);
}
.qa-suggest-btn { align-self: flex-end; }
.qa-add {
display: flex;
gap: var(--space-2);
align-items: center;
}
.qa-add__input {
flex: 1;
padding: var(--space-2) var(--space-3);
border: 1px solid var(--color-border-light);
border-radius: var(--radius-md);
background: var(--color-surface-alt);
color: var(--color-text);
font-family: var(--font-body);
font-size: var(--text-sm);
min-height: 36px;
}
.qa-add__input:focus {
outline: none;
border-color: var(--app-primary);
}
.qa-add__input::placeholder { color: var(--color-text-muted); }
.qa-save-btn { align-self: flex-end; }
/* ── Responsive ──────────────────────────────────────────────────────── */
@media (max-width: 900px) {

View file

@@ -0,0 +1,412 @@
<template>
<Teleport to="body">
<div class="modal-backdrop" role="dialog" aria-modal="true" :aria-labelledby="`research-title-${jobId}`" @click.self="emit('close')">
<div class="modal-card">
<!-- Header -->
<div class="modal-header">
<h2 :id="`research-title-${jobId}`" class="modal-title">
🔍 {{ jobTitle }} · Company Research
</h2>
<div class="modal-header-actions">
<button v-if="state === 'ready'" class="btn-regen" @click="generate" title="Refresh research">⟳ Refresh</button>
<button class="btn-close" @click="emit('close')" aria-label="Close">✕</button>
</div>
</div>
<!-- Generating state -->
<div v-if="state === 'generating'" class="modal-body modal-body--loading">
<div class="research-spinner" aria-hidden="true" />
<p class="generating-msg">{{ stage ?? 'Researching…' }}</p>
<p class="generating-sub">This takes 30–90 seconds depending on your LLM backend.</p>
</div>
<!-- Error state -->
<div v-else-if="state === 'error'" class="modal-body modal-body--error">
<p>Research generation failed.</p>
<p v-if="errorMsg" class="error-detail">{{ errorMsg }}</p>
<button class="btn-primary-sm" @click="generate">Retry</button>
</div>
<!-- Ready state -->
<div v-else-if="state === 'ready' && brief" class="modal-body">
<p v-if="brief.generated_at" class="generated-at">
Updated {{ fmtDate(brief.generated_at) }}
</p>
<section v-if="brief.company_brief" class="research-section">
<h3 class="section-title">🏢 Company</h3>
<p class="section-body">{{ brief.company_brief }}</p>
</section>
<section v-if="brief.ceo_brief" class="research-section">
<h3 class="section-title">👤 Leadership</h3>
<p class="section-body">{{ brief.ceo_brief }}</p>
</section>
<section v-if="brief.talking_points" class="research-section">
<div class="section-title-row">
<h3 class="section-title">💬 Talking Points</h3>
<button class="btn-copy" @click="copy(brief.talking_points!)" :aria-label="copied ? 'Copied!' : 'Copy talking points'">
{{ copied ? '✓ Copied' : '⎘ Copy' }}
</button>
</div>
<p class="section-body">{{ brief.talking_points }}</p>
</section>
<section v-if="brief.tech_brief" class="research-section">
<h3 class="section-title">Tech Stack</h3>
<p class="section-body">{{ brief.tech_brief }}</p>
</section>
<section v-if="brief.funding_brief" class="research-section">
<h3 class="section-title">💰 Funding & Stage</h3>
<p class="section-body">{{ brief.funding_brief }}</p>
</section>
<section v-if="brief.red_flags" class="research-section research-section--warn">
<h3 class="section-title">Red Flags</h3>
<p class="section-body">{{ brief.red_flags }}</p>
</section>
<section v-if="brief.accessibility_brief" class="research-section">
<h3 class="section-title">Inclusion &amp; Accessibility</h3>
<p class="section-body section-body--private">{{ brief.accessibility_brief }}</p>
<p class="private-note">For your decision-making only; not disclosed in applications.</p>
</section>
</div>
<!-- Empty state (no research, not generating) -->
<div v-else class="modal-body modal-body--empty">
<p>No research yet for this company.</p>
<button class="btn-primary-sm" @click="generate">🔍 Generate Research</button>
</div>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
import { ref, onMounted, onUnmounted } from 'vue'
import { useApiFetch } from '../composables/useApi'
const props = defineProps<{
jobId: number
jobTitle: string
autoGenerate?: boolean
}>()
const emit = defineEmits<{ close: [] }>()
interface ResearchBrief {
company_brief: string | null
ceo_brief: string | null
talking_points: string | null
tech_brief: string | null
funding_brief: string | null
red_flags: string | null
accessibility_brief: string | null
generated_at: string | null
}
type ModalState = 'loading' | 'generating' | 'ready' | 'empty' | 'error'
const state = ref<ModalState>('loading')
const brief = ref<ResearchBrief | null>(null)
const stage = ref<string | null>(null)
const errorMsg = ref<string | null>(null)
const copied = ref(false)
let pollId: ReturnType<typeof setInterval> | null = null
function fmtDate(iso: string) {
const d = new Date(iso)
const diffH = Math.round((Date.now() - d.getTime()) / 3600000)
if (diffH < 1) return 'just now'
if (diffH < 24) return `${diffH}h ago`
if (diffH < 168) return `${Math.floor(diffH / 24)}d ago`
return d.toLocaleDateString([], { month: 'short', day: 'numeric' })
}
async function copy(text: string) {
await navigator.clipboard.writeText(text)
copied.value = true
setTimeout(() => { copied.value = false }, 2000)
}
function stopPoll() {
if (pollId) { clearInterval(pollId); pollId = null }
}
async function pollTask() {
const { data } = await useApiFetch<{ status: string; stage: string | null; message: string | null }>(
`/api/jobs/${props.jobId}/research/task`,
)
if (!data) return
stage.value = data.stage
if (data.status === 'completed') {
stopPoll()
await load()
} else if (data.status === 'failed') {
stopPoll()
state.value = 'error'
errorMsg.value = data.message ?? 'Unknown error'
}
}
async function load() {
const { data, error } = await useApiFetch<ResearchBrief>(`/api/jobs/${props.jobId}/research`)
if (error) {
if (error.kind === 'http' && error.status === 404) {
// Check if a task is running
const { data: task } = await useApiFetch<{ status: string; stage: string | null; message: string | null }>(
`/api/jobs/${props.jobId}/research/task`,
)
if (task && (task.status === 'queued' || task.status === 'running')) {
state.value = 'generating'
stage.value = task.stage
pollId = setInterval(pollTask, 3000)
} else if (props.autoGenerate) {
await generate()
} else {
state.value = 'empty'
}
} else {
state.value = 'error'
errorMsg.value = error.kind === 'http' ? error.detail : error.message
}
return
}
brief.value = data
state.value = 'ready'
}
async function generate() {
state.value = 'generating'
stage.value = null
errorMsg.value = null
stopPoll()
const { error } = await useApiFetch(`/api/jobs/${props.jobId}/research/generate`, { method: 'POST' })
if (error) {
state.value = 'error'
errorMsg.value = error.kind === 'http' ? error.detail : error.message
return
}
pollId = setInterval(pollTask, 3000)
}
function onEsc(e: KeyboardEvent) {
if (e.key === 'Escape') emit('close')
}
onMounted(async () => {
document.addEventListener('keydown', onEsc)
await load()
})
onUnmounted(() => {
document.removeEventListener('keydown', onEsc)
stopPoll()
})
</script>
<style scoped>
.modal-backdrop {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.55);
z-index: 500;
display: flex;
align-items: flex-start;
justify-content: center;
padding: var(--space-8) var(--space-4);
overflow-y: auto;
}
.modal-card {
background: var(--color-surface-raised);
border-radius: var(--radius-lg);
box-shadow: 0 8px 40px rgba(0, 0, 0, 0.3);
width: 100%;
max-width: 620px;
overflow: hidden;
}
.modal-header {
display: flex;
align-items: flex-start;
justify-content: space-between;
gap: var(--space-3);
padding: var(--space-5) var(--space-6);
border-bottom: 1px solid var(--color-border-light);
}
.modal-title {
font-size: 1rem;
font-weight: 700;
color: var(--color-text);
margin: 0;
line-height: 1.3;
}
.modal-header-actions {
display: flex;
align-items: center;
gap: var(--space-2);
flex-shrink: 0;
}
.btn-close {
background: none;
border: none;
cursor: pointer;
font-size: 1rem;
color: var(--color-text-muted);
padding: 2px 6px;
}
.btn-regen {
background: none;
border: 1px solid var(--color-border);
border-radius: var(--radius-sm);
cursor: pointer;
font-size: 0.78rem;
color: var(--color-text-muted);
padding: 2px 8px;
}
.modal-body {
padding: var(--space-6);
display: flex;
flex-direction: column;
gap: var(--space-5);
max-height: 70vh;
overflow-y: auto;
}
.modal-body--loading {
align-items: center;
text-align: center;
padding: var(--space-10) var(--space-6);
gap: var(--space-4);
}
.modal-body--empty {
align-items: center;
text-align: center;
padding: var(--space-10) var(--space-6);
gap: var(--space-4);
color: var(--color-text-muted);
}
.modal-body--error {
align-items: center;
text-align: center;
padding: var(--space-8) var(--space-6);
gap: var(--space-3);
color: var(--color-error);
}
.error-detail {
font-size: 0.8rem;
opacity: 0.8;
}
.research-spinner {
width: 36px;
height: 36px;
border: 3px solid var(--color-border);
border-top-color: var(--color-primary);
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
@keyframes spin { to { transform: rotate(360deg); } }
.generating-msg {
font-weight: 600;
color: var(--color-text);
}
.generating-sub {
font-size: 0.8rem;
color: var(--color-text-muted);
}
.generated-at {
font-size: 0.75rem;
color: var(--color-text-muted);
margin-bottom: calc(-1 * var(--space-2));
}
.research-section {
display: flex;
flex-direction: column;
gap: var(--space-2);
padding-bottom: var(--space-4);
border-bottom: 1px solid var(--color-border-light);
}
.research-section:last-child {
border-bottom: none;
padding-bottom: 0;
}
.research-section--warn .section-title {
color: var(--color-warning);
}
.section-title-row {
display: flex;
align-items: center;
justify-content: space-between;
}
.section-title {
font-size: 0.8rem;
font-weight: 700;
text-transform: uppercase;
letter-spacing: 0.04em;
color: var(--color-text-muted);
margin: 0;
}
.section-body {
font-size: 0.875rem;
color: var(--color-text);
line-height: 1.6;
white-space: pre-wrap;
}
.section-body--private {
font-style: italic;
}
.private-note {
font-size: 0.7rem;
color: var(--color-text-muted);
}
.btn-copy {
background: none;
border: 1px solid var(--color-border);
border-radius: var(--radius-sm);
cursor: pointer;
font-size: 0.72rem;
color: var(--color-text-muted);
padding: 2px 8px;
transition: color 150ms, border-color 150ms;
}
.btn-copy:hover { color: var(--color-primary); border-color: var(--color-primary); }
.btn-primary-sm {
background: var(--color-primary);
color: #fff;
border: none;
border-radius: var(--radius-md);
padding: var(--space-2) var(--space-5);
font-size: 0.875rem;
font-weight: 600;
cursor: pointer;
}
</style>

View file

@@ -13,6 +13,7 @@ const emit = defineEmits<{
move: [jobId: number, preSelectedStage?: PipelineStage]
prep: [jobId: number]
survey: [jobId: number]
research: [jobId: number]
}>()
// Signal state
@@ -180,6 +181,7 @@ const columnColor = computed(() => {
</div>
<footer class="card-footer">
<button class="card-action" @click.stop="emit('move', job.id)">Move to →</button>
<button v-if="['phone_screen', 'interviewing', 'offer'].includes(job.status)" class="card-action" @click.stop="emit('research', job.id)">🔍 Research</button>
<button v-if="['phone_screen', 'interviewing', 'offer'].includes(job.status)" class="card-action" @click.stop="emit('prep', job.id)">Prep →</button>
<button
v-if="['survey', 'phone_screen', 'interviewing', 'offer'].includes(job.status)"

View file

@@ -2,12 +2,15 @@ export type ApiError =
| { kind: 'network'; message: string }
| { kind: 'http'; status: number; detail: string }
// Strip trailing slash so '/peregrine/' + '/api/...' → '/peregrine/api/...'
const _apiBase = import.meta.env.BASE_URL.replace(/\/$/, '')
export async function useApiFetch<T>(
url: string,
opts?: RequestInit,
): Promise<{ data: T | null; error: ApiError | null }> {
try {
const res = await fetch(url, opts)
const res = await fetch(_apiBase + url, opts)
if (!res.ok) {
const detail = await res.text().catch(() => '')
return { data: null, error: { kind: 'http', status: res.status, detail } }
@@ -31,7 +34,7 @@ export function useApiSSE(
onComplete?: () => void,
onError?: (e: Event) => void,
): () => void {
const es = new EventSource(url)
const es = new EventSource(_apiBase + url)
es.onmessage = (e) => {
try {
const data = JSON.parse(e.data) as Record<string, unknown>
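The base-path join this diff introduces can be sketched as a pure function (a minimal sketch; `joinApiBase` is an illustrative name, not part of the codebase):

```typescript
// Vite's BASE_URL is '/' in dev but e.g. '/peregrine/' when the SPA is served
// under a sub-path, so the trailing slash is stripped before prefixing
// '/api/...' paths — otherwise the join would produce '//api/...'.
function joinApiBase(baseUrl: string, url: string): string {
  const apiBase = baseUrl.replace(/\/$/, '')  // '/' collapses to ''
  return apiBase + url
}
```

With a root base the prefix collapses to the empty string, so dev behaviour is unchanged.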

View file

@@ -1,4 +1,5 @@
import { onMounted, onUnmounted } from 'vue'
import { useTheme } from './useTheme'
const KONAMI = ['ArrowUp','ArrowUp','ArrowDown','ArrowDown','ArrowLeft','ArrowRight','ArrowLeft','ArrowRight','b','a']
const KONAMI_AB = ['ArrowUp','ArrowUp','ArrowDown','ArrowDown','ArrowLeft','ArrowRight','ArrowLeft','ArrowRight','a','b']
@@ -31,8 +32,10 @@ export function useHackerMode() {
function toggle() {
const root = document.documentElement
if (root.dataset.theme === 'hacker') {
delete root.dataset.theme
localStorage.removeItem('cf-hacker-mode')
// Let useTheme restore the user's chosen theme rather than just deleting data-theme
const { restoreTheme } = useTheme()
restoreTheme()
} else {
root.dataset.theme = 'hacker'
localStorage.setItem('cf-hacker-mode', 'true')

View file

@@ -0,0 +1,82 @@
/**
* useTheme: manual theme picker for Peregrine.
*
* Themes: 'auto' | 'light' | 'dark' | 'solarized-dark' | 'solarized-light' | 'colorblind'
* Persisted in localStorage under 'cf-theme'.
* Applied via document.documentElement.dataset.theme.
* 'auto' removes the attribute so the @media prefers-color-scheme rule takes effect.
*
* Hacker mode sits on top of this system; toggling it off calls restoreTheme()
* so the user's chosen theme is reinstated rather than dropping back to auto.
*/
import { ref, readonly } from 'vue'
import { useApiFetch } from './useApi'
export type Theme = 'auto' | 'light' | 'dark' | 'solarized-dark' | 'solarized-light' | 'colorblind'
const STORAGE_KEY = 'cf-theme'
const HACKER_KEY = 'cf-hacker-mode'
export const THEME_OPTIONS: { value: Theme; label: string; icon: string }[] = [
{ value: 'auto', label: 'Auto', icon: '⬡' },
{ value: 'light', label: 'Light', icon: '☀' },
{ value: 'dark', label: 'Dark', icon: '🌙' },
{ value: 'solarized-light', label: 'Solarized Light', icon: '🌤' },
{ value: 'solarized-dark', label: 'Solarized Dark', icon: '🌃' },
{ value: 'colorblind', label: 'Colorblind Safe', icon: '♿' },
]
// Module-level singleton so all consumers share the same reactive state.
const _current = ref<Theme>(_load())
function _load(): Theme {
return (localStorage.getItem(STORAGE_KEY) as Theme | null) ?? 'auto'
}
function _apply(theme: Theme) {
const root = document.documentElement
if (theme === 'auto') {
delete root.dataset.theme
} else {
root.dataset.theme = theme
}
}
export function useTheme() {
function setTheme(theme: Theme) {
_current.value = theme
localStorage.setItem(STORAGE_KEY, theme)
_apply(theme)
// Best-effort persist to server; ignore failures (works offline / local LLM)
useApiFetch('/api/settings/theme', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ theme }),
}).catch(() => {})
}
/** Restore user's chosen theme — called when hacker mode or other overlays exit. */
function restoreTheme() {
// Hacker mode clears itself; we only restore if it's actually off.
if (localStorage.getItem(HACKER_KEY) === 'true') return
_apply(_current.value)
}
/** Call once at app boot to apply persisted theme before first render. */
function initTheme() {
// Hacker mode takes priority on restore.
if (localStorage.getItem(HACKER_KEY) === 'true') {
document.documentElement.dataset.theme = 'hacker'
} else {
_apply(_current.value)
}
}
return {
currentTheme: readonly(_current),
setTheme,
restoreTheme,
initTheme,
}
}
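The boot-time resolution order `initTheme` implements can be reduced to a pure function for illustration (`resolveBootTheme` is a hypothetical name; `null` stands for "data-theme attribute removed"):

```typescript
// Hacker mode wins; otherwise the stored theme applies; 'auto' means the
// attribute is removed so the CSS prefers-color-scheme rule takes over.
type Theme = 'auto' | 'light' | 'dark' | 'solarized-dark' | 'solarized-light' | 'colorblind'

function resolveBootTheme(hackerMode: boolean, stored: Theme): string | null {
  if (hackerMode) return 'hacker'
  return stored === 'auto' ? null : stored
}
```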

View file

@@ -2,6 +2,12 @@ import { ref } from 'vue'
import { defineStore } from 'pinia'
import { useApiFetch } from '../../composables/useApi'
export interface TrainingPair {
index: number
instruction: string
source_file: string
}
export const useFineTuneStore = defineStore('settings/fineTune', () => {
const step = ref(1)
const inFlightJob = ref(false)
@@ -10,6 +16,8 @@ export const useFineTuneStore = defineStore('settings/fineTune', () => {
const quotaRemaining = ref<number | null>(null)
const uploading = ref(false)
const loading = ref(false)
const pairs = ref<TrainingPair[]>([])
const pairsLoading = ref(false)
let _pollTimer: ReturnType<typeof setInterval> | null = null
function resetStep() { step.value = 1 }
@@ -37,6 +45,26 @@ export const useFineTuneStore = defineStore('settings/fineTune', () => {
if (!error && data) { inFlightJob.value = true; jobStatus.value = 'queued' }
}
async function loadPairs() {
pairsLoading.value = true
const { data } = await useApiFetch<{ pairs: TrainingPair[]; total: number }>('/api/settings/fine-tune/pairs')
pairsLoading.value = false
if (data) {
pairs.value = data.pairs
pairsCount.value = data.total
}
}
async function deletePair(index: number) {
const { data } = await useApiFetch<{ ok: boolean; remaining: number }>(
`/api/settings/fine-tune/pairs/${index}`, { method: 'DELETE' }
)
if (data?.ok) {
pairs.value = pairs.value.filter(p => p.index !== index).map((p, i) => ({ ...p, index: i }))
pairsCount.value = data.remaining
}
}
return {
step,
inFlightJob,
@@ -45,10 +73,14 @@ export const useFineTuneStore = defineStore('settings/fineTune', () => {
quotaRemaining,
uploading,
loading,
pairs,
pairsLoading,
resetStep,
loadStatus,
startPolling,
stopPolling,
submitJob,
loadPairs,
deletePair,
}
})
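The client-side re-index `deletePair` performs after the server rewrites the JSONL can be sketched as a pure function (a sketch under the diff's `TrainingPair` shape; `removePair` is an illustrative name):

```typescript
// Surviving pairs keep their order but receive fresh 0-based indices, so they
// stay aligned with line numbers in the rewritten JSONL file.
interface TrainingPair { index: number; instruction: string; source_file: string }

function removePair(pairs: TrainingPair[], index: number): TrainingPair[] {
  return pairs
    .filter(p => p.index !== index)
    .map((p, i) => ({ ...p, index: i }))
}
```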

View file

@@ -18,6 +18,7 @@ export const useSearchStore = defineStore('settings/search', () => {
const titleSuggestions = ref<string[]>([])
const locationSuggestions = ref<string[]>([])
const excludeSuggestions = ref<string[]>([])
const loading = ref(false)
const saving = ref(false)
@@ -99,10 +100,24 @@
arr.value = arr.value.filter(v => v !== value)
}
function acceptSuggestion(type: 'title' | 'location', value: string) {
async function suggestExcludeKeywords() {
const { data } = await useApiFetch<{ suggestions: string[] }>('/api/settings/search/suggest', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ type: 'exclude_keywords', current: exclude_keywords.value }),
})
if (data?.suggestions) {
excludeSuggestions.value = data.suggestions.filter(s => !exclude_keywords.value.includes(s))
}
}
function acceptSuggestion(type: 'title' | 'location' | 'exclude', value: string) {
if (type === 'title') {
if (!job_titles.value.includes(value)) job_titles.value = [...job_titles.value, value]
titleSuggestions.value = titleSuggestions.value.filter(s => s !== value)
} else if (type === 'exclude') {
if (!exclude_keywords.value.includes(value)) exclude_keywords.value = [...exclude_keywords.value, value]
excludeSuggestions.value = excludeSuggestions.value.filter(s => s !== value)
} else {
if (!locations.value.includes(value)) locations.value = [...locations.value, value]
locationSuggestions.value = locationSuggestions.value.filter(s => s !== value)
@@ -118,8 +133,9 @@
return {
remote_preference, job_titles, locations, exclude_keywords, job_boards,
custom_board_urls, blocklist_companies, blocklist_industries, blocklist_locations,
titleSuggestions, locationSuggestions,
titleSuggestions, locationSuggestions, excludeSuggestions,
loading, saving, saveError, loadError,
load, save, suggestTitles, suggestLocations, addTag, removeTag, acceptSuggestion, toggleBoard,
load, save, suggestTitles, suggestLocations, suggestExcludeKeywords,
addTag, removeTag, acceptSuggestion, toggleBoard,
}
})

View file

@@ -53,6 +53,13 @@
:loading="taskRunning === 'score'"
@click="scoreUnscored"
/>
<WorkflowButton
emoji="🔍"
label="Fill Missing Descriptions"
description="Re-fetch truncated job descriptions"
:loading="taskRunning === 'enrich'"
@click="runEnrich"
/>
</div>
<button
@@ -80,7 +87,6 @@
? `Last enriched ${formatRelative(store.status.enrichment_last_run)}`
: 'Auto-enrichment active' }}
</span>
<button class="btn-ghost btn-ghost--sm" @click="runEnrich">Run Now</button>
</div>
</section>
@@ -162,24 +168,194 @@
</div>
</section>
<!-- Advanced -->
<section class="home__section">
<details class="advanced">
<summary class="advanced__summary">Advanced</summary>
<div class="advanced__body">
<p class="advanced__warning">These actions are destructive and cannot be undone.</p>
<div class="home__actions home__actions--danger">
<button class="action-btn action-btn--danger" @click="confirmPurge">
🗑 Purge Pending + Rejected
</button>
<button class="action-btn action-btn--danger" @click="killTasks">
🛑 Kill Stuck Tasks
<!-- Danger Zone -->
<section class="home__section">
<details class="danger-zone">
<summary class="danger-zone__summary">Danger Zone</summary>
<div class="danger-zone__body">
<!-- Queue reset -->
<div class="dz-block">
<p class="dz-block__title">Queue reset</p>
<p class="dz-block__desc">
Archive clears your review queue while keeping job URLs for dedup; same listings
won't resurface on the next discovery run. Use hard purge only for a full clean slate
including dedup history.
</p>
<fieldset class="dz-scope" aria-label="Clear scope">
<legend class="dz-scope__legend">Clear scope</legend>
<label class="dz-scope__option">
<input type="radio" v-model="dangerScope" value="pending" />
Pending only
</label>
<label class="dz-scope__option">
<input type="radio" v-model="dangerScope" value="pending_approved" />
Pending + approved (stale search)
</label>
</fieldset>
<div class="dz-actions">
<button
class="action-btn action-btn--primary"
:disabled="!!confirmAction"
@click="beginConfirm('archive')"
>
📦 Archive &amp; reset
</button>
<button
class="action-btn action-btn--secondary"
:disabled="!!confirmAction"
@click="beginConfirm('purge')"
>
🗑 Hard purge (delete)
</button>
</div>
<!-- Inline confirm -->
<div v-if="confirmAction" class="dz-confirm" role="alertdialog" aria-live="assertive">
<p v-if="confirmAction.type === 'archive'" class="dz-confirm__msg dz-confirm__msg--info">
Archive <strong>{{ confirmAction.statuses.join(' + ') }}</strong> jobs?
URLs are kept for dedup; nothing is permanently deleted.
</p>
<p v-else class="dz-confirm__msg dz-confirm__msg--warn">
Permanently delete <strong>{{ confirmAction.statuses.join(' + ') }}</strong> jobs?
This removes URLs from dedup history too. Cannot be undone.
</p>
<div class="dz-confirm__actions">
<button class="action-btn action-btn--primary" @click="executeConfirm">
{{ confirmAction.type === 'archive' ? 'Yes, archive' : 'Yes, delete' }}
</button>
<button class="action-btn action-btn--secondary" @click="confirmAction = null">
Cancel
</button>
</div>
</div>
</div>
<hr class="dz-divider" />
<!-- Background tasks -->
<div class="dz-block">
<p class="dz-block__title">Background tasks: {{ activeTasks.length }} active</p>
<template v-if="activeTasks.length > 0">
<div
v-for="task in activeTasks"
:key="task.id"
class="dz-task"
>
<span class="dz-task__icon">{{ taskIcon(task.task_type) }}</span>
<span class="dz-task__type">{{ task.task_type.replace(/_/g, ' ') }}</span>
<span class="dz-task__label">
{{ task.title ? `${task.title}${task.company ? ' @ ' + task.company : ''}` : `job #${task.job_id}` }}
</span>
<span class="dz-task__status">{{ task.status }}</span>
<button
class="btn-ghost btn-ghost--sm dz-task__cancel"
@click="cancelTaskById(task.id)"
:aria-label="`Cancel ${task.task_type} task`"
>
</button>
</div>
</template>
<button
class="action-btn action-btn--secondary dz-kill"
:disabled="activeTasks.length === 0"
@click="killAll"
>
Kill all stuck
</button>
</div>
<hr class="dz-divider" />
<!-- More options -->
<details class="dz-more">
<summary class="dz-more__summary">More options</summary>
<div class="dz-more__body">
<!-- Email purge -->
<div class="dz-more__item">
<p class="dz-block__title">Purge email data</p>
<p class="dz-block__desc">Clears all email thread logs and email-sourced pending jobs.</p>
<template v-if="moreConfirm === 'email'">
<p class="dz-confirm__msg dz-confirm__msg--warn">
Deletes all email contacts and email-sourced jobs. Cannot be undone.
</p>
<div class="dz-confirm__actions">
<button class="action-btn action-btn--primary" @click="executePurgeTarget('email')">Yes, purge emails</button>
<button class="action-btn action-btn--secondary" @click="moreConfirm = null">Cancel</button>
</div>
</template>
<button v-else class="action-btn action-btn--secondary" @click="moreConfirm = 'email'">
📧 Purge Email Data
</button>
</div>
<!-- Non-remote purge -->
<div class="dz-more__item">
<p class="dz-block__title">Purge non-remote</p>
<p class="dz-block__desc">Removes pending/approved/rejected on-site listings from the DB.</p>
<template v-if="moreConfirm === 'non_remote'">
<p class="dz-confirm__msg dz-confirm__msg--warn">
Deletes all non-remote jobs not yet applied to. Cannot be undone.
</p>
<div class="dz-confirm__actions">
<button class="action-btn action-btn--primary" @click="executePurgeTarget('non_remote')">Yes, purge on-site</button>
<button class="action-btn action-btn--secondary" @click="moreConfirm = null">Cancel</button>
</div>
</template>
<button v-else class="action-btn action-btn--secondary" @click="moreConfirm = 'non_remote'">
🏢 Purge On-site Jobs
</button>
</div>
<!-- Wipe + re-scrape -->
<div class="dz-more__item">
<p class="dz-block__title">Wipe all + re-scrape</p>
<p class="dz-block__desc">Deletes all non-applied jobs, then immediately runs a fresh discovery.</p>
<template v-if="moreConfirm === 'rescrape'">
<p class="dz-confirm__msg dz-confirm__msg--warn">
Wipes ALL pending, approved, and rejected jobs, then re-scrapes.
Applied and synced records are kept.
</p>
<div class="dz-confirm__actions">
<button class="action-btn action-btn--primary" @click="executePurgeTarget('rescrape')">Yes, wipe + scrape</button>
<button class="action-btn action-btn--secondary" @click="moreConfirm = null">Cancel</button>
</div>
</template>
<button v-else class="action-btn action-btn--secondary" @click="moreConfirm = 'rescrape'">
🔄 Wipe + Re-scrape
</button>
</div>
</div>
</details>
</div>
</details>
</section>
<!-- Setup banners -->
<section v-if="banners.length > 0" class="home__section" aria-labelledby="setup-heading">
<h2 id="setup-heading" class="home__section-title">Finish setting up Peregrine</h2>
<div class="banners">
<div v-for="banner in banners" :key="banner.key" class="banner">
<span class="banner__icon" aria-hidden="true">💡</span>
<span class="banner__text">{{ banner.text }}</span>
<RouterLink :to="banner.link" class="banner__link">Go to settings </RouterLink>
<button
class="btn-ghost btn-ghost--sm banner__dismiss"
@click="dismissBanner(banner.key)"
:aria-label="`Dismiss: ${banner.text}`"
>
</button>
</div>
</div>
</section>
<!-- Stoop speed toast easter egg 9.2 -->
<Transition name="toast">
<div v-if="stoopToast" class="stoop-toast" role="status" aria-live="polite">
@@ -190,7 +366,7 @@
</template>
<script setup lang="ts">
-import { ref, computed, onMounted } from 'vue'
+import { ref, computed, onMounted, onUnmounted } from 'vue'
import { RouterLink } from 'vue-router'
import { useJobsStore } from '../stores/jobs'
import { useApiFetch } from '../composables/useApi'
@@ -231,6 +407,8 @@ function formatRelative(isoStr: string) {
return hrs === 1 ? '1 hour ago' : `${hrs} hours ago`
}
// Task execution
const taskRunning = ref<string | null>(null)
const stoopToast = ref(false)
@@ -239,13 +417,16 @@ async function runTask(key: string, endpoint: string) {
await useApiFetch(endpoint, { method: 'POST' })
taskRunning.value = null
store.refresh()
fetchActiveTasks()
}
const runDiscovery = () => runTask('discovery', '/api/tasks/discovery')
const syncEmails = () => runTask('email', '/api/tasks/email-sync')
const scoreUnscored = () => runTask('score', '/api/tasks/score')
const syncIntegration = () => runTask('sync', '/api/tasks/sync')
-const runEnrich = () => useApiFetch('/api/tasks/enrich', { method: 'POST' })
+const runEnrich = () => runTask('enrich', '/api/tasks/enrich')
// Add jobs
const addTab = ref<'url' | 'csv'>('url')
const urlInput = ref('')
@@ -269,6 +450,8 @@ function handleCsvUpload(e: Event) {
useApiFetch('/api/jobs/upload-csv', { method: 'POST', body: form })
}
// Backlog archive
async function archiveByStatus(statuses: string[]) {
await useApiFetch('/api/jobs/archive', {
method: 'POST',
@@ -278,26 +461,100 @@ async function archiveByStatus(statuses: string[]) {
store.refresh()
}
-function confirmPurge() {
-// TODO: replace with ConfirmModal component
-if (confirm('Permanently delete all pending and rejected jobs? This cannot be undone.')) {
-useApiFetch('/api/jobs/purge', {
-method: 'POST',
-headers: { 'Content-Type': 'application/json' },
-body: JSON.stringify({ target: 'pending_rejected' }),
-})
-store.refresh()
-}
+// Danger Zone
+interface TaskRow { id: number; task_type: string; status: string; title?: string; company?: string; job_id: number }
+interface Banner { key: string; text: string; link: string }
+interface ConfirmAction { type: 'archive' | 'purge'; statuses: string[] }
+const activeTasks = ref<TaskRow[]>([])
+const dangerScope = ref<'pending' | 'pending_approved'>('pending')
+const confirmAction = ref<ConfirmAction | null>(null)
+const moreConfirm = ref<string | null>(null)
+const banners = ref<Banner[]>([])
+let taskPollInterval: ReturnType<typeof setInterval> | null = null
+async function fetchActiveTasks() {
+const { data } = await useApiFetch<TaskRow[]>('/api/tasks')
+activeTasks.value = data ?? []
}
-async function killTasks() {
+async function fetchBanners() {
const { data } = await useApiFetch<Banner[]>('/api/config/setup-banners')
banners.value = data ?? []
}
function scopeStatuses(): string[] {
return dangerScope.value === 'pending' ? ['pending'] : ['pending', 'approved']
}
function beginConfirm(type: 'archive' | 'purge') {
moreConfirm.value = null
confirmAction.value = { type, statuses: scopeStatuses() }
}
async function executeConfirm() {
const action = confirmAction.value
confirmAction.value = null
if (!action) return
const endpoint = action.type === 'archive' ? '/api/jobs/archive' : '/api/jobs/purge'
const key = action.type === 'archive' ? 'statuses' : 'statuses'
await useApiFetch(endpoint, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ [key]: action.statuses }),
})
store.refresh()
fetchActiveTasks()
}
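The confirm flow above always posts a `statuses` array to either the archive or the purge endpoint. That selection logic can be sketched as a standalone helper; the function name and return shape are illustrative assumptions, not part of the component, which inlines this around its `useApiFetch` call:

```typescript
// Illustrative helper mirroring executeConfirm's endpoint/payload selection.
// Both endpoints accept the same { statuses: [...] } JSON body, which is
// what lets the component reuse one fetch call for archive and purge.
function dangerZoneRequest(
  type: 'archive' | 'purge',
  scope: 'pending' | 'pending_approved',
): { endpoint: string; body: string } {
  const statuses = scope === 'pending' ? ['pending'] : ['pending', 'approved']
  const endpoint = type === 'archive' ? '/api/jobs/archive' : '/api/jobs/purge'
  return { endpoint, body: JSON.stringify({ statuses }) }
}
```

Keeping the payload key identical for both endpoints is the design choice that makes the redundant `key` ternary in `executeConfirm` collapse to a constant.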
async function cancelTaskById(id: number) {
await useApiFetch(`/api/tasks/${id}`, { method: 'DELETE' })
fetchActiveTasks()
}
async function killAll() {
await useApiFetch('/api/tasks/kill', { method: 'POST' })
fetchActiveTasks()
}
async function executePurgeTarget(target: string) {
moreConfirm.value = null
await useApiFetch('/api/jobs/purge', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ target }),
})
store.refresh()
fetchActiveTasks()
}
async function dismissBanner(key: string) {
await useApiFetch(`/api/config/setup-banners/${key}/dismiss`, { method: 'POST' })
banners.value = banners.value.filter(b => b.key !== key)
}
function taskIcon(taskType: string): string {
const icons: Record<string, string> = {
cover_letter: '✉️', company_research: '🔍', discovery: '🌐',
enrich_descriptions: '📝', email_sync: '📧', score: '📊',
scrape_url: '🔗',
}
return icons[taskType] ?? '⚙️'
}
onMounted(async () => {
store.refresh()
const { data } = await useApiFetch<{ name: string }>('/api/config/user')
if (data?.name) userName.value = data.name
fetchActiveTasks()
fetchBanners()
taskPollInterval = setInterval(fetchActiveTasks, 5000)
})
onUnmounted(() => {
if (taskPollInterval) clearInterval(taskPollInterval)
})
</script>
@@ -392,12 +649,11 @@ onMounted(async () => {
.home__actions {
display: grid;
-grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
+grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
gap: var(--space-3);
}
.home__actions--secondary { grid-template-columns: repeat(auto-fit, minmax(240px, 1fr)); }
-.home__actions--danger { grid-template-columns: repeat(auto-fit, minmax(220px, 1fr)); }
.sync-banner {
display: flex;
@@ -451,9 +707,7 @@ onMounted(async () => {
.action-btn--secondary { background: var(--color-surface-alt); color: var(--color-text); border: 1px solid var(--color-border); }
.action-btn--secondary:hover { background: var(--color-border-light); }
.action-btn--secondary:disabled { opacity: 0.4; cursor: not-allowed; }
-.action-btn--danger { background: transparent; color: var(--color-error); border: 1px solid var(--color-error); }
-.action-btn--danger:hover { background: rgba(192, 57, 43, 0.08); }
.enrichment-row {
display: flex;
@@ -528,13 +782,15 @@ onMounted(async () => {
.add-jobs__textarea:focus { outline: 2px solid var(--app-primary); outline-offset: 1px; }
-.advanced {
+/* ── Danger Zone ──────────────────────────────────────── */
+.danger-zone {
background: var(--color-surface-raised);
border: 1px solid var(--color-border-light);
border-radius: var(--radius-md);
}
-.advanced__summary {
+.danger-zone__summary {
padding: var(--space-3) var(--space-4);
cursor: pointer;
font-size: var(--text-sm);
@@ -544,21 +800,172 @@ onMounted(async () => {
user-select: none;
}
-.advanced__summary::-webkit-details-marker { display: none; }
-.advanced__summary::before { content: '▶ '; font-size: 0.7em; }
-details[open] > .advanced__summary::before { content: '▼ '; }
-.advanced__body { padding: 0 var(--space-4) var(--space-4); display: flex; flex-direction: column; gap: var(--space-4); }
+.danger-zone__summary::-webkit-details-marker { display: none; }
+.danger-zone__summary::before { content: '▶ '; font-size: 0.7em; }
+details[open] > .danger-zone__summary::before { content: '▼ '; }
+.danger-zone__body {
padding: 0 var(--space-4) var(--space-4);
display: flex;
flex-direction: column;
gap: var(--space-5);
}
-.advanced__warning {
+.dz-block { display: flex; flex-direction: column; gap: var(--space-3); }
+.dz-block__title {
font-size: var(--text-sm);
-color: var(--color-warning);
-background: rgba(212, 137, 26, 0.08);
+font-weight: 600;
+color: var(--color-text);
}
.dz-block__desc {
font-size: var(--text-sm);
color: var(--color-text-muted);
}
.dz-scope {
border: none;
padding: 0;
margin: 0;
display: flex;
gap: var(--space-5);
flex-wrap: wrap;
}
.dz-scope__legend {
font-size: var(--text-xs);
color: var(--color-text-muted);
margin-bottom: var(--space-2);
float: left;
width: 100%;
}
.dz-scope__option {
display: flex;
align-items: center;
gap: var(--space-2);
font-size: var(--text-sm);
cursor: pointer;
}
.dz-actions {
display: flex;
gap: var(--space-3);
flex-wrap: wrap;
}
.dz-confirm {
padding: var(--space-3) var(--space-4);
border-radius: var(--radius-md);
-border-left: 3px solid var(--color-warning);
+display: flex;
flex-direction: column;
gap: var(--space-3);
}
.dz-confirm__msg {
font-size: var(--text-sm);
padding: var(--space-3) var(--space-4);
border-radius: var(--radius-md);
border-left: 3px solid;
}
.dz-confirm__msg--info {
background: rgba(52, 152, 219, 0.1);
border-color: var(--app-primary);
color: var(--color-text);
}
.dz-confirm__msg--warn {
background: rgba(192, 57, 43, 0.08);
border-color: var(--color-error);
color: var(--color-text);
}
.dz-confirm__actions {
display: flex;
gap: var(--space-3);
}
.dz-divider {
border: none;
border-top: 1px solid var(--color-border-light);
margin: 0;
}
.dz-task {
display: flex;
align-items: center;
gap: var(--space-2);
padding: var(--space-2) var(--space-3);
background: var(--color-surface-alt);
border-radius: var(--radius-md);
font-size: var(--text-xs);
}
.dz-task__icon { flex-shrink: 0; }
.dz-task__type { font-family: var(--font-mono); color: var(--color-text-muted); min-width: 120px; }
.dz-task__label { flex: 1; color: var(--color-text); overflow: hidden; text-overflow: ellipsis; white-space: nowrap; }
.dz-task__status { color: var(--color-text-muted); font-style: italic; }
.dz-task__cancel { margin-left: var(--space-2); }
.dz-kill { align-self: flex-start; }
.dz-more {
background: transparent;
border: none;
}
.dz-more__summary {
cursor: pointer;
font-size: var(--text-sm);
font-weight: 600;
color: var(--color-text-muted);
list-style: none;
user-select: none;
padding: var(--space-1) 0;
}
.dz-more__summary::-webkit-details-marker { display: none; }
.dz-more__summary::before { content: '▶ '; font-size: 0.7em; }
details[open] > .dz-more__summary::before { content: '▼ '; }
.dz-more__body {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(220px, 1fr));
gap: var(--space-5);
margin-top: var(--space-4);
}
.dz-more__item { display: flex; flex-direction: column; gap: var(--space-2); }
/* ── Setup banners ────────────────────────────────────── */
.banners {
display: flex;
flex-direction: column;
gap: var(--space-2);
}
.banner {
display: flex;
align-items: center;
gap: var(--space-3);
padding: var(--space-3) var(--space-4);
background: var(--color-surface-raised);
border: 1px solid var(--color-border-light);
border-radius: var(--radius-md);
font-size: var(--text-sm);
}
.banner__icon { flex-shrink: 0; }
.banner__text { flex: 1; color: var(--color-text); }
.banner__link { color: var(--app-primary); text-decoration: none; white-space: nowrap; font-weight: 500; }
.banner__link:hover { text-decoration: underline; }
.banner__dismiss { margin-left: var(--space-1); }
/* ── Toast ────────────────────────────────────────────── */
.stoop-toast {
position: fixed;
bottom: var(--space-6);
@@ -588,6 +995,7 @@ details[open] > .advanced__summary::before { content: '▼ '; }
.home { padding: var(--space-4); gap: var(--space-6); }
.home__greeting { font-size: var(--text-2xl); }
.home__metrics { grid-template-columns: repeat(3, 1fr); }
.dz-more__body { grid-template-columns: 1fr; }
}
@media (max-width: 480px) {


@@ -7,6 +7,7 @@ import type { StageSignal } from '../stores/interviews'
import { useApiFetch } from '../composables/useApi'
import InterviewCard from '../components/InterviewCard.vue'
import MoveToSheet from '../components/MoveToSheet.vue'
import CompanyResearchModal from '../components/CompanyResearchModal.vue'
const router = useRouter()
const store = useInterviewsStore()
@@ -22,10 +23,29 @@ function openMove(jobId: number, preSelectedStage?: PipelineStage) {
async function onMove(stage: PipelineStage, opts: { interview_date?: string; rejection_stage?: string }) {
if (!moveTarget.value) return
const movedJob = moveTarget.value
const wasHired = stage === 'hired'
-await store.move(moveTarget.value.id, stage, opts)
+await store.move(movedJob.id, stage, opts)
moveTarget.value = null
if (wasHired) triggerConfetti()
// Auto-open research modal when moving to phone_screen (mirrors Streamlit behaviour)
if (stage === 'phone_screen') openResearch(movedJob.id, `${movedJob.title} at ${movedJob.company}`)
}
// Company research modal
const researchJobId = ref<number | null>(null)
const researchJobTitle = ref('')
const researchAutoGen = ref(false)
function openResearch(jobId: number, jobTitle: string, autoGenerate = true) {
researchJobId.value = jobId
researchJobTitle.value = jobTitle
researchAutoGen.value = autoGenerate
}
function onInterviewCardResearch(jobId: number) {
const job = store.jobs.find(j => j.id === jobId)
if (job) openResearch(jobId, `${job.title} at ${job.company}`, false)
}
// Collapsible Applied section
@@ -466,7 +486,8 @@ function daysSince(dateStr: string | null) {
</div>
<InterviewCard v-for="(job, i) in store.phoneScreen" :key="job.id" :job="job"
:focused="focusedCol === 0 && focusedCard === i"
-@move="openMove" @prep="router.push(`/prep/${$event}`)" @survey="router.push('/survey/' + $event)" />
+@move="openMove" @prep="router.push(`/prep/${$event}`)" @survey="router.push('/survey/' + $event)"
+@research="onInterviewCardResearch" />
</div>
<div class="kanban-col" :class="{ 'kanban-col--focused': focusedCol === 1 }" aria-label="Interviewing">
@@ -479,7 +500,8 @@ function daysSince(dateStr: string | null) {
</div>
<InterviewCard v-for="(job, i) in store.interviewing" :key="job.id" :job="job"
:focused="focusedCol === 1 && focusedCard === i"
-@move="openMove" @prep="router.push(`/prep/${$event}`)" @survey="router.push('/survey/' + $event)" />
+@move="openMove" @prep="router.push(`/prep/${$event}`)" @survey="router.push('/survey/' + $event)"
+@research="onInterviewCardResearch" />
</div>
<div class="kanban-col" :class="{ 'kanban-col--focused': focusedCol === 2 }" aria-label="Offer and Hired">
@@ -492,7 +514,8 @@ function daysSince(dateStr: string | null) {
</div>
<InterviewCard v-for="(job, i) in store.offerHired" :key="job.id" :job="job"
:focused="focusedCol === 2 && focusedCard === i"
-@move="openMove" @prep="router.push(`/prep/${$event}`)" @survey="router.push('/survey/' + $event)" />
+@move="openMove" @prep="router.push(`/prep/${$event}`)" @survey="router.push('/survey/' + $event)"
+@research="onInterviewCardResearch" />
</div>
</section>
@@ -525,6 +548,14 @@ function daysSince(dateStr: string | null) {
@move="onMove"
@close="moveTarget = null; movePreSelected = undefined"
/>
<CompanyResearchModal
v-if="researchJobId !== null"
:jobId="researchJobId"
:jobTitle="researchJobTitle"
:autoGenerate="researchAutoGen"
@close="researchJobId = null"
/>
</div>
</template>


@@ -98,25 +98,50 @@
<span class="spinner" aria-hidden="true" />
<span>Loading</span>
</div>
-<div v-else-if="store.listJobs.length === 0" class="review__empty" role="status">
-<p class="empty-desc">No {{ activeTab }} jobs.</p>
-</div>
-<ul v-else class="job-list" role="list">
-<li v-for="job in store.listJobs" :key="job.id" class="job-list__item">
-<div class="job-list__info">
-<span class="job-list__title">{{ job.title }}</span>
-<span class="job-list__company">{{ job.company }}</span>
-</div>
-<div class="job-list__meta">
-<span v-if="job.match_score !== null" class="score-pill" :class="scorePillClass(job.match_score)">
-{{ job.match_score }}%
-</span>
-<a :href="job.url" target="_blank" rel="noopener noreferrer" class="job-list__link">
-View
-</a>
-</div>
-</li>
-</ul>
+<template v-else>
+<!-- Sort + filter bar -->
+<div class="list-controls" aria-label="Sort and filter">
+<select v-model="sortBy" class="list-sort" aria-label="Sort by">
+<option value="match_score">Best match</option>
+<option value="date_found">Newest first</option>
+<option value="company">Company A–Z</option>
+</select>
+<label class="list-filter-remote">
+<input type="checkbox" v-model="filterRemote" />
+Remote only
+</label>
+<span class="list-count">{{ sortedFilteredJobs.length }} job{{ sortedFilteredJobs.length !== 1 ? 's' : '' }}</span>
+</div>
+<div v-if="sortedFilteredJobs.length === 0" class="review__empty" role="status">
+<p class="empty-desc">No {{ activeTab }} jobs{{ filterRemote ? ' (remote only)' : '' }}.</p>
+</div>
+<ul v-else class="job-list" role="list">
<li v-for="job in sortedFilteredJobs" :key="job.id" class="job-list__item">
<div class="job-list__info">
<span class="job-list__title">{{ job.title }}</span>
<span class="job-list__company">
{{ job.company }}
<span v-if="job.is_remote" class="remote-tag">Remote</span>
</span>
</div>
<div class="job-list__meta">
<span v-if="job.match_score !== null" class="score-pill" :class="scorePillClass(job.match_score)">
{{ job.match_score }}%
</span>
<button
v-if="activeTab === 'approved'"
class="job-list__action"
@click="router.push(`/apply/${job.id}`)"
:aria-label="`Draft cover letter for ${job.title}`"
> Draft</button>
<a :href="job.url" target="_blank" rel="noopener noreferrer" class="job-list__link">
View
</a>
</div>
</li>
</ul>
</template>
</div>
<!-- Help overlay -->
@@ -186,12 +211,13 @@
<script setup lang="ts">
import { ref, computed, watch, onMounted, onUnmounted } from 'vue'
-import { useRoute } from 'vue-router'
+import { useRoute, useRouter } from 'vue-router'
import { useReviewStore } from '../stores/review'
import JobCardStack from '../components/JobCardStack.vue'
const store = useReviewStore()
const route = useRoute()
const router = useRouter()
const stackRef = ref<InstanceType<typeof JobCardStack> | null>(null)
// Tabs
@@ -315,6 +341,30 @@ function onKeyDown(e: KeyboardEvent) {
}
}
// List view: sort + filter
type SortKey = 'match_score' | 'date_found' | 'company'
const sortBy = ref<SortKey>('match_score')
const filterRemote = ref(false)
const sortedFilteredJobs = computed(() => {
let jobs = [...store.listJobs]
if (filterRemote.value) jobs = jobs.filter(j => j.is_remote)
jobs.sort((a, b) => {
if (sortBy.value === 'match_score') return (b.match_score ?? -1) - (a.match_score ?? -1)
if (sortBy.value === 'date_found') return new Date(b.date_found).getTime() - new Date(a.date_found).getTime()
if (sortBy.value === 'company') return (a.company ?? '').localeCompare(b.company ?? '')
return 0
})
return jobs
})
// Reset filters when switching tabs
watch(activeTab, () => {
filterRemote.value = false
sortBy.value = 'match_score'
})
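The `sortedFilteredJobs` computed above can be exercised as a plain function. This is a sketch under assumptions: the `Job` shape is reduced to the fields the computed touches, and the helper name is hypothetical:

```typescript
interface Job {
  id: number
  company: string | null
  match_score: number | null
  date_found: string // ISO date string
  is_remote: boolean
}

type SortKey = 'match_score' | 'date_found' | 'company'

// Filter first, then sort. Unscored jobs (null match_score) sink to the
// bottom of the "Best match" order via the -1 fallback, matching the computed.
function sortAndFilter(jobs: Job[], sortBy: SortKey, remoteOnly: boolean): Job[] {
  const out = remoteOnly ? jobs.filter(j => j.is_remote) : [...jobs]
  out.sort((a, b) => {
    if (sortBy === 'match_score') return (b.match_score ?? -1) - (a.match_score ?? -1)
    if (sortBy === 'date_found') return new Date(b.date_found).getTime() - new Date(a.date_found).getTime()
    return (a.company ?? '').localeCompare(b.company ?? '')
  })
  return out
}
```

Copying into a fresh array before sorting is what keeps the store's `listJobs` untouched; `Array.prototype.sort` mutates in place.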
// List view score pill
function scorePillClass(score: number) {
@@ -659,6 +709,69 @@ kbd {
font-weight: 600;
}
.job-list__action {
font-size: var(--text-xs);
font-weight: 600;
color: var(--app-primary);
background: color-mix(in srgb, var(--app-primary) 10%, transparent);
border: 1px solid color-mix(in srgb, var(--app-primary) 25%, transparent);
border-radius: var(--radius-sm);
padding: 2px 8px;
cursor: pointer;
transition: background 150ms;
white-space: nowrap;
}
.job-list__action:hover {
background: color-mix(in srgb, var(--app-primary) 18%, transparent);
}
.remote-tag {
font-size: 0.65rem;
font-weight: 700;
color: var(--color-info);
background: color-mix(in srgb, var(--color-info) 12%, transparent);
border-radius: var(--radius-full);
padding: 1px 5px;
margin-left: 4px;
}
/* ── List controls (sort + filter) ──────────────────────────────────── */
.list-controls {
display: flex;
align-items: center;
gap: var(--space-3);
flex-wrap: wrap;
margin-bottom: var(--space-3);
}
.list-sort {
font-size: var(--text-xs);
border: 1px solid var(--color-border);
border-radius: var(--radius-sm);
background: var(--color-surface-raised);
color: var(--color-text);
padding: 3px 8px;
cursor: pointer;
}
.list-filter-remote {
display: flex;
align-items: center;
gap: var(--space-1);
font-size: var(--text-xs);
color: var(--color-text-muted);
cursor: pointer;
user-select: none;
}
.list-count {
font-size: var(--text-xs);
color: var(--color-text-muted);
margin-left: auto;
}
/* ── Help overlay ────────────────────────────────────────────────────── */
.help-overlay {


@@ -6,7 +6,7 @@ import { useAppConfigStore } from '../../stores/appConfig'
const store = useFineTuneStore()
const config = useAppConfigStore()
-const { step, inFlightJob, jobStatus, pairsCount, quotaRemaining } = storeToRefs(store)
+const { step, inFlightJob, jobStatus, pairsCount, quotaRemaining, pairs, pairsLoading } = storeToRefs(store)
const fileInput = ref<HTMLInputElement | null>(null)
const selectedFiles = ref<File[]>([])
@@ -45,6 +45,7 @@ async function checkLocalModel() {
onMounted(async () => {
store.startPolling()
await store.loadPairs()
if (store.step === 3 && !config.isCloud) await checkLocalModel()
})
onUnmounted(() => { store.stopPolling(); store.resetStep() })
@@ -99,6 +100,22 @@ onUnmounted(() => { store.stopPolling(); store.resetStep() })
</button>
<button @click="store.step = 3" class="btn-secondary">Skip Train</button>
</div>
<!-- Training pairs list -->
<div v-if="pairs.length > 0" class="pairs-list">
<h4>Training Pairs <span class="pairs-badge">{{ pairs.length }}</span></h4>
<p class="section-note">Review and remove any low-quality pairs before training.</p>
<div v-if="pairsLoading" class="pairs-loading">Loading</div>
<ul v-else class="pairs-items">
<li v-for="pair in pairs" :key="pair.index" class="pair-item">
<div class="pair-info">
<span class="pair-instruction">{{ pair.instruction }}</span>
<span class="pair-source">{{ pair.source_file }}</span>
</div>
<button class="pair-delete" @click="store.deletePair(pair.index)" title="Remove this pair"></button>
</li>
</ul>
</div>
</section>
<!-- Step 3: Train -->
@@ -160,4 +177,16 @@ onUnmounted(() => { store.stopPolling(); store.resetStep() })
.status-running { background: var(--color-warning-bg, #fef3c7); color: var(--color-warning-fg, #92400e); }
.status-ok { color: var(--color-success, #16a34a); }
.status-fail { color: var(--color-error, #dc2626); }
.pairs-list { margin-top: var(--space-6, 1.5rem); }
.pairs-list h4 { font-size: 0.95rem; font-weight: 600; margin: 0 0 var(--space-2, 0.5rem); display: flex; align-items: center; gap: 0.5rem; }
.pairs-badge { background: var(--color-primary, #2d5a27); color: #fff; font-size: 0.75rem; padding: 1px 7px; border-radius: var(--radius-full, 9999px); }
.pairs-loading { color: var(--color-text-muted); font-size: 0.875rem; padding: var(--space-2, 0.5rem) 0; }
.pairs-items { list-style: none; margin: 0; padding: 0; display: flex; flex-direction: column; gap: var(--space-2, 0.5rem); max-height: 280px; overflow-y: auto; }
.pair-item { display: flex; align-items: center; gap: var(--space-3, 0.75rem); padding: var(--space-2, 0.5rem) var(--space-3, 0.75rem); background: var(--color-surface-alt); border: 1px solid var(--color-border-light); border-radius: var(--radius-md); }
.pair-info { flex: 1; min-width: 0; display: flex; flex-direction: column; gap: 2px; }
.pair-instruction { font-size: 0.85rem; color: var(--color-text); white-space: nowrap; overflow: hidden; text-overflow: ellipsis; }
.pair-source { font-size: 0.75rem; color: var(--color-text-muted); }
.pair-delete { flex-shrink: 0; background: none; border: none; color: var(--color-error); cursor: pointer; font-size: 0.9rem; padding: 2px 4px; border-radius: var(--radius-sm); transition: background 150ms; }
.pair-delete:hover { background: var(--color-error); color: #fff; }
</style>
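The `store.deletePair(pair.index)` call in the template presumably wraps the `DELETE /api/settings/fine-tune/pairs/{index}` endpoint from the commit message, which removes a pair and rewrites the JSONL. A minimal sketch of that index-removal logic (the function name and exact semantics are assumptions for illustration, not the actual implementation):

```typescript
// Hypothetical sketch: drop the pair at `index` from a JSONL blob and
// return the rewritten text, mirroring DELETE /api/settings/fine-tune/pairs/{index}.
function removePairFromJsonl(jsonl: string, index: number): string {
  // One JSON object per non-empty line.
  const lines = jsonl.split('\n').filter((l) => l.trim().length > 0)
  if (index < 0 || index >= lines.length) return jsonl // out of range: no-op
  lines.splice(index, 1)
  return lines.map((l) => l + '\n').join('')
}
```

Subsequent `pair.index` values shift down after a delete, so a client would refetch the list rather than reuse stale indices.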


@@ -62,9 +62,16 @@
rows="3"
placeholder="How you write and communicate — used to shape cover letter voice."
/>
<button
v-if="config.tier !== 'free'"
class="btn-generate"
type="button"
@click="generateVoice"
:disabled="generatingVoice"
>{{ generatingVoice ? 'Generating…' : 'Generate ✦' }}</button>
</div>
-<div class="field-row">
+<div v-if="!config.isCloud" class="field-row">
<label class="field-label" for="profile-inference">Inference profile</label>
<select id="profile-inference" v-model="store.inference_profile" class="select-input">
<option value="remote">Remote</option>
@@ -210,6 +217,7 @@ const config = useAppConfigStore()
const newNdaCompany = ref('')
const generatingSummary = ref(false)
const generatingMissions = ref(false)
const generatingVoice = ref(false)
onMounted(() => { store.load() })
@@ -265,6 +273,15 @@ async function generateMissions() {
}))
}
}
async function generateVoice() {
generatingVoice.value = true
const { data, error } = await useApiFetch<{ voice?: string }>(
'/api/settings/profile/generate-voice', { method: 'POST' }
)
generatingVoice.value = false
if (!error && data?.voice) store.candidate_voice = data.voice
}
</script>
<style scoped>


@@ -15,7 +15,13 @@
<div class="empty-card">
<h3>Upload & Parse</h3>
<p>Upload a PDF, DOCX, or ODT and we'll extract your info automatically.</p>
-<input type="file" accept=".pdf,.docx,.odt" @change="handleUpload" ref="fileInput" />
+<input type="file" accept=".pdf,.docx,.odt" @change="handleFileSelect" ref="fileInput" />
<button
v-if="pendingFile"
@click="handleUpload"
:disabled="uploading"
style="margin-top:10px"
>{{ uploading ? 'Parsing…' : `Parse "${pendingFile.name}"` }}</button>
<p v-if="uploadError" class="error">{{ uploadError }}</p>
</div>
<!-- Blank -->
@@ -24,8 +30,8 @@
<p>Start with a blank form and fill in your details.</p>
<button @click="store.createBlank()" :disabled="store.loading">Start from Scratch</button>
</div>
-<!-- Wizard -->
-<div class="empty-card">
+<!-- Wizard self-hosted only -->
+<div v-if="!config.isCloud" class="empty-card">
<h3>Run Setup Wizard</h3>
<p>Walk through the onboarding wizard to set up your profile step by step.</p>
<RouterLink to="/setup">Open Setup Wizard </RouterLink>
@@ -35,6 +41,21 @@
<!-- Full form (when resume exists) -->
<template v-else-if="store.hasResume">
<!-- Replace resume via upload -->
<section class="form-section replace-section">
<h3>Replace Resume</h3>
<p class="section-note">Upload a new PDF, DOCX, or ODT to re-parse and overwrite the current data.</p>
<input type="file" accept=".pdf,.docx,.odt" @change="handleFileSelect" ref="replaceFileInput" />
<button
v-if="pendingFile"
@click="handleUpload"
:disabled="uploading"
class="btn-primary"
style="margin-top:10px"
>{{ uploading ? 'Parsing…' : `Parse "${pendingFile.name}"` }}</button>
<p v-if="uploadError" class="error">{{ uploadError }}</p>
</section>
<!-- Personal Information -->
<section class="form-section">
<h3>Personal Information</h3>
@@ -221,17 +242,22 @@ import { ref, onMounted } from 'vue'
import { storeToRefs } from 'pinia'
import { useResumeStore } from '../../stores/settings/resume'
import { useProfileStore } from '../../stores/settings/profile'
import { useAppConfigStore } from '../../stores/appConfig'
import { useApiFetch } from '../../composables/useApi'
const store = useResumeStore()
const profileStore = useProfileStore()
const config = useAppConfigStore()
const { loadError } = storeToRefs(store)
const showSelfId = ref(false)
const skillInput = ref('')
const domainInput = ref('')
const kwInput = ref('')
const uploadError = ref<string | null>(null)
const uploading = ref(false)
const pendingFile = ref<File | null>(null)
const fileInput = ref<HTMLInputElement | null>(null)
const replaceFileInput = ref<HTMLInputElement | null>(null)
onMounted(async () => {
await store.load()
@@ -246,9 +272,16 @@ onMounted(async () => {
}
})
-async function handleUpload(event: Event) {
+function handleFileSelect(event: Event) {
const file = (event.target as HTMLInputElement).files?.[0]
pendingFile.value = file ?? null
uploadError.value = null
}
async function handleUpload() {
const file = pendingFile.value
if (!file) return
uploading.value = true
uploadError.value = null
const formData = new FormData()
formData.append('file', file)
@@ -256,10 +289,14 @@ async function handleUpload(event: Event) {
'/api/settings/resume/upload',
{ method: 'POST', body: formData }
)
uploading.value = false
if (error || !data?.ok) {
uploadError.value = data?.error ?? (typeof error === 'string' ? error : (error?.kind === 'network' ? error.message : error?.detail ?? 'Upload failed'))
return
}
pendingFile.value = null
if (fileInput.value) fileInput.value.value = ''
if (replaceFileInput.value) replaceFileInput.value.value = ''
if (data.data) {
await store.load()
}
@@ -307,4 +344,5 @@ h3 { font-size: 1rem; font-weight: 600; margin-bottom: var(--space-3, 16px); col
.section-note { font-size: 0.8rem; color: var(--color-text-secondary, #94a3b8); margin-bottom: 16px; }
.toggle-btn { margin-left: 10px; padding: 2px 10px; background: transparent; border: 1px solid var(--color-border, rgba(255,255,255,0.15)); border-radius: 4px; color: var(--color-text-secondary, #94a3b8); cursor: pointer; font-size: 0.78rem; }
.loading { text-align: center; padding: var(--space-8, 48px); color: var(--color-text-secondary, #94a3b8); }
.replace-section { background: var(--color-surface-2, rgba(255,255,255,0.03)); border-radius: 8px; padding: var(--space-4, 24px); }
</style>
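The error branch added to `handleUpload` packs its fallback chain into one nested ternary. The same selection logic can be restated as a small pure helper for readability (a sketch only: the `ApiError` shape below is inferred from this diff, not taken from `useApi`'s actual types):

```typescript
// Sketch of the fallback chain used to pick an upload error message.
// The ApiError shape is an assumption inferred from the diff, not useApi's real type.
type ApiError = string | { kind?: string; message?: string; detail?: string } | null | undefined

function uploadErrorMessage(data: { error?: string } | null, error: ApiError): string {
  if (data?.error) return data.error                       // server-provided message wins
  if (typeof error === 'string') return error              // plain-string error
  if (error?.kind === 'network') return error.message ?? 'Upload failed' // transport failure
  return error?.detail ?? 'Upload failed'                  // structured error, else fallback
}
```

Extracting the chain also makes each precedence rule individually testable, which the inline ternary does not.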


@@ -69,7 +69,18 @@
{{ kw }} <button @click="store.removeTag('exclude_keywords', kw)">×</button>
</span>
</div>
-<input v-model="excludeInput" @keydown.enter.prevent="store.addTag('exclude_keywords', excludeInput); excludeInput = ''" placeholder="Add keyword, press Enter" />
+<div class="tag-input-row">
<input v-model="excludeInput" @keydown.enter.prevent="store.addTag('exclude_keywords', excludeInput); excludeInput = ''" placeholder="Add keyword, press Enter" />
<button @click="store.suggestExcludeKeywords()" class="btn-suggest">Suggest</button>
</div>
<div v-if="store.excludeSuggestions.length > 0" class="suggestions">
<span
v-for="s in store.excludeSuggestions"
:key="s"
class="suggestion-chip"
@click="store.acceptSuggestion('exclude', s)"
>+ {{ s }}</span>
</div>
</section>
<!-- Job Boards -->
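`store.acceptSuggestion('exclude', s)` is not shown in this diff; a plausible sketch of what accepting a chip does, written as a pure function (the name, signature, and duplicate-skipping behavior are all assumptions for illustration):

```typescript
// Hypothetical sketch of accepting a suggested keyword: move it from the
// suggestion list into the target tag list, skipping duplicates.
function acceptSuggestion(
  tags: string[],
  suggestions: string[],
  value: string
): { tags: string[]; suggestions: string[] } {
  const nextTags = tags.includes(value) ? tags : [...tags, value]
  return { tags: nextTags, suggestions: suggestions.filter((s) => s !== value) }
}
```

In the real store this would presumably mutate the Pinia state for `exclude_keywords` and `excludeSuggestions` in place rather than return new arrays.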


@@ -41,7 +41,8 @@ const config = useAppConfigStore()
const devOverride = computed(() => !!config.devTierOverride)
const gpuProfiles = ['single-gpu', 'dual-gpu']
const showSystem = computed(() => !config.isCloud)
const showData = computed(() => !config.isCloud)
const showFineTune = computed(() => {
if (config.isCloud) return config.tier === 'premium'
return gpuProfiles.includes(config.inferenceProfile)
@@ -65,7 +66,7 @@ const allGroups = [
]},
{ label: 'Account', items: [
{ key: 'license', path: '/settings/license', label: 'License', show: true },
-{ key: 'data', path: '/settings/data', label: 'Data', show: true },
+{ key: 'data', path: '/settings/data', label: 'Data', show: showData },
{ key: 'privacy', path: '/settings/privacy', label: 'Privacy', show: true },
]},
{ label: 'Dev', items: [
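The gating in this hunk (Fine-Tune visible on premium cloud accounts or GPU-profile self-hosted installs; Data settings hidden on cloud) can be restated as plain functions, which makes the rules easy to check in isolation. The standalone names below are illustrative, not part of the codebase, which keeps this logic inside Vue `computed()` wrappers:

```typescript
// Illustrative restatement of the computed() visibility rules above.
const GPU_PROFILES = ['single-gpu', 'dual-gpu']

function fineTuneVisible(isCloud: boolean, tier: string, inferenceProfile: string): boolean {
  if (isCloud) return tier === 'premium'          // cloud: premium tier only
  return GPU_PROFILES.includes(inferenceProfile)  // self-hosted: needs a GPU profile
}

function dataVisible(isCloud: boolean): boolean {
  return !isCloud // Data settings are self-hosted only
}
```

Wiring these through `show:` entries in `allGroups` keeps the nav declarative: each item either carries a literal `true` or a reactive computed, so the sidebar re-renders when the config store changes.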