Compare commits


61 commits
v0.8.6 ... main

Author SHA1 Message Date
ef8b857bf9 chore: remove Streamlit app service from compose.yml (#104)
Some checks failed
CI / Backend (Python) (push) Failing after 2m39s
CI / Frontend (Vue) (push) Failing after 20s
Mirror / mirror (push) Failing after 8s
Vue+FastAPI is now the only frontend. Streamlit is deprecated.
Container was stopped and removed.
2026-04-21 11:14:55 -07:00
4388a2d476 feat: add CF_APP_NAME=peregrine to dev compose for cf-orch pipeline attribution
2026-04-21 10:58:52 -07:00
f10c974fbb chore: release v0.9.0 — messaging tab, demo experience, references, resume sync
2026-04-21 10:17:01 -07:00
5f92c52270 Merge pull request 'feat: public demo experience (Vue SPA with demo mode)' (#103) from feature/demo-experience into main
2026-04-21 10:15:02 -07:00
53c1b33b40 feat(demo): add UX designer resume, ATS optimizer snapshots, and company research briefs
- Seed resumes table with a full UX designer base resume (Alex Rivera persona)
- Add ATS gap reports and optimized resumes for Spotify (job 1), Duolingo (job 2), NPR (job 3)
  - Each gap report highlights role-specific keyword opportunities (audio UX, gamification, public media)
  - Optimized resumes tailor the base resume framing to each company's emphasis
- Seed company_research for Asana (phone_screen), Notion (interviewing), Figma (hired)
  - Includes company_brief, ceo_brief, talking_points, tech_brief, funding_brief,
    competitors_brief, accessibility_brief for each
- Update demo/config/user.yaml career_summary to match UX designer persona
  - Fixes mismatch between "software engineer" summary and UX/design job seeds
  - Adds music and education mission preference notes
2026-04-21 10:14:37 -07:00
1c980cca51 docs: add screenshots and animated GIF to README
Six new screenshots (dashboard, card-stack review with approve/reject tint,
apply workspace, interviews kanban) plus an animated GIF of the swipe-review
flow. Adds demo link above the fold.
2026-04-21 10:14:37 -07:00
d02391d960 chore: update compose.demo.yml for Vue/FastAPI architecture
Replace the legacy Streamlit `app` service with a FastAPI `api` service
running dev_api:app on port 8601. The Vue SPA (nginx) proxies /api/ →
api:8601 internally so no host port is needed on the api container.

Move web service from port 8507 → 8504 to match the documented demo URL
(demo.circuitforge.tech/peregrine via Caddy → host:8504).
2026-04-21 10:14:37 -07:00
5bca5aaa20 fix: DemoBanner button contrast — use semantic surface token instead of hardcoded white
--color-primary in dark mode is a medium-light green (#6ab870); white on green
yields ~2.2:1 contrast (fails WCAG AA 4.5:1 minimum). Using --color-surface
(dark navy in dark mode, near-white in light mode) ensures the text always
contrasts strongly with the primary background regardless of theme.

Also tints banner background with 8% primary via color-mix() so it reads as
visually distinct from the page surface without being loud.
2026-04-21 10:14:37 -07:00
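The WCAG contrast math behind this fix is easy to check directly. A minimal Python sketch (not project code) of the WCAG 2.x relative-luminance and contrast-ratio formulas:

```python
def relative_luminance(hex_color: str) -> float:
    # WCAG 2.x relative luminance of an sRGB color like "#6ab870".
    def channel(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    # Ratio of lighter to darker luminance, each offset by 0.05 per WCAG.
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)
```

White on `#6ab870` lands well below the 4.5:1 AA threshold for normal text, which is why a theme-aware surface token is needed here.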
230cfb074c fix(demo): smoke-test fixes — card reset, toast error type, apply hint, text contrast
- JobCardStack: expose resetCard() to restore card after a blocked action
- JobReviewView: call resetCard() when approve/reject returns false; prevents
  card going blank after demo guard blocks the action
- useApi: add 'demo-blocked' to ApiError union; return truthy error from the
  403 interceptor so store callers bail early (no optimistic UI update)
- ApplyView: add HintChip to desktop split-pane layout (was mobile-only)
- HintChip: fix text color — --app-primary-light is near-white in light theme,
  causing invisible text; switch to --color-text for cross-theme contrast
- vite.config.ts: support VITE_API_TARGET env var for dev proxy override
- migrations/006: add date_posted, hired_feedback columns and references_ table
  (columns existed in live DB but were missing from migration history)
- DemoBanner: commit component and test (were untracked)
2026-04-21 10:14:37 -07:00
302033598c feat(demo): switch demo data volume to tmpfs, wire DEMO_SEED_FILE 2026-04-21 10:14:37 -07:00
ad26f02d5f feat(demo): add committed seed SQL and startup loader 2026-04-21 10:14:36 -07:00
03206aa34c feat(demo): add IS_DEMO write-block guard on mutating endpoints 2026-04-21 10:14:15 -07:00
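A write-block guard like the one this commit describes can be sketched as a plain function (hypothetical names; the repo presumably wires the equivalent logic into FastAPI middleware or a dependency):

```python
import os

MUTATING_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def demo_guard(method: str, path: str):
    # When IS_DEMO is enabled, reject any mutating request with a 403
    # payload the frontend can turn into a toast; return None to allow.
    if os.environ.get("IS_DEMO") == "true" and method in MUTATING_METHODS:
        return 403, {"error": "demo-blocked",
                     "detail": f"Writes are disabled in the demo ({path})."}
    return None
```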
55f464080f feat(demo): wire DemoBanner, WelcomeModal, HintChip into app + views 2026-04-21 10:14:15 -07:00
d96cdfa89b feat(demo): add HintChip component with per-view localStorage dismiss 2026-04-21 10:14:15 -07:00
a16d562e06 feat(demo): add WelcomeModal with localStorage gate 2026-04-21 10:14:15 -07:00
63334f5278 feat: messaging tab — messages, templates, draft reply (#74)
Merges feature/messaging-tab into main.

Features:
- Migration 008: messages + message_templates tables with 4 built-in templates
- API endpoints: CRUD for messages and templates, draft-reply (BYOK tier gate), approve
- PUT /api/messages/{id} for draft body persistence
- Pinia store (messaging.ts) with full action set
- MessageLogModal: log calls and in-person meetings with backdated timestamps
- MessageTemplateModal: apply (with token substitution + highlight), create, edit
- MessagingView: two-panel job list + UNION timeline (contacts + messages), Osprey easter egg
- Router: /messages route, /contacts redirect, nav renamed Messages
- Integration test suite (8 tests, 766 total passing)
- CRITICAL fix: _get_effective_tier() no longer trusts X-CF-Tier client header
2026-04-20 20:26:41 -07:00
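The `{{token}}` substitution mentioned for templates reduces to a single regex pass; an illustrative sketch (`apply_template` is not the repo's actual helper):

```python
import re

def apply_template(body: str, context: dict) -> str:
    # Replace {{token}} placeholders with values from context.
    # Unknown tokens are left intact so the user can spot and fill them.
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: context.get(m.group(1), m.group(0)),
                  body)
```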
b1e92b0e52 feat(docker): add /peregrine/ base-path routing in nginx
Adds location blocks for /peregrine/assets/ and /peregrine/ so the SPA
works correctly when accessed via a Caddy prefix that does not strip the
path (e.g. direct host access without reverse proxy stripping).
2026-04-20 20:26:31 -07:00
91e2faf5d0 fix: tier bypass, draft body persistence, canDraftLlm cleanup, limit cap
- CRITICAL: Remove X-CF-Tier header trust from _get_effective_tier; use
  Heimdall in cloud mode and APP_TIER env var in single-tenant only
- HIGH: Add update_message_body helper + PUT /api/messages/{id} endpoint;
  updateMessageBody store action; approveDraft now persists edits to DB
  before calling approve so history always shows the final approved text
- Cleanup: Remove dead canDraftLlm ref, checkLlmAvailable function, and
  v-else-if Enable LLM drafts link; show Draft reply button unconditionally
- MEDIUM: Cap GET /api/messages limit param with Query(ge=1, le=1000)
- Test: Update test_draft_without_llm_returns_402 to patch effective_tier
  instead of sending X-CF-Tier header
2026-04-20 17:19:17 -07:00
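The server-side tier resolution described in the first bullet might look like this sketch (`effective_tier` and the Heimdall argument are assumed names, not the repo's exact signature):

```python
import os

def effective_tier(heimdall_tier, cloud_mode: bool) -> str:
    # Never consult a client-supplied X-CF-Tier header. In cloud mode
    # the tier comes from the auth layer (Heimdall); in single-tenant
    # mode it comes from the APP_TIER environment variable.
    if cloud_mode:
        return heimdall_tier or "free"
    return os.environ.get("APP_TIER", "free")
```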
6812e3f9ef feat: /messages route + /contacts redirect + nav rename (#74) 2026-04-20 13:04:27 -07:00
899cd3604b feat: MessagingView two-panel layout + draft approval + Osprey easter egg (#74) 2026-04-20 13:02:24 -07:00
aa09b20e7e feat: MessageTemplateModal component (apply/create/edit modes) (#74) 2026-04-20 12:58:00 -07:00
b77ec81cc6 fix: thread logged_at through message stack; Esc handler and localNow fixes
- scripts/messaging.py: add logged_at param to create_message; use provided value or fall back to _now_utc()
- dev-api.py: add logged_at: Optional[str] = None to MessageCreateBody
- web/src/stores/messaging.ts: remove logged_at from Omit, add as optional intersection so callers can pass it through
- web/src/components/MessageLogModal.vue: pass logged_at in handleSubmit payload; move @keydown.esc from backdrop to modal-dialog (which holds focus); compute localNow fresh inside watch so it reflects actual open time
2026-04-20 12:55:41 -07:00
8df3297ab6 feat: MessageLogModal component (#74) 2026-04-20 12:52:19 -07:00
222eb4a088 fix: messaging store error handling and Content-Type headers 2026-04-20 12:50:51 -07:00
47a40c9e36 feat: messaging Pinia store (#74) 2026-04-20 12:48:15 -07:00
dfcc264aba test: use db.add_contact helper in integration test fixture
Replace raw sqlite3 INSERT in test_draft_without_llm_returns_402 with
add_contact() so the fixture stays in sync with schema changes
automatically.
2026-04-20 12:45:47 -07:00
d3dfd015bf feat(cloud): add CF_APP_NAME=peregrine for coordinator pipeline attribution
Allocations from peregrine cloud containers were showing pipeline=null
in cf-orch analytics. Adding CF_APP_NAME to both app and api service
blocks so LLMRouter passes it as the pipeline tag on each allocation.
2026-04-20 12:43:05 -07:00
e11750e0e6 test: messaging HTTP integration tests (#74) 2026-04-20 12:41:45 -07:00
715a8aa33e feat: LLM reply draft, tiers BYOK gate, and messaging API endpoints (#74) 2026-04-20 12:36:16 -07:00
091834f1ae test: add missing update_template KeyError test (#74) 2026-04-20 12:32:35 -07:00
ea961d6da9 feat: messaging DB helpers + unit tests (#74) 2026-04-20 11:55:43 -07:00
9eca0c21ab feat: migration 008 — messages + message_templates tables (#74) 2026-04-20 11:51:59 -07:00
5020144f8d fix: update interview + survey tests for hired_feedback column and async analyze endpoint 2026-04-20 11:48:22 -07:00
9101e716ba fix: async survey/analyze via task queue (#107)
Move POST /api/jobs/:id/survey/analyze off the FastAPI worker thread
by routing it through the LLM task queue (same pattern as cover_letter,
company_research, resume_optimize).

- Extract prompt builders + run_survey_analyze() to scripts/survey_assistant.py
- Add survey_analyze to LLM_TASK_TYPES (task_scheduler.py) with 2.5 GB VRAM budget
  (text mode: phi3:mini; visual mode uses vision service's own VRAM pool)
- Add elif branch in task_runner._run_task; result stored as JSON in error col
- Replace sync endpoint body with submit_task(); add GET /survey/analyze/task poll
- Update survey.ts store: analyze() now fires task + polls at 3s interval;
  silently attaches to existing in-flight task when is_new=false
- SurveyView button label shows task stage while polling

Fixes load-test spike: ~22 greenlets blocking on LLM inference at 100 concurrent
users, causing 90s poll timeouts on cover_letter and research tasks.
2026-04-20 11:06:14 -07:00
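The submit-then-poll pattern this commit adopts can be sketched with an in-memory stand-in (`submit_task` and the `is_new` flag mirror the names above; everything else is illustrative):

```python
import uuid

_TASKS: dict = {}

def submit_task(task_type: str, payload: dict) -> dict:
    # Attach to an existing in-flight task for the same payload instead
    # of queueing a duplicate; is_new=False tells the client to poll
    # the existing task rather than start another LLM run.
    for tid, t in _TASKS.items():
        if (t["type"], t["payload"], t["status"]) == (task_type, payload, "running"):
            return {"task_id": tid, "is_new": False}
    tid = str(uuid.uuid4())
    _TASKS[tid] = {"type": task_type, "payload": payload,
                   "status": "running", "result": None}
    return {"task_id": tid, "is_new": True}
```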
acc04b04eb docs(config): add cf_text and cf_voice trunk service backends to llm.yaml.example
Documents the cf-orch allocation pattern for cf-text and cf-voice as
openai_compat backends with a cf_orch block. Products enable these when
CF_ORCH_URL is set; the router allocates via the broker and calls the
managed service directly. No catalog or leaf details here — those live
in cf-orch node profiles (The Orchard trunk/leaf split).
2026-04-20 10:56:22 -07:00
280f4271a5 feat: add Plausible analytics to Vue SPA and docs
2026-04-16 21:15:55 -07:00
1c9bfc9fb6 test: integration tests for resume library<->profile sync endpoints 2026-04-16 14:29:00 -07:00
22bc57242e feat: ResumeProfileView — career_summary, education, achievements sections and sync status label 2026-04-16 14:22:36 -07:00
9f984c22cb feat: resume store — add career_summary, education, achievements, lastSynced state
Extends the resume Pinia store with EducationEntry interface, four new
refs (career_summary, education, achievements, lastSynced), education
CRUD helpers, and load/save wiring for all new fields. lastSynced is
set to current ISO timestamp on successful save.
2026-04-16 14:15:07 -07:00
fe3e4ff539 feat: ResumesView — Apply to profile button, Active profile badge, sync notice, unsaved-changes guard 2026-04-16 14:13:44 -07:00
43599834d5 feat: ResumeSyncConfirmModal — before/after confirmation for profile sync 2026-04-16 14:11:37 -07:00
fe5371613e feat: extend PUT /api/settings/resume to sync content back to default library entry
When a default_resume_id is set in user.yaml, saving the resume profile
now calls profile_to_library and update_resume_content to keep the
library entry in sync. Returns {"ok": true, "synced_library_entry_id": <int|null>}.
2026-04-16 14:09:56 -07:00
369bf68399 feat: POST /api/resumes/{id}/apply-to-profile — library→profile sync with auto-backup 2026-04-16 14:06:52 -07:00
eef6c33d94 feat: add EducationEntry model, extend ResumePayload with education/achievements/career_summary
- Add EducationEntry Pydantic model (institution, degree, field, start_date, end_date)
- Extend ResumePayload with career_summary str, education List[EducationEntry], achievements List[str]
- Rewrite _normalize_experience to pass through Vue-native format (period/responsibilities keys) unchanged; AIHawk format (key_responsibilities/employment_period) still converted
- Extend GET /api/settings/resume to fall back to user.yaml for legacy career_summary when resume YAML is missing or the field is empty
2026-04-16 14:02:59 -07:00
53bfe6b326 feat: add update_resume_synced_at and update_resume_content db helpers
Expose synced_at in _resume_as_dict (with safe fallback for pre-migration
DBs), and add two new helpers: update_resume_synced_at (library→profile
direction) and update_resume_content (profile→library direction, updates
text/struct_json/word_count/synced_at/updated_at).
2026-04-16 13:14:10 -07:00
cd787a2509 fix: period split in profile_to_library handles ISO dates with hyphens
Fixes a bug where ISO-formatted dates (e.g. '2023-01 – 2025-03') in the
period field were split incorrectly. The old code replaced the en-dash with
a hyphen first, then split on the first hyphen, causing dates like '2023-01'
to be split into '2023' and '01' instead of the expected start/end pair.

The fix splits on the dash/dash separator *before* normalizing to plain
hyphens, ensuring round-trip conversion of dates with embedded hyphens.

Adds two regression tests:
- test_profile_to_library_period_split_iso_dates: verifies en-dash separation
- test_profile_to_library_period_split_em_dash: verifies em-dash separation
2026-04-16 13:11:22 -07:00
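The ordering fix described above — split on the dash separator before normalizing dashes — can be sketched as follows (hypothetical `split_period`, not the repo's exact function):

```python
import re

def split_period(period: str):
    # Split on a spaced en dash, em dash, or hyphen BEFORE any dash
    # normalization, so ISO dates like "2023-01" keep their embedded
    # hyphens intact.
    parts = re.split(r"\s+[–—-]\s+", period, maxsplit=1)
    if len(parts) == 2:
        return parts[0].strip(), parts[1].strip()
    return period.strip(), ""
```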
048a5f4cc3 feat: resume_sync.py — library↔profile transform functions with tests
Pure transform functions (no LLM, no DB) bridging the two resume
representations: library struct_json ↔ ResumePayload content fields.
Exports library_to_profile_content, profile_to_library,
make_auto_backup_name, blank_fields_on_import. 22 tests, all passing.
2026-04-16 13:04:56 -07:00
fe4947a72f feat: add synced_at column to resumes table (migration 007) 2026-04-16 12:58:00 -07:00
4e11cf3cfa fix: sanitize invalid JSON escape sequences from LLM output in resume optimizer
LLMs occasionally emit backslash sequences that are valid regex but not valid
JSON (e.g. \s, \d, \p). This caused extract_jd_signals() to fall through to
the exception handler, leaving llm_signals empty. With no LLM signals, the
optimizer fell back to TF-IDF only — which is more conservative and can
legitimately return zero gaps, making the UI appear to say the resume is fine.

Fix: strip bare backslashes not followed by a recognised JSON escape character
(`"`, `\`, `/`, `b`, `f`, `n`, `r`, `t`, `u`) before parsing. Preserves \n, \", etc.

Reproduces: cover letter generation concurrent with gap analysis raises the
probability of a slightly malformed LLM response due to model load.
2026-04-16 11:11:50 -07:00
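The sanitizer described here can be expressed as a single regex substitution (a sketch; the repo's actual function name may differ):

```python
import re

def sanitize_json_escapes(raw: str) -> str:
    # Drop any backslash NOT followed by a valid JSON escape character
    # (" \ / b f n r t u), so regex-style sequences like \s or \d from
    # the LLM don't break json.loads(). \n, \", \uXXXX are preserved.
    return re.sub(r'\\(?!["\\/bfnrtu])', "", raw)
```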
a4a2216c2f ci: add GitHub Actions CI for public credibility badge
Lean self-contained workflow — no Forgejo-specific secrets.
circuitforge-core installs from Forgejo git (public repo).
Forgejo (.forgejo/workflows/ci.yml) remains the canonical CI.

Backend: ruff + pytest | Frontend: vue-tsc + vitest
2026-04-15 20:20:13 -07:00
797032bd97 ci: remove stale .github/workflows/ci.yml
The .forgejo/workflows/ci.yml is the canonical CI definition.
The old .github/workflows/ci.yml was being mirrored to GitHub via
--mirror push, triggering GitHub Actions runs that fail because
FORGEJO_TOKEN and other Forgejo-specific secrets are not set there.

GitHub Actions does not process .forgejo/workflows/ so removing
this file stops the spurious GitHub runs. ISSUE_TEMPLATE and
pull_request_template.md are preserved in .github/.
2026-04-15 20:11:07 -07:00
fb8b464dd0 fix: use resume_parser extractors in import endpoint to clean CID glyphs
The import endpoint was doing its own inline PDF/DOCX/ODT extraction
without calling _clean_cid(). Bullet CIDs (127, 149, 183) and other
ATS-reembedded font artifacts were stored raw, surfacing as (cid:127)
in the resume library. Switch to extract_text_from_pdf/docx/odt from
resume_parser.py which already handle two-column layouts and CID cleaning.
2026-04-15 12:23:12 -07:00
ec521e14c5 fix: sweep user DBs on cloud startup for pending migrations 2026-04-15 12:18:23 -07:00
a302049f72 fix: add date_posted migration + cloud startup sweep
date_posted column was added to db.py CREATE TABLE but had no migration
file, so existing user DBs were missing it. The list_jobs endpoint queries
this column, causing 500 errors and empty Apply/Review queues for all
existing cloud users while job_counts (which doesn't touch date_posted)
continued to work — making the home page show correct counts but tabs show
empty data.

Fixes:
- migrations/006_date_posted.sql: ALTER TABLE to add date_posted to existing DBs
- dev_api.py lifespan: on startup in cloud mode, sweep all user DBs in
  CLOUD_DATA_ROOT and apply pending migrations — ensures schema changes land
  for every user on each deploy, not only on their first post-deploy request
2026-04-15 12:17:55 -07:00
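The startup sweep might look like this sketch (the per-user path layout and the `apply_pending` callable are assumptions, not the repo's actual API):

```python
from pathlib import Path
import sqlite3

def sweep_user_dbs(cloud_data_root: str, apply_pending) -> int:
    # Walk every per-user SQLite DB under the data root and apply any
    # pending migrations, so schema changes land on deploy rather than
    # on each user's first post-deploy request.
    swept = 0
    for db_path in sorted(Path(cloud_data_root).glob("*/peregrine.db")):
        conn = sqlite3.connect(db_path)
        try:
            apply_pending(conn)
        finally:
            conn.close()
        swept += 1
    return swept
```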
03b9e52301 feat: references tracker and recommendation letter system (#96)
- references_ + job_references tables with CREATE + migration
- Full CRUD: GET/POST /api/references, PATCH/DELETE /api/references/:id
- Link/unlink to jobs: POST/DELETE /api/references/:id/link-job/:job_id
- GET /api/references/for-job/:job_id — linked refs with prep/letter drafts
- POST /api/references/:id/prep-email — LLM drafts heads-up email to send
  reference before interview; persisted to job_references.prep_email
- POST /api/references/:id/rec-letter — LLM drafts recommendation letter
  reference can edit and send on their letterhead (Paid/BYOK tier)
- ReferencesView.vue: add/edit/delete form, tag system (technical/managerial/
  character/academic), inline confirm-before-delete
- Route /references + IdentificationIcon nav link
2026-04-15 08:42:06 -07:00
0e4fce44c4 feat: shadow listing detector, hired feedback widget, contacts manager
Shadow listing detector (#95):
- Capture date_posted from JobSpy in discover.py + insert_job()
- Add date_posted migration to _MIGRATIONS
- _shadow_score() heuristic: 'shadow' (≥30 days stale), 'stale' (≥14 days)
- list_jobs() computes shadow_score per listing
- JobCard.vue: 'Ghost post' and 'Stale' badges with tooltip

Post-hire feedback widget (#91):
- Add hired_feedback migration to _MIGRATIONS
- POST /api/jobs/:id/hired-feedback endpoint
- InterviewCard.vue: optional widget on hired cards with factor
  checkboxes + freetext; dismissible; shows saved state
- PipelineJob interface extended with hired_feedback field

Contacts manager (#73):
- GET /api/contacts endpoint with job join, direction/search filters
- New ContactsView.vue: searchable table, inbound/outbound filter,
  signal chip column, job link
- Route /contacts added; Contacts nav link (UsersIcon) in AppNav

Also: add git to Dockerfile apt-get for circuitforge-core editable install
2026-04-15 08:34:12 -07:00
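The `_shadow_score()` heuristic described under the shadow listing detector reduces to a date diff; a sketch under an assumed `YYYY-MM-DD` date format:

```python
from datetime import date, datetime

def shadow_score(date_posted, today=None):
    # Per the thresholds above: >=30 days stale -> 'shadow',
    # >=14 days -> 'stale', otherwise unflagged.
    if not date_posted:
        return None
    today = today or date.today()
    age = (today - datetime.strptime(date_posted, "%Y-%m-%d").date()).days
    if age >= 30:
        return "shadow"
    if age >= 14:
        return "stale"
    return None
```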
6599bc6952 chore: ignore runtime data artifacts
Add gitignore entries for:
- data/.feedback_ratelimit.json (rate limit state)
- data/email_score.jsonl.bad-labels (debug artifact from label review)
- data/config/ (runtime config directory)
2026-04-15 08:16:14 -07:00
8e36863a49 feat: Interview prep Q&A, cf-orch hardware profile, a11y fixes, dark theme
Backend
- dev-api.py: Q&A suggest endpoint, Log Contact, cf-orch node detection in wizard
  hardware step, canonical search_profiles format (profiles:[...]), connections
  settings endpoints, Resume Library endpoints
- db_migrate.py: migrations 002/003/004 — ATS columns, resume review, final
  resume struct
- discover.py: _normalize_profiles() for legacy wizard YAML format compat
- resume_optimizer.py: section-by-section resume parsing + scoring
- task_runner.py: Q&A and contact-log task types
- company_research.py: accessibility brief column wiring
- generate_cover_letter.py: restore _candidate module-level binding

Frontend
- InterviewPrepView.vue: Q&A chat tab, Log Contact form, MarkdownView rendering
- InterviewCard.vue: new reusable card component for interviews kanban
- InterviewsView.vue: rejected analytics section with stage breakdown chips
- ResumeProfileView.vue: sync with new resume store shape
- SearchPrefsView.vue: cf-orch toggle, profile format migration
- SystemSettingsView.vue: connections settings wiring
- ConnectionsSettingsView.vue: new view for integration connections
- MarkdownView.vue: new component for safe markdown rendering
- ApplyWorkspace.vue: a11y — h1→h2 demotion, aria-expanded on Q&A toggle,
  confirmation dialog on Reject action (#98 #99 #100)
- peregrine.css: explicit [data-theme="dark"] token block for light-OS users (#101),
  :focus-visible outline (#97)
- wizard.css: cf-orch hardware step styles
- WizardHardwareStep.vue: cf-orch node display, profile selection with orch option
- WizardLayout.vue: hardware step wiring

Infra
- compose.yml / compose.cloud.yml: cf-orch agent sidecar, llm.cloud.yaml mount
- Dockerfile.cfcore: cf-core editable install in image build
- HANDOFF-xanderland.md: Podman/systemd setup guide for beta tester
- podman-standalone.sh: standalone Podman run script

Tests
- test_dev_api_settings.py: remove stale worktree path bootstrap (credential_store
  now in main repo); fix job_boards fixture to use non-empty list
- test_wizard_api.py: update profiles assertion to superset check (cf-orch added);
  update step6 assertion to canonical profiles[].titles format
2026-04-14 17:01:18 -07:00
91943022a8 docs: add docs badge linking to docs.circuitforge.tech/peregrine in README
2026-04-14 08:19:57 -07:00
7467fb5416 feat: wire cf_text as openai_compat backend in llm.yaml
Adds the cf-text inference service (circuitforge-core) to the LLM
fallback chain as the first option for cover letter generation.
cf-text now exposes /v1/chat/completions (added in cf-core 69a338b),
making it a drop-in openai_compat backend at port 8006.

CF_TEXT_MODEL and CF_TEXT_PORT added to .env.example. Closes #75.
2026-04-12 17:10:41 -07:00
278413b073 feat: load mission alignment domains from config/mission_domains.yaml
Removes hardcoded _MISSION_SIGNALS and _MISSION_DEFAULTS dicts from
generate_cover_letter.py. Domains and signals are now defined in
config/mission_domains.yaml, which ships with the original 5 domains
(music, animal_welfare, education, social_impact, health) plus 3 new
ones (privacy, accessibility, open_source).

Any key in user.yaml mission_preferences not present in the YAML is
treated as a user-defined domain with no signal detection — custom
note only. Closes #78.
2026-04-12 16:46:13 -07:00
116 changed files with 10984 additions and 803 deletions


@@ -5,6 +5,7 @@
STREAMLIT_PORT=8502
OLLAMA_PORT=11434
VLLM_PORT=8000
CF_TEXT_PORT=8006
SEARXNG_PORT=8888
VISION_PORT=8002
VISION_MODEL=vikhyatk/moondream2
@@ -15,6 +16,7 @@ OLLAMA_MODELS_DIR=~/models/ollama
VLLM_MODELS_DIR=~/models/vllm # override with full path to your model dir
VLLM_MODEL=Ouro-1.4B # cover letters — fast 1.4B model
VLLM_RESEARCH_MODEL=Ouro-2.6B-Thinking # research — reasoning 2.6B model; restart vllm to switch
CF_TEXT_MODEL=/Library/Assets/LLM/qwen2.5-3b-instruct-q4_k_m.gguf # cf-text GGUF model; set to "mock" to disable
VLLM_MAX_MODEL_LEN=4096 # increase to 8192 for Thinking models with long CoT
VLLM_GPU_MEM_UTIL=0.75 # lower to 0.6 if sharing GPU with other services
OLLAMA_DEFAULT_MODEL=llama3.2:3b
@@ -45,6 +47,19 @@ FORGEJO_API_URL=https://git.opensourcesolarpunk.com/api/v1
CF_LICENSE_KEY=
CF_ORCH_URL=https://orch.circuitforge.tech
# cf-orch agent — GPU profiles only (single-gpu, dual-gpu-*)
# The agent registers this node with the cf-orch coordinator and reports VRAM stats.
# CF_ORCH_COORDINATOR_URL: coordinator the agent registers with
# CF_ORCH_NODE_ID: name shown on the dashboard (default: peregrine)
# CF_ORCH_AGENT_PORT: host port for the agent HTTP server (default: 7701)
# CF_ORCH_ADVERTISE_HOST: IP the coordinator uses to reach back to this agent.
# Defaults to 127.0.0.1 (same-host coordinator).
# Set to your host LAN IP for a remote coordinator.
CF_ORCH_COORDINATOR_URL=http://localhost:7700
CF_ORCH_NODE_ID=peregrine
CF_ORCH_AGENT_PORT=7701
#CF_ORCH_ADVERTISE_HOST=10.1.10.71
# Cloud multi-tenancy (compose.cloud.yml only — do not set for local installs)
CLOUD_MODE=false
CLOUD_DATA_ROOT=/devl/menagerie-data


@@ -1,3 +1,7 @@
# Peregrine CI — runs on GitHub mirror for public credibility badge.
# Forgejo (.forgejo/workflows/ci.yml) is the canonical CI — keep these in sync.
# No Forgejo-specific secrets used here; circuitforge-core is public on Forgejo.
name: CI
on:
@@ -7,29 +11,46 @@ on:
branches: [main]
jobs:
test:
backend:
name: Backend (Python)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install system dependencies
run: sudo apt-get update -q && sudo apt-get install -y libsqlcipher-dev
- name: Set up Python
uses: actions/setup-python@v5
- uses: actions/setup-python@v5
with:
python-version: "3.11"
python-version: '3.12'
cache: pip
- name: Configure git credentials for Forgejo
env:
FORGEJO_TOKEN: ${{ secrets.FORGEJO_TOKEN }}
run: |
git config --global url."https://oauth2:${FORGEJO_TOKEN}@git.opensourcesolarpunk.com/".insteadOf "https://git.opensourcesolarpunk.com/"
- name: Install dependencies
run: pip install -r requirements.txt
- name: Run tests
- name: Lint
run: ruff check .
- name: Test
run: pytest tests/ -v --tb=short
frontend:
name: Frontend (Vue)
runs-on: ubuntu-latest
defaults:
run:
working-directory: web
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: npm
cache-dependency-path: web/package-lock.json
- name: Install dependencies
run: npm ci
- name: Type check
run: npx vue-tsc --noEmit
- name: Test
run: npm run test

.gitignore (vendored, 3 changed lines)

@@ -40,8 +40,11 @@ pytest-output.txt
docs/superpowers/
data/email_score.jsonl
data/email_score.jsonl.bad-labels
data/email_label_queue.jsonl
data/email_compare_sample.jsonl
data/.feedback_ratelimit.json
data/config/
config/label_tool.yaml
config/server.yaml


@@ -9,6 +9,62 @@ Format follows [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
---
## [0.9.0] — 2026-04-20
### Added
- **Messaging tab** (#74) — per-job communication timeline replacing `/contacts`.
Unified view of IMAP emails (`job_contacts`) and manually logged entries (`messages`).
Log calls and in-person notes with timestamp. Message template library with 4 built-in
templates (follow-up, thank-you, accommodation request, withdrawal) and user-created
templates with `{{token}}` substitution. LLM draft reply for inbound emails (BYOK-unlockable,
BSL 1.1). Draft approval flow with inline editing and one-click clipboard copy. Osprey
IVR stub button (Phase 2 placeholder with easter egg). `migrations/008_messaging.sql`.
- **Public demo experience** (#103) — full read-only demo mode at `demo.circuitforge.tech/peregrine`.
`IS_DEMO=true` write-blocks all mutating API endpoints with a toast notification.
Ephemeral seed data via tmpfs + `demo/seed.sql` (resets on container start). WelcomeModal
on first visit (localStorage-gated). Per-view HintChips guiding new users through the
job search flow (localStorage-dismissed). DemoBanner with accessible CTA buttons
(WCAG-compliant contrast in light and dark themes). `migrations/006_missing_columns.sql`.
- **References tracker and recommendation letter system** (#96) — track professional
references and generate LLM-drafted recommendation request letters.
- **Shadow listing detector** — flags duplicate or aggregator-reposted job listings.
- **Hired feedback widget** — capture post-hire notes and retrospective feedback on jobs.
- **Interview prep Q&A** — LLM-generated practice questions for the selected job.
- **Resume library ↔ profile sync** — `POST /api/resumes/{id}/apply-to-profile` pushes
a library resume into the active profile; `PUT /api/settings/resume` syncs edits back
to the default library entry. `ResumeSyncConfirmModal` shows a before/after diff.
`ResumeProfileView` extended with career summary, education, and achievements sections.
`migrations/007_resume_sync.sql` adds `synced_at` to `resumes`.
- **Plausible analytics** — lightweight privacy-preserving analytics in Vue SPA and docs.
- **cf_text / cf_voice LLM backends** — wire trunk service backends in `llm.yaml`.
- **Mission alignment domains** — load preferred company domains from
`config/mission_domains.yaml` rather than hardcoded values.
- **GitHub Actions CI** — workflow for public credibility badge (`ci.yml`).
- **`CF_APP_NAME` cloud annotation** — coordinator pipeline attribution for multi-product
cloud deployments.
### Changed
- `/contacts` route now redirects to `/messages`; the "Contacts" nav item is renamed
"Messages". `ContactsView.vue` is preserved for reference; the router now points to `MessagingView`.
- Survey `/analyze` endpoint is now fully async via the task queue (no blocking LLM call
on the request thread).
- nginx config adds `/peregrine/` base-path routing for subdirectory deployments.
- `compose.demo.yml` updated for Vue/FastAPI architecture with tmpfs demo volume.
### Fixed
- Tier bypass and draft body persistence after page navigation.
- `canDraftLlm` cleanup and message list `limit` cap.
- DemoBanner button contrast — semantic surface token instead of hardcoded white.
- Period split in `profile_to_library` now handles ISO date strings containing hyphens.
- Cloud startup sweeps all user DBs for pending migrations on deploy.
- Resume import strips CID glyph references via `resume_parser` extractors.
- Survey and interview tests updated for `hired_feedback` column and async analyze flow.
---
## [0.8.6] — 2026-04-12
### Added


@@ -6,7 +6,7 @@ WORKDIR /app
# System deps for companyScraper (beautifulsoup4, fake-useragent, lxml) and PDF gen
# libsqlcipher-dev: required to build pysqlcipher3 (SQLCipher AES-256 encryption for cloud mode)
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc libffi-dev curl libsqlcipher-dev \
gcc libffi-dev curl libsqlcipher-dev git \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .


@@ -26,6 +26,12 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
COPY circuitforge-core/ /circuitforge-core/
RUN pip install --no-cache-dir /circuitforge-core
# circuitforge-orch client — needed for LLMRouter cf_orch allocation.
# Note: if the directory doesn't exist, the COPY fails at build time; keep
# cf-orch as a sibling of peregrine in the build context.
COPY circuitforge-orch/ /circuitforge-orch/
RUN pip install --no-cache-dir /circuitforge-orch
COPY peregrine/requirements.txt .
# Skip the cfcore line — already installed above from the local copy
RUN grep -v 'circuitforge-core' requirements.txt | pip install --no-cache-dir -r /dev/stdin
@@ -39,6 +45,13 @@ COPY peregrine/scrapers/ /app/scrapers/
COPY peregrine/ .
# Remove per-user config files that are gitignored but may exist locally.
# Defense-in-depth: the parent .dockerignore should already exclude these,
# but an explicit rm guarantees they never end up in the cloud image.
RUN rm -f config/user.yaml config/plain_text_resume.yaml config/notion.yaml \
config/email.yaml config/tokens.yaml config/craigslist.yaml \
config/adzuna.yaml .env
EXPOSE 8501
CMD ["streamlit", "run", "app/app.py", \

HANDOFF-xanderland.md (new file, 153 lines)
@@ -0,0 +1,153 @@
# Peregrine → xanderland.tv Setup Handoff
**Written from:** dev machine (CircuitForge dev env)
**Target:** xanderland.tv (beta tester, rootful Podman + systemd)
**Date:** 2026-02-27
---
## What we're doing
Getting Peregrine running on the beta tester's server as a Podman container managed by systemd. He already runs SearXNG and other services in the same style — rootful Podman with `--net=host`, `--restart=unless-stopped`, registered as systemd units.
The script `podman-standalone.sh` in the repo root handles the container setup.
---
## Step 1 — Get the repo onto xanderland.tv
From navi (or directly if you have a route):
```bash
ssh xanderland.tv "sudo git clone <repo-url> /opt/peregrine"
```
Or if it's already there, just pull:
```bash
ssh xanderland.tv "cd /opt/peregrine && sudo git pull"
```
---
## Step 2 — Verify /opt/peregrine looks right
```bash
ssh xanderland.tv "ls /opt/peregrine"
```
Expect to see: `Dockerfile`, `compose.yml`, `manage.sh`, `podman-standalone.sh`, `config/`, `app/`, `scripts/`, etc.
---
## Step 3 — Config
```bash
ssh xanderland.tv
cd /opt/peregrine
sudo mkdir -p data
sudo cp config/llm.yaml.example config/llm.yaml
sudo cp config/notion.yaml.example config/notion.yaml # only if he wants Notion sync
```
Then edit `config/llm.yaml` and set `searxng_url` to his existing SearXNG instance
(default is `http://localhost:8888` — confirm his actual port).
He won't need Anthropic/OpenAI keys to start — the setup wizard lets him pick local Ollama
or whatever he has running.
---
## Step 4 — Fix DOCS_DIR in the script
The script defaults `DOCS_DIR=/Library/Documents/JobSearch` which is the original user's path.
Update it to wherever his job search documents actually live, or a placeholder empty dir:
```bash
sudo mkdir -p /opt/peregrine/docs # placeholder if he has no docs yet
```
Then edit the script:
```bash
sudo sed -i 's|DOCS_DIR=.*|DOCS_DIR=/opt/peregrine/docs|' /opt/peregrine/podman-standalone.sh
```
---
## Step 5 — Build the image
```bash
ssh xanderland.tv "cd /opt/peregrine && sudo podman build -t localhost/peregrine:latest ."
```
Takes a few minutes on first run (downloads python:3.11-slim, installs deps).
---
## Step 6 — Run the script
```bash
ssh xanderland.tv "sudo bash /opt/peregrine/podman-standalone.sh"
```
This starts a single container (`peregrine`) with `--net=host` and `--restart=unless-stopped`.
SearXNG is NOT included — his existing instance is used.
Verify it came up:
```bash
ssh xanderland.tv "sudo podman ps | grep peregrine"
ssh xanderland.tv "sudo podman logs peregrine"
```
Health check endpoint: `http://xanderland.tv:8501/_stcore/health`
---
## Step 7 — Register as a systemd service
```bash
ssh xanderland.tv
sudo podman generate systemd --new --name peregrine \
| sudo tee /etc/systemd/system/peregrine.service
sudo systemctl daemon-reload
sudo systemctl enable --now peregrine
```
Confirm:
```bash
sudo systemctl status peregrine
```
---
## Step 8 — First-run wizard
Open `http://xanderland.tv:8501` in a browser.
The setup wizard (page 0) will gate the app until `config/user.yaml` is created.
He'll fill in his profile — name, resume, LLM backend preferences. This writes
`config/user.yaml` and unlocks the rest of the UI.
---
## Troubleshooting
| Symptom | Check |
|---------|-------|
| Container exits immediately | `sudo podman logs peregrine` — usually a missing config file |
| Port 8501 already in use | `sudo ss -tlnp \| grep 8501` — something else on that port |
| SearXNG not reachable | Confirm `searxng_url` in `config/llm.yaml` and that JSON format is enabled in SearXNG settings |
| Wizard loops / won't save | `config/` volume mount permissions — `sudo chown -R 1000:1000 /opt/peregrine/config` |
---
## To update Peregrine later
```bash
cd /opt/peregrine
sudo git pull
sudo podman build -t localhost/peregrine:latest .
sudo podman restart peregrine
```
No need to touch the systemd unit — it launches fresh via `--new` in the generate step.

@@ -4,11 +4,29 @@
[![License: BSL 1.1](https://img.shields.io/badge/License-BSL_1.1-blue.svg)](./LICENSE-BSL)
[![CI](https://github.com/CircuitForge/peregrine/actions/workflows/ci.yml/badge.svg)](https://github.com/CircuitForge/peregrine/actions/workflows/ci.yml)
[![Docs](https://img.shields.io/badge/docs-docs.circuitforge.tech-orange)](https://docs.circuitforge.tech/peregrine/)
**Job search pipeline — by [Circuit Forge LLC](https://circuitforge.tech)**
> *"Tools for the jobs that the system made hard on purpose."*
**[Try the live demo](https://demo.circuitforge.tech/peregrine)** — no account required, nothing saved.
---
![Job review — swipe right to approve, left to skip](docs/screenshots/02-review-swipe.gif)
<table>
<tr>
<td><img src="docs/screenshots/01-dashboard.png" alt="Dashboard with pipeline stats"/></td>
<td><img src="docs/screenshots/04-interviews.png" alt="Interview kanban with recruiter emails attached"/></td>
</tr>
<tr>
<td><img src="docs/screenshots/03-apply.png" alt="Apply workspace with AI cover letter draft"/></td>
<td><img src="docs/screenshots/02-review.png" alt="Job review card with match score and ghost-post detection"/></td>
</tr>
</table>
---
Job search is a second job nobody hired you for.

@@ -14,23 +14,22 @@ sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.user_profile import UserProfile
_USER_YAML = Path(__file__).parent.parent / "config" / "user.yaml"
_profile = UserProfile(_USER_YAML) if UserProfile.exists(_USER_YAML) else None
_name = _profile.name if _profile else "Job Seeker"
from scripts.db import init_db, get_job_counts, purge_jobs, purge_email_data, \
purge_non_remote, archive_jobs, kill_stuck_tasks, cancel_task, \
get_task_for_job, get_active_tasks, insert_job, get_existing_urls
from scripts.task_runner import submit_task
from app.cloud_session import resolve_session, get_db_path
_CONFIG_DIR = Path(__file__).parent.parent / "config"
from app.cloud_session import resolve_session, get_db_path, get_config_dir
resolve_session("peregrine")
init_db(get_db_path())
_CONFIG_DIR = get_config_dir()
_USER_YAML = _CONFIG_DIR / "user.yaml"
_profile = UserProfile(_USER_YAML) if UserProfile.exists(_USER_YAML) else None
_name = _profile.name if _profile else "Job Seeker"
def _email_configured() -> bool:
_e = Path(__file__).parent.parent / "config" / "email.yaml"
_e = get_config_dir() / "email.yaml"
if not _e.exists():
return False
import yaml as _yaml
@@ -38,7 +37,7 @@ def _email_configured() -> bool:
return bool(_cfg.get("username") or _cfg.get("user") or _cfg.get("imap_host"))
def _notion_configured() -> bool:
_n = Path(__file__).parent.parent / "config" / "notion.yaml"
_n = get_config_dir() / "notion.yaml"
if not _n.exists():
return False
import yaml as _yaml
@@ -46,7 +45,7 @@ def _notion_configured() -> bool:
return bool(_cfg.get("token"))
def _keywords_configured() -> bool:
_k = Path(__file__).parent.parent / "config" / "resume_keywords.yaml"
_k = get_config_dir() / "resume_keywords.yaml"
if not _k.exists():
return False
import yaml as _yaml

@@ -203,8 +203,16 @@ def get_config_dir() -> Path:
isolated and never shared across tenants.
Local: repo-level config/ directory.
"""
if CLOUD_MODE and st.session_state.get("db_path"):
return Path(st.session_state["db_path"]).parent / "config"
if CLOUD_MODE:
db_path = st.session_state.get("db_path")
if db_path:
return Path(db_path).parent / "config"
# Session not resolved yet (resolve_session() should have called st.stop() already).
# Return an isolated empty temp dir rather than the repo config, which may contain
# another user's data baked into the image.
_safe = Path("/tmp/peregrine-cloud-noconfig")
_safe.mkdir(exist_ok=True)
return _safe
return Path(__file__).parent.parent / "config"

(Two binary screenshot files added: 298 KiB and 276 KiB — not shown.)
@@ -49,6 +49,7 @@ FEATURES: dict[str, str] = {
"company_research": "paid",
"interview_prep": "paid",
"survey_assistant": "paid",
"llm_reply_draft": "paid",
# Orchestration / infrastructure — stays gated
"email_classifier": "paid",
@@ -81,6 +82,7 @@ BYOK_UNLOCKABLE: frozenset[str] = frozenset({
"company_research",
"interview_prep",
"survey_assistant",
"llm_reply_draft",
})
# Demo mode flag — read from environment at module load time.
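
A hypothetical re-implementation of the gating check implied by the `FEATURES` / `BYOK_UNLOCKABLE` tables above — the real module's function names and rules may differ:

```python
# Assumed semantics: paid features unlock on the paid tier, or on any
# tier when the user brings their own API key (BYOK) and the feature
# appears in the BYOK-unlockable set.
FEATURES = {
    "company_research": "paid",
    "survey_assistant": "paid",
    "llm_reply_draft": "paid",
    "job_review": "free",
}
BYOK_UNLOCKABLE = frozenset({"company_research", "survey_assistant", "llm_reply_draft"})

def feature_enabled(name: str, tier: str, has_own_key: bool) -> bool:
    """Return True when the feature is available to this user."""
    required = FEATURES.get(name, "paid")
    if required == "free" or tier == "paid":
        return True
    return has_own_key and name in BYOK_UNLOCKABLE
```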

@@ -36,6 +36,7 @@ services:
- PYTHONUNBUFFERED=1
- PEREGRINE_CADDY_PROXY=1
- CF_ORCH_URL=http://host.docker.internal:7700
- CF_APP_NAME=peregrine
- DEMO_MODE=false
- FORGEJO_API_TOKEN=${FORGEJO_API_TOKEN:-}
depends_on:
@@ -51,6 +52,8 @@ services:
dockerfile: peregrine/Dockerfile.cfcore
command: >
bash -c "uvicorn dev_api:app --host 0.0.0.0 --port 8601"
ports:
- "8601:8601" # LAN-accessible — Caddy gates the public route; Kuma monitors this port directly
volumes:
- /devl/menagerie-data:/devl/menagerie-data
- ./config/llm.cloud.yaml:/app/config/llm.yaml:ro
@@ -65,6 +68,8 @@
- HEIMDALL_ADMIN_TOKEN=${HEIMDALL_ADMIN_TOKEN}
- PYTHONUNBUFFERED=1
- FORGEJO_API_TOKEN=${FORGEJO_API_TOKEN:-}
- CF_ORCH_URL=http://host.docker.internal:7700
- CF_APP_NAME=peregrine
extra_hosts:
- "host.docker.internal:host-gateway"
restart: unless-stopped
@@ -81,6 +86,9 @@ services:
- api
restart: unless-stopped
# cf-orch-agent: not needed in cloud — a host-native agent already runs on :7701
# and is registered with the coordinator. app/api reach it via CF_ORCH_URL.
searxng:
image: searxng/searxng:latest
volumes:

@@ -15,19 +15,21 @@
services:
app:
api:
build: .
ports:
- "8504:8501"
command: >
bash -c "uvicorn dev_api:app --host 0.0.0.0 --port 8601"
volumes:
- ./demo/config:/app/config
- ./demo/data:/app/data
# No /docs mount — demo has no personal documents
- ./demo:/app/demo:ro # seed.sql lives here; read-only
# /app/data is tmpfs — ephemeral, resets on every container start
tmpfs:
- /app/data
environment:
- DEMO_MODE=true
- STAGING_DB=/app/data/staging.db
- DEMO_SEED_FILE=/app/demo/seed.sql
- DOCS_DIR=/tmp/demo-docs
- STREAMLIT_SERVER_BASE_URL_PATH=peregrine
- PYTHONUNBUFFERED=1
- PYTHONLOGGING=WARNING
# No API keys — inference is blocked by DEMO_MODE before any key is needed
@@ -37,6 +39,7 @@ services:
extra_hosts:
- "host.docker.internal:host-gateway"
restart: unless-stopped
# No host port — nginx proxies /api/ → api:8601 internally
web:
build:
@@ -45,7 +48,9 @@ services:
args:
VITE_BASE_PATH: /peregrine/
ports:
- "8507:80"
- "8504:80" # demo.circuitforge.tech/peregrine* → host:8504
depends_on:
- api
restart: unless-stopped
searxng:

@@ -1,48 +1,7 @@
# compose.yml — Peregrine by Circuit Forge LLC
# Profiles: remote | cpu | single-gpu | dual-gpu-ollama
# Streamlit (app service) removed — Vue+FastAPI is the only frontend (#104)
services:
app:
build:
context: ..
dockerfile: peregrine/Dockerfile.cfcore
command: >
bash -c "streamlit run app/app.py
--server.port=8501
--server.headless=true
--server.fileWatcherType=none
2>&1 | tee /app/data/.streamlit.log"
ports:
- "${STREAMLIT_PORT:-8501}:8501"
volumes:
- ./config:/app/config
- ./data:/app/data
- ${DOCS_DIR:-~/Documents/JobSearch}:/docs
- /var/run/docker.sock:/var/run/docker.sock
- /usr/bin/docker:/usr/bin/docker:ro
environment:
- STAGING_DB=/app/data/staging.db
- DOCS_DIR=/docs
- ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY:-}
- OPENAI_COMPAT_URL=${OPENAI_COMPAT_URL:-}
- OPENAI_COMPAT_KEY=${OPENAI_COMPAT_KEY:-}
- PEREGRINE_GPU_COUNT=${PEREGRINE_GPU_COUNT:-0}
- PEREGRINE_GPU_NAMES=${PEREGRINE_GPU_NAMES:-}
- RECOMMENDED_PROFILE=${RECOMMENDED_PROFILE:-remote}
- STREAMLIT_SERVER_BASE_URL_PATH=${STREAMLIT_BASE_URL_PATH:-}
- FORGEJO_API_TOKEN=${FORGEJO_API_TOKEN:-}
- FORGEJO_REPO=${FORGEJO_REPO:-}
- FORGEJO_API_URL=${FORGEJO_API_URL:-}
- PYTHONUNBUFFERED=1
- PYTHONLOGGING=WARNING
- PEREGRINE_CADDY_PROXY=1
depends_on:
searxng:
condition: service_healthy
extra_hosts:
- "host.docker.internal:host-gateway"
restart: unless-stopped
api:
build:
context: ..
@@ -61,6 +20,8 @@ services:
- OPENAI_COMPAT_KEY=${OPENAI_COMPAT_KEY:-}
- PEREGRINE_GPU_COUNT=${PEREGRINE_GPU_COUNT:-0}
- PEREGRINE_GPU_NAMES=${PEREGRINE_GPU_NAMES:-}
- CF_ORCH_URL=${CF_ORCH_URL:-http://host.docker.internal:7700}
- CF_APP_NAME=peregrine
- PYTHONUNBUFFERED=1
extra_hosts:
- "host.docker.internal:host-gateway"
@@ -129,6 +90,31 @@
profiles: [single-gpu, dual-gpu-ollama, dual-gpu-vllm, dual-gpu-mixed]
restart: unless-stopped
cf-orch-agent:
build:
context: ..
dockerfile: peregrine/Dockerfile.cfcore
command: ["/bin/sh", "/app/docker/cf-orch-agent/start.sh"]
ports:
- "${CF_ORCH_AGENT_PORT:-7701}:7701"
environment:
- CF_ORCH_COORDINATOR_URL=${CF_ORCH_COORDINATOR_URL:-http://host.docker.internal:7700}
- CF_ORCH_NODE_ID=${CF_ORCH_NODE_ID:-peregrine}
- CF_ORCH_AGENT_PORT=${CF_ORCH_AGENT_PORT:-7701}
- CF_ORCH_ADVERTISE_HOST=${CF_ORCH_ADVERTISE_HOST:-}
- PYTHONUNBUFFERED=1
extra_hosts:
- "host.docker.internal:host-gateway"
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: all
capabilities: [gpu]
profiles: [single-gpu, dual-gpu-ollama, dual-gpu-vllm, dual-gpu-mixed]
restart: unless-stopped
finetune:
build:
context: .

@@ -0,0 +1,23 @@
# config/label_tool.yaml — Multi-account IMAP config for the email label tool
# Copy to config/label_tool.yaml and fill in your credentials.
# This file is gitignored.
accounts:
- name: "Gmail"
host: "imap.gmail.com"
port: 993
username: "you@gmail.com"
password: "your-app-password" # Use an App Password, not your login password
folder: "INBOX"
days_back: 90
- name: "Outlook"
host: "outlook.office365.com"
port: 993
username: "you@outlook.com"
password: "your-app-password"
folder: "INBOX"
days_back: 90
# Optional: limit emails fetched per account per run (0 = unlimited)
max_per_account: 500

@@ -45,6 +45,11 @@ backends:
model: __auto__
supports_images: false
type: openai_compat
cf_orch:
service: vllm
model_candidates:
- Qwen2.5-3B-Instruct
ttl_s: 300
vllm_research:
api_key: ''
base_url: http://host.docker.internal:8000/v1
@@ -52,6 +57,11 @@ backends:
model: __auto__
supports_images: false
type: openai_compat
cf_orch:
service: vllm
model_candidates:
- Qwen2.5-3B-Instruct
ttl_s: 300
fallback_order:
- vllm
- ollama

@@ -1,4 +1,11 @@
backends:
cf_text:
api_key: any
base_url: http://host.docker.internal:8006/v1
enabled: true
model: cf-text
supports_images: false
type: openai_compat
anthropic:
api_key_env: ANTHROPIC_API_KEY
enabled: false
@@ -34,7 +41,7 @@ backends:
supports_images: false
type: openai_compat
vision_service:
base_url: http://host.docker.internal:8002
base_url: http://vision:8002
enabled: true
supports_images: true
type: vision_service
@@ -58,6 +65,7 @@ backends:
supports_images: false
type: openai_compat
fallback_order:
- cf_text
- ollama
- claude_code
- vllm
@@ -67,6 +75,7 @@ research_fallback_order:
- claude_code
- vllm_research
- ollama_research
- cf_text
- github_copilot
- anthropic
vision_fallback_order:
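
The `fallback_order` lists above imply a simple try-in-order contract; a toy sketch of that loop (not the actual LLMRouter):

```python
from collections.abc import Callable

def complete_with_fallback(
    backends: dict[str, Callable[[str], str]],
    fallback_order: list[str],
    prompt: str,
) -> str:
    """Try each named backend in order; the first success wins."""
    errors: list[str] = []
    for name in fallback_order:
        backend = backends.get(name)
        if backend is None:  # listed but not configured/enabled
            continue
        try:
            return backend(prompt)
        except Exception as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all backends failed: " + "; ".join(errors))
```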

@@ -45,6 +45,39 @@ backends:
enabled: false
type: vision_service
supports_images: true
# ── cf-orch trunk services ─────────────────────────────────────────────────
# These backends allocate via cf-orch rather than connecting to a static URL.
# cf-orch starts the service on-demand and returns its URL; the router then
# calls it directly using the openai_compat path.
# Set CF_ORCH_URL (env) or url below; leave enabled: false if cf-orch is
# not deployed in your environment.
cf_text:
type: openai_compat
enabled: false
base_url: http://localhost:8008/v1 # fallback when cf-orch is not available
model: __auto__
api_key: any
supports_images: false
cf_orch:
service: cf-text
# model_candidates: leave empty to use the service's default_model,
# or specify an alias from the node's catalog (e.g. "qwen2.5-3b").
model_candidates: []
ttl_s: 3600
cf_voice:
type: openai_compat
enabled: false
base_url: http://localhost:8009/v1 # fallback when cf-orch is not available
model: __auto__
api_key: any
supports_images: false
cf_orch:
service: cf-voice
model_candidates: []
ttl_s: 3600
fallback_order:
- ollama
- claude_code

config/mission_domains.yaml (new file, 258 lines)
@@ -0,0 +1,258 @@
# Mission domain signal configuration for cover letter generation.
#
# When a job description or company name matches signals in a domain,
# the cover letter prompt injects a Para 3 hint to reflect genuine personal
# alignment. Dict order = match priority (first match wins).
#
# Users can add custom domains under `mission_preferences` in user.yaml.
# Any key in mission_preferences that is NOT listed here is treated as a
# user-defined domain: no signal detection, custom note only (skipped if
# the job description doesn't contain the key as a literal word).
#
# Schema per domain:
# signals: list[str] — lowercase keywords to scan for in "company + JD"
# default_note: str — hint injected when user has no custom note for domain
domains:
music:
signals:
- music
- spotify
- tidal
- soundcloud
- bandcamp
- apple music
- distrokid
- cd baby
- landr
- beatport
- reverb
- vinyl
- streaming
- artist
- label
- live nation
- ticketmaster
- aeg
- songkick
- concert
- venue
- festival
- audio
- podcast
- studio
- record
- musician
- playlist
default_note: >
This company is in the music industry — an industry the candidate finds genuinely
compelling. Para 3 should warmly and specifically reflect this authentic alignment,
not as a generic fan statement, but as an honest statement of where they'd love to
apply their skills.
animal_welfare:
signals:
- animal
- shelter
- rescue
- humane society
- spca
- aspca
- veterinary
- "vet "
- wildlife
- "pet "
- adoption
- foster
- dog
- cat
- feline
- canine
- sanctuary
- zoo
default_note: >
This organization works in animal welfare/rescue — a mission the candidate finds
genuinely meaningful. Para 3 should reflect this authentic connection warmly and
specifically, tying their skills to this mission.
education:
signals:
- education
- school
- learning
- student
- edtech
- classroom
- curriculum
- tutoring
- academic
- university
- kids
- children
- youth
- literacy
- khan academy
- duolingo
- chegg
- coursera
- instructure
- canvas lms
- clever
- district
- teacher
- k-12
- k12
- grade
- pedagogy
default_note: >
This company works in education or EdTech — a domain that resonates with the
candidate's values. Para 3 should reflect this authentic connection specifically
and warmly.
social_impact:
signals:
- nonprofit
- non-profit
- "501(c)"
- social impact
- mission-driven
- public benefit
- community
- underserved
- equity
- justice
- humanitarian
- advocacy
- charity
- foundation
- ngo
- social good
- civic
- public health
- mental health
- food security
- housing
- homelessness
- poverty
- workforce development
default_note: >
This organization is mission-driven / social impact focused — exactly the kind of
cause the candidate cares deeply about. Para 3 should warmly reflect their genuine
desire to apply their skills to work that makes a real difference in people's lives.
# Health listed last — genuine but lower-priority connection.
health:
signals:
- patient
- patients
- healthcare
- health tech
- healthtech
- pharma
- pharmaceutical
- clinical
- medical
- hospital
- clinic
- therapy
- therapist
- rare disease
- life sciences
- life science
- treatment
- prescription
- biotech
- biopharma
- medtech
- behavioral health
- population health
- care management
- care coordination
- oncology
- specialty pharmacy
- provider network
- payer
- health plan
- benefits administration
- ehr
- emr
- fhir
- hipaa
default_note: >
This company works in healthcare, life sciences, or patient care.
Do NOT write about the candidate's passion for pharmaceuticals or healthcare as an
industry. Instead, Para 3 should reflect genuine care for the PEOPLE these companies
exist to serve: those navigating complex, often invisible, or unusual health journeys;
patients facing rare or poorly understood conditions; individuals whose situations don't
fit a clean category. The connection is to the humans behind the data, not the industry.
If the user has provided a personal note, use that to anchor Para 3 specifically.
# Extended domains — added 2026-04-12
privacy:
signals:
- privacy
- data rights
- surveillance
- gdpr
- ccpa
- anonymity
- end-to-end encryption
- open source
- decentralized
- self-hosted
- zero knowledge
- data sovereignty
- digital rights
- eff
- electronic frontier
default_note: >
This company operates in the privacy, data rights, or digital rights space —
a domain the candidate genuinely cares about. Para 3 should reflect their
authentic belief in user autonomy and data sovereignty, not as abstract principle
but as something that shapes how they approach their work.
accessibility:
signals:
- accessibility
- assistive technology
- a11y
- wcag
- screen reader
- adaptive technology
- disability
- neurodivergent
- neurodiversity
- adhd
- autism
- inclusive design
- universal design
- accommodations
- ada compliance
default_note: >
This company works in accessibility or assistive technology — a mission the
candidate feels genuine, personal alignment with. Para 3 should reflect authentic
investment in building tools and systems that work for everyone, especially those
whose needs are most often overlooked in mainstream product development.
open_source:
signals:
- open source
- open-source
- linux foundation
- apache foundation
- free software
- gnu
- contributor
- maintainer
- upstream
- community-driven
- innersource
- copyleft
- mozilla
- wikimedia
default_note: >
This organization is rooted in open source culture — a community the candidate
actively participates in and believes in. Para 3 should reflect genuine investment
in the collaborative, transparent, and community-driven approach to building
software that lasts.

@@ -1,9 +1,11 @@
candidate_accessibility_focus: false
candidate_lgbtq_focus: false
candidate_voice: Clear, direct, and human. Focuses on impact over jargon.
career_summary: 'Experienced software engineer with a background in full-stack development,
cloud infrastructure, and data pipelines. Passionate about building tools that help
people navigate complex systems.
candidate_voice: Clear, direct, and human. Focuses on impact over jargon. Avoids
buzzwords and lets the work speak.
career_summary: 'Senior UX Designer with 6 years of experience designing for music,
education, and media products. Strong background in cross-platform design systems,
user research, and 0-to-1 feature development. Passionate about making complex
digital experiences feel effortless.
'
dev_tier_override: null
@ -16,9 +18,9 @@ inference_profile: remote
linkedin: ''
mission_preferences:
animal_welfare: ''
education: ''
education: Education technology is where design decisions have long-term impact on how people learn.
health: ''
music: ''
music: Love designing for music and audio discovery — it combines craft with genuine emotional resonance.
social_impact: Want my work to reach people who need it most.
name: Demo User
nda_companies: []

demo/seed.sql (new file, 259 lines)
@@ -0,0 +1,259 @@
-- jobs
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('UX Designer', 'Spotify', 'https://www.linkedin.com/jobs/view/1000001', 'linkedin', 'Remote', '1', '$110k–$140k', '94.0', 'approved', '2026-04-14', '2026-04-12', 'Dear Hiring Manager,
I''m excited to apply for the UX Designer role at Spotify. With five years of
experience designing for music discovery and cross-platform experiences, I''ve
consistently shipped features that make complex audio content feel effortless to
navigate. At my last role I led a redesign of the playlist creation flow that
reduced drop-off by 31%.
Spotify''s commitment to artist and listener discovery and its recent push into
audiobooks and podcast tooling aligns directly with the kind of cross-format
design challenges I''m most energised by.
I''d love to bring that focus to your product design team.
Warm regards,
[Your name]
', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Product Designer', 'Duolingo', 'https://www.linkedin.com/jobs/view/1000002', 'linkedin', 'Pittsburgh, PA', '0', '$95k–$120k', '87.0', 'approved', '2026-04-13', '2026-04-10', 'Draft in progress — cover letter generating…', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('UX Lead', 'NPR', 'https://www.indeed.com/viewjob?jk=1000003', 'indeed', 'Washington, DC', '1', '$120k–$150k', '81.0', 'approved', '2026-04-12', '2026-04-08', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Senior UX Designer', 'Mozilla', 'https://www.linkedin.com/jobs/view/1000004', 'linkedin', 'Remote', '1', '$105k–$130k', '81.0', 'pending', '2026-04-13', '2026-03-12', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Interaction Designer', 'Figma', 'https://www.indeed.com/viewjob?jk=1000005', 'indeed', 'San Francisco, CA', '1', '$115k–$145k', '78.0', 'pending', '2026-04-11', '2026-04-09', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Product Designer II', 'Notion', 'https://www.linkedin.com/jobs/view/1000006', 'linkedin', 'Remote', '1', '$100k–$130k', '76.0', 'pending', '2026-04-10', '2026-04-07', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('UX Designer', 'Stripe', 'https://www.linkedin.com/jobs/view/1000007', 'linkedin', 'Remote', '1', '$120k–$150k', '74.0', 'pending', '2026-04-09', '2026-04-06', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('UI/UX Designer', 'Canva', 'https://www.indeed.com/viewjob?jk=1000008', 'indeed', 'Remote', '1', '$90k–$115k', '72.0', 'pending', '2026-04-08', '2026-04-05', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Senior Product Designer', 'Asana', 'https://www.linkedin.com/jobs/view/1000009', 'linkedin', 'San Francisco, CA', '1', '$125k–$155k', '69.0', 'pending', '2026-04-07', '2026-04-04', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('UX Researcher', 'Intercom', 'https://www.indeed.com/viewjob?jk=1000010', 'indeed', 'Remote', '1', '$95k–$120k', '67.0', 'pending', '2026-04-06', '2026-04-03', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Product Designer', 'Linear', 'https://www.linkedin.com/jobs/view/1000011', 'linkedin', 'Remote', '1', '$110k–$135k', '65.0', 'pending', '2026-04-05', '2026-04-02', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('UX Designer', 'Loom', 'https://www.indeed.com/viewjob?jk=1000012', 'indeed', 'Remote', '1', '$90k–$110k', '62.0', 'pending', '2026-04-04', '2026-04-01', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Senior Product Designer', 'Asana', 'https://www.asana.com/jobs/1000013', 'linkedin', 'San Francisco, CA', '1', '$125k–$155k', '91.0', 'phone_screen', '2026-04-01', '2026-03-30', NULL, '2026-04-08', '2026-04-15', NULL, NULL, NULL, '2026-04-15T14:00:00', NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Product Designer', 'Notion', 'https://www.notion.so/jobs/1000014', 'indeed', 'Remote', '1', '$100k–$130k', '88.0', 'interviewing', '2026-03-25', '2026-03-23', NULL, '2026-04-01', '2026-04-05', '2026-04-12', NULL, NULL, '2026-04-22T10:00:00', NULL, NULL);
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('Design Systems Designer', 'Figma', 'https://www.figma.com/jobs/1000015', 'linkedin', 'San Francisco, CA', '1', '$130k–$160k', '96.0', 'hired', '2026-03-01', '2026-02-27', NULL, '2026-03-08', '2026-03-14', '2026-03-21', '2026-04-01', '2026-04-08', NULL, NULL, '{"factors":["clear_scope","great_manager","mission_aligned"],"notes":"Excited about design systems work. Salary met expectations."}');
INSERT INTO jobs (title, company, url, source, location, is_remote, salary, match_score, status, date_found, date_posted, cover_letter, applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, interview_date, rejection_stage, hired_feedback) VALUES ('UX Designer', 'Slack', 'https://slack.com/jobs/1000016', 'indeed', 'Remote', '1', '$115k–$140k', '79.0', 'applied', '2026-03-18', '2026-03-16', NULL, '2026-03-28', NULL, NULL, NULL, NULL, NULL, NULL, NULL);
-- job_contacts
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (1, 'inbound', 'Excited to connect — UX Designer role at Spotify', 'jamie.chen@spotify.com', 'you@example.com', '2026-04-12', 'positive_response');
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (1, 'outbound', 'Re: Excited to connect — UX Designer role at Spotify', 'you@example.com', 'jamie.chen@spotify.com', '2026-04-13', NULL);
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (13, 'inbound', 'Interview Confirmation — Senior Product Designer', 'recruiting@asana.com', 'you@example.com', '2026-04-13', 'interview_scheduled');
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (14, 'inbound', 'Your panel interview is confirmed for Apr 22', 'recruiting@notion.so', 'you@example.com', '2026-04-12', 'interview_scheduled');
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (14, 'inbound', 'Pre-interview prep resources', 'marcus.webb@notion.so', 'you@example.com', '2026-04-13', 'positive_response');
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (15, 'inbound', 'Figma Design Systems — Offer Letter', 'offers@figma.com', 'you@example.com', '2026-04-01', 'offer_received');
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (15, 'outbound', 'Re: Figma Design Systems — Offer Letter (acceptance)', 'you@example.com', 'offers@figma.com', '2026-04-05', NULL);
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (15, 'inbound', 'Welcome to Figma! Onboarding next steps', 'onboarding@figma.com', 'you@example.com', '2026-04-08', NULL);
INSERT INTO job_contacts (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) VALUES (16, 'inbound', 'Thanks for applying to Slack', 'noreply@slack.com', 'you@example.com', '2026-03-28', NULL);
-- references_
INSERT INTO references_ (name, email, role, company, relationship, notes, tags, prep_email) VALUES ('Dr. Priya Nair', 'priya.nair@example.com', 'Director of Design', 'Acme Corp', 'former_manager', 'Managed me for 3 years on the consumer app redesign. Enthusiastic reference.', '["manager","design"]', 'Hi Priya,
I hope you''re doing well! I''m currently interviewing for a few senior UX roles and would be so grateful if you''d be willing to serve as a reference.
Thank you!
[Your name]');
INSERT INTO references_ (name, email, role, company, relationship, notes, tags, prep_email) VALUES ('Sam Torres', 'sam.torres@example.com', 'Senior Product Designer', 'Acme Corp', 'former_colleague', 'Worked together on design systems. Great at speaking to collaborative process.', '["colleague","design_systems"]', NULL);
INSERT INTO references_ (name, email, role, company, relationship, notes, tags, prep_email) VALUES ('Jordan Kim', 'jordan.kim@example.com', 'VP of Product', 'Streamline Inc', 'former_manager', 'Led the product team I was embedded in. Can speak to business impact of design work.', '["manager","product"]', NULL);
-- resumes
INSERT INTO resumes (name, source, job_id, text, struct_json, word_count, is_default) VALUES (
'Base Resume',
'uploaded',
NULL,
'ALEX RIVERA
UX Designer · Product Design · Design Systems
alex.rivera@example.com · linkedin.com/in/alexrivera · Portfolio: alexrivera.design
SUMMARY
Senior UX Designer with 6 years of experience designing for music, education, and media platforms. Led 0-to-1 product work and redesigned high-traffic flows used by tens of millions of users. Deep background in user research, interaction design, and cross-platform design systems. Strong collaborator with engineering and product; comfortable in ambiguity, methodical about process.
EXPERIENCE
Senior UX Designer, StreamNote (2023-present)
- Led redesign of the core listening queue, reducing abandonment by 31% across mobile and web
- Built and maintained a component library (Figma tokens + React) used by 8 product squads
- Ran 60+ moderated user research sessions; findings shaped 3 major product bets
- Partnered with ML team to design recommendation transparency features for power users
UX Designer, EduPath (2021-2023)
- Designed the onboarding and early-habit loop for a K-12 learning app (2.4M DAU)
- Shipped streak redesign that improved D7 retention by 18%
- Drove accessibility audit and remediation (WCAG 2.1 AA); filed and closed 47 issues
- Mentored 2 junior designers; led weekly design critique
Product Designer, Signal Media (2019-2021)
- Designed editorial tools and reader-facing article experiences for a digital news publisher
- Prototyped and shipped a "read later" feature that became the #2 most-used feature within 90 days
- Collaborated with editorial and engineering to establish a shared component system (reduced new-story design time by 60%)
SKILLS
Figma · Prototyping · User Research · Usability Testing · Design Systems · Interaction Design
Accessibility (WCAG 2.1) · Cross-Platform (iOS/Android/Web) · React (collaboration-level) · SQL (basic)
Workshop Facilitation · Stakeholder Communication
EDUCATION
B.F.A. Graphic Design, Minor in Human-Computer Interaction, State University of the Arts, 2019
SELECTED PROJECTS
Playlist Flow Redesign (StreamNote): reduced creation drop-off 31%, won internal design award
D7 Retention Streak (EduPath): +18% D7 retention; featured in company all-hands
Accessibility Audit (EduPath): full WCAG 2.1 AA remediation across iOS, Android, web',
'{"contact":{"name":"Alex Rivera","email":"alex.rivera@example.com","linkedin":"linkedin.com/in/alexrivera","portfolio":"alexrivera.design"},"summary":"Senior UX Designer with 6 years of experience designing for music, education, and media platforms.","experience":[{"company":"StreamNote","title":"Senior UX Designer","dates":"2023present","bullets":["Led redesign of core listening queue, reducing abandonment by 31%","Built component library used by 8 product squads","Ran 60+ moderated user research sessions"]},{"company":"EduPath","title":"UX Designer","dates":"20212023","bullets":["Designed onboarding and early-habit loop for K12 app (2.4M DAU)","Shipped streak redesign that improved D7 retention by 18%","Drove accessibility audit (WCAG 2.1 AA)"]},{"company":"Signal Media","title":"Product Designer","dates":"20192021","bullets":["Designed editorial tools and reader-facing article experiences","Prototyped and shipped read-later feature (top 2 used within 90 days)"]}],"education":[{"institution":"State University of the Arts","degree":"B.F.A. Graphic Design, Minor in HCI","year":"2019"}],"skills":["Figma","Prototyping","User Research","Usability Testing","Design Systems","Interaction Design","Accessibility (WCAG 2.1)","Cross-Platform","React","SQL","Workshop Facilitation"]}',
320,
1
);
-- ATS resume optimizer data for approved jobs (Spotify=1, Duolingo=2, NPR=3)
-- Spotify: gap report highlights audio/podcast tooling keywords; optimized resume tailored
UPDATE jobs SET
ats_gap_report = '[{"term":"audio UX","section":"experience","priority":3,"rationale":"Spotify''s JD emphasizes audio product experience; resume mentions music broadly but not audio-specific UX patterns"},{"term":"podcast design","section":"experience","priority":2,"rationale":"Spotify is investing heavily in podcast tooling; related experience at Signal Media could be framed around audio content"},{"term":"cross-platform mobile","section":"skills","priority":2,"rationale":"JD specifies iOS and Android explicitly; resume lists cross-platform but not mobile-first framing"},{"term":"A/B testing","section":"experience","priority":1,"rationale":"JD mentions data-driven iteration; resume does not reference experimentation framework"}]',
optimized_resume = 'ALEX RIVERA
UX Designer · Audio Product · Cross-Platform Design
alex.rivera@example.com · linkedin.com/in/alexrivera · Portfolio: alexrivera.design
SUMMARY
Senior UX Designer specializing in audio and media product design. 6 years of experience shipping cross-platform features used by millions, with a focus on music discovery, content navigation, and habit-forming interactions. Comfortable moving from user research to pixel-perfect specs to cross-functional alignment.
EXPERIENCE
Senior UX Designer, StreamNote (2023-present)
- Led redesign of the core listening queue (audio UX); reduced abandonment 31% across iOS, Android, and web
- Designed podcast chapter navigation prototype; validated with 8 user sessions, handed off to eng in Q3
- Built Figma component library (tokens + variants) used by 8 product squads; cut design-to-dev handoff time by 40%
- Drove A/B test framework with data team: 12 experiments shipped; 7 reached statistical significance
UX Designer, EduPath (2021-2023)
- Designed cross-platform onboarding (iOS/Android/web) for K-12 learning app, 2.4M DAU
- Shipped streak redesign with 3 A/B variants; winning variant improved D7 retention by 18%
- Full WCAG 2.1 AA remediation across all platforms; filed and closed 47 issues
Product Designer, Signal Media (2019-2021)
- Designed audio and editorial experiences for a digital media publisher
- Prototyped and shipped "listen later" feature for podcast content; #2 most-used feature within 90 days
- Established shared design system that reduced new-story design time by 60%
SKILLS
Figma · Audio UX · Podcast Design · Cross-Platform (iOS/Android/Web) · Design Systems
A/B Testing · User Research · Usability Testing · Accessibility (WCAG 2.1) · Interaction Design
EDUCATION
B.F.A. Graphic Design, Minor in HCI, State University of the Arts, 2019'
WHERE id = 1;
-- Duolingo: gap report highlights gamification, retention, and learning science keywords
UPDATE jobs SET
ats_gap_report = '[{"term":"gamification","section":"experience","priority":3,"rationale":"Duolingo''s entire product is built on gamification mechanics; streak work at EduPath is highly relevant but not explicitly framed"},{"term":"streak mechanics","section":"experience","priority":3,"rationale":"Duolingo invented the streak; EduPath streak redesign is directly applicable and should be foregrounded"},{"term":"learning science","section":"experience","priority":2,"rationale":"JD references behavioral psychology; resume does not mention research-backed habit design"},{"term":"localization","section":"skills","priority":1,"rationale":"Duolingo ships to 40+ languages; internationalization experience or awareness would strengthen application"}]',
optimized_resume = 'ALEX RIVERA
UX Designer · Gamification · Learning Products
alex.rivera@example.com · linkedin.com/in/alexrivera · Portfolio: alexrivera.design
SUMMARY
UX Designer with 6 years of experience in education and media products. Designed habit-forming experiences grounded in behavioral research: streak systems, onboarding flows, and retention mechanics for apps with millions of daily active users. Passionate about learning products that feel like play.
EXPERIENCE
UX Designer, EduPath (2021-2023)
- Redesigned streak and gamification mechanics for K-12 learning app (2.4M DAU); D7 retention +18%
- Applied behavioral science principles (variable reward, loss aversion, social proof) to onboarding flow redesign
- Led 30+ user research sessions with students, parents, and teachers; findings shaped product roadmap for 2 quarters
- Drove WCAG 2.1 AA accessibility remediation; 47 issues filed and closed across iOS, Android, web
Senior UX Designer, StreamNote (2023-present)
- Designed habit-reinforcing listening queue with personalized recommendations surface; abandonment -31%
- Built and scaled Figma design system used by 8 squads; reduced design-to-dev cycle by 40%
- Ran A/B tests with data team; 12 experiments across retention and discovery features
Product Designer, Signal Media (2019-2021)
- Designed reader engagement and content-return mechanics for digital news platform
- "Read later" feature reached #2 usage within 90 days of launch
SKILLS
Figma · Gamification Design · Habit & Retention Mechanics · User Research · Behavioral UX
Learning Products · Accessibility (WCAG 2.1) · Cross-Platform (iOS/Android/Web) · Design Systems
EDUCATION
B.F.A. Graphic Design, Minor in HCI, State University of the Arts, 2019'
WHERE id = 2;
-- NPR: gap report highlights public media, accessibility, and editorial tool experience
UPDATE jobs SET
ats_gap_report = '[{"term":"public media","section":"experience","priority":3,"rationale":"NPR is a public media org; framing experience around mission-driven media rather than commercial products strengthens fit"},{"term":"editorial tools","section":"experience","priority":3,"rationale":"NPR''s UX Lead role includes internal tools for journalists; Signal Media editorial tools work is directly applicable"},{"term":"accessibility standards","section":"experience","priority":2,"rationale":"NPR serves a broad public audience including listeners with disabilities; WCAG work at EduPath should be prominent"},{"term":"content discovery","section":"experience","priority":2,"rationale":"NPR''s JD mentions listener discovery; StreamNote queue redesign is relevant framing"}]',
optimized_resume = 'ALEX RIVERA
UX Lead · Public Media · Accessible Design
alex.rivera@example.com · linkedin.com/in/alexrivera · Portfolio: alexrivera.design
SUMMARY
Senior UX Designer with 6 years of experience in media, education, and content platforms. Led design for editorial tools, content discovery surfaces, and accessible experiences for mission-driven organizations. Believes design has an obligation to reach all users, especially the ones the industry tends to forget.
EXPERIENCE
Senior UX Designer, StreamNote (2023-present)
- Led content discovery redesign (listening queue, personalized surfaces); abandonment -31%
- Designed and shipped podcast chapter navigation as a 0-to-1 feature
- Built scalable Figma component library used by 8 cross-functional squads
- Ran 60+ moderated research sessions; regularly presented findings to CPO and VP Product
Product Designer, Signal Media (2019-2021)
- Designed editorial authoring tools used daily by 120+ journalists; reduced story publish time by 35%
- Shipped "read later" feature for a digital news publisher; #2 most-used feature within 90 days
- Established shared design system that cut new-template design time by 60%
UX Designer, EduPath (2021-2023)
- Led full WCAG 2.1 AA accessibility audit and remediation across iOS, Android, and web
- Designed onboarding and retention flows for a public K-12 learning app (2.4M DAU)
- D7 retention +18% following streak redesign; results shared at company all-hands
SKILLS
Figma · Editorial & Publishing Tools · Content Discovery UX · Accessibility (WCAG 2.1 AA)
Public-Facing Product Design · User Research · Cross-Platform · Design Systems
EDUCATION
B.F.A. Graphic Design, Minor in HCI, State University of the Arts, 2019'
WHERE id = 3;
-- company_research for interview-stage jobs
-- Job 13: Asana (phone_screen, interview 2026-04-15)
INSERT INTO company_research (job_id, generated_at, company_brief, ceo_brief, talking_points, tech_brief, funding_brief, competitors_brief, red_flags, accessibility_brief, scrape_used, raw_output) VALUES (
13,
'2026-04-14T09:00:00',
'Asana is a work management platform founded in 2008 by Dustin Moskovitz and Justin Rosenstein (both ex-Facebook). Headquartered in San Francisco, Asana went public on the NYSE in September 2020 via a direct listing. The product focuses on project and task management for teams, with a strong emphasis on clarity of ownership and cross-functional coordination. It serves over 130,000 paying customers across 190+ countries. Asana''s design philosophy centers on removing ambiguity from work — a principle that directly shapes product design decisions. The company has made significant investments in AI-assisted task management through its "AI Studio" features, launched in 2024.',
'Dustin Moskovitz, co-founder and CEO, is known for a thoughtful management style and genuine interest in org design and well-being at work. He is a co-founder of the effective altruism movement and the Open Philanthropy Project. Expect questions and conversation that reflect a values-driven culture — mission alignment matters here. Anne Raimondi is COO and a well-regarded operations leader.',
'["Asana''s design team works closely with the Core Product and Platform squads — ask how design embeds with engineering","Recent focus on AI features (AI Studio, smart task assignment) — familiarity with AI UX patterns will land well","Asana''s brand voice is unusually distinct — understand their design language before the call","Ask about the cross-functional collaboration model: how does design influence roadmap priority?","The role is hybrid SF — clarify expectations around in-office days upfront"]',
'Asana is built primarily on React (frontend), Python and PHP (backend), and uses a proprietary data model (the Asana object graph) that drives their real-time sync. Their design team uses Figma heavily. They have invested in their own design system ("Alchemy") which underpins the entire product.',
'Asana went public via direct listing (NYSE: ASAN) in September 2020. Revenue in FY2025 was approximately $726M, with consistent double-digit YoY growth. The company has been investing in profitability — operating losses have narrowed significantly. No recent acquisition activity.',
'Primary competitors: Monday.com, ClickUp, Notion (project management use cases), Jira (for engineering teams), and Microsoft Project. Asana differentiates on simplicity, clear ownership model, and enterprise reliability over raw feature count.',
NULL,
'Asana has published an accessibility statement and maintains WCAG 2.1 AA compliance across their core product. Their employee ERGs include groups for disability and neurodiversity. The company scores above average on Glassdoor for work-life balance. Their San Francisco HQ has dedicated quiet spaces and standing desks.',
0,
'Asana company research generated for phone screen 2026-04-15. Sources: public filings, company blog, Glassdoor.'
);
-- Job 14: Notion (interviewing, panel 2026-04-22)
INSERT INTO company_research (job_id, generated_at, company_brief, ceo_brief, talking_points, tech_brief, funding_brief, competitors_brief, red_flags, accessibility_brief, scrape_used, raw_output) VALUES (
14,
'2026-04-11T14:30:00',
'Notion is an all-in-one workspace tool combining notes, docs, wikis, and project management. Founded in 2013, relaunched in 2018 after a near-failure. Headquartered in San Francisco, with a significant remote-first culture. Notion reached a $10B valuation in its 2021 funding round and has since focused on consolidation and profitability. The product is unusually design-forward — Notion''s UI is considered a benchmark in the industry for flexibility without overwhelming complexity. Their 2023-2024 push into AI (Notion AI) added LLM-powered writing and summarization directly into the workspace. The product design team is small-but-influential and works closely with the founders.',
'Ivan Zhao is co-founder and CEO, known for being deeply product-focused and aesthetically driven. He has described Notion as an attempt to make software feel like a craftsman''s tool. Akshay Kothari is co-founder and COO. The culture reflects the founders'' values: deliberate, high-craft, opinionated. Expect the panel to include designers or PMs who will probe your design sensibility and taste.',
'["Notion''s design team is small and influential — expect ownership of end-to-end features, not component-level work","AI features (Notion AI) are a major current initiative — come with opinions on how AI should integrate into a workspace without disrupting user flow","Notion''s design language is a competitive moat — study it carefully before the panel","Panel likely includes a PM, a senior designer, and possibly a founder — tailor your portfolio walk to each audience","Ask about the product design team structure: how many designers, how do they embed with eng, what does the IC path look like?"]',
'Notion is built on a React frontend with a custom block-based data model. Their backend uses Postgres and Kafka for real-time sync. Notion AI uses third-party LLM providers (Anthropic, OpenAI) via API. The design team uses Figma and maintains a well-documented internal design system.',
'Notion raised $275M at a $10B valuation in October 2021 (led by Sequoia and Coatue). The company has not announced further funding rounds; public commentary suggests a path to profitability. ARR estimated at $300-500M as of 2024.',
'Competitors include Confluence (Atlassian), Coda, Linear (for engineering-focused workflows), Obsidian (local-first notes), and increasingly Asana and ClickUp for project management use cases. Notion''s differentiator is its flexible block model and strong brand identity with knowledge workers.',
'Some employee reviews mention that the small team size means high ownership but also that projects can pivot quickly. Design headcount has been stable post-2022 layoffs. Worth asking about team stability in the panel.',
'Notion has made public commitments to WCAG 2.1 AA compliance but has received community feedback that keyboard navigation in the block editor has gaps. Their 2024 accessibility roadmap addressed the most commonly reported issues. The company has a neurodiversity ERG and remote-first culture (async-friendly).',
0,
'Notion company research generated for panel interview 2026-04-22. Sources: public filings, company blog, community accessibility reports.'
);
-- Job 15: Figma (hired — research used during interview cycle)
INSERT INTO company_research (job_id, generated_at, company_brief, ceo_brief, talking_points, tech_brief, funding_brief, competitors_brief, red_flags, accessibility_brief, scrape_used, raw_output) VALUES (
15,
'2026-03-13T11:00:00',
'Figma is the leading browser-based design tool, founded in 2012 by Dylan Field and Evan Wallace. Headquartered in San Francisco. Figma disrupted the design tool market with its collaborative, multiplayer approach — Google Docs for design. The product includes Figma Design, FigJam (whiteboarding), and Dev Mode (engineering handoff). Adobe''s attempted $20B acquisition was blocked by UK and EU regulators in 2023; Figma received a $1B termination fee. Post-Adobe, Figma has accelerated independent investment in AI features and a new "Figma Make" prototyping tool. The Design Systems team (the role you accepted) is responsible for the core component and token infrastructure used across all Figma products.',
'Dylan Field, co-founder and CEO, is known for being deeply technical and product-obsessed. He joined the board of OpenAI. Post-Adobe-deal fallout, Field has been publicly focused on Figma''s independent growth trajectory. Expect a culture of high standards and genuine product craft. Noah Levin leads the design org.',
'["You are joining the Design Systems team — the infrastructure team for Figma''s own product design","Your work will directly impact every other designer at Figma — high visibility, high leverage","Figma uses its own product (dogfooding) — you will be designing in Figma for Figma","Key initiative: AI-assisted component generation in Figma Make — design systems input is critical","You are the first external hire in this role since the Adobe deal fell through — ask about team direction post-acquisition"]',
'Figma''s frontend is React with a custom WebGL rendering engine (written in Rust + WASM) for the canvas. This is some of the most sophisticated browser-based graphics code in production. Dev Mode connects to GitHub, Storybook, and VS Code. The design system team works in Figma and outputs tokens that connect to code via Figma''s token pipeline.',
'Figma received a $1B termination fee from Adobe when the acquisition was blocked in late 2023. The company raised $200M at a $10B valuation in 2021. With the termination fee and strong ARR, Figma is well-capitalized for independent growth. No IPO timeline announced publicly.',
'Primary competitor is Sketch (declining market share), with Adobe XD effectively sunset. Framer is a growing competitor for prototyping. Penpot (open-source) is gaining traction in privacy-conscious and European markets. Figma''s multiplayer and browser-based approach remains a strong moat.',
NULL,
'Figma has an active accessibility team and public blog posts on designing accessible components. Their design system (the one you will be contributing to) includes built-in accessibility annotations and ARIA guidance. The company has disability and neurodiversity ERGs. Remote-friendly with SF HQ.',
0,
'Figma company research generated for interviewing stage 2026-03-13. Sources: company blog, public filings, design community.'
);
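The `ats_gap_report` column seeded above holds a JSON array of keyword gaps, each with a `term`, `section`, `priority` (1-3), and `rationale`. A minimal sketch of how a client might parse and rank one of these blobs — the `parse_gap_report` helper is hypothetical, not part of Peregrine's actual code, but the schema matches the seeded data:

```python
import json

# Two entries in the same shape as the seeded gap reports (abbreviated).
SAMPLE = '''[
  {"term": "audio UX", "section": "experience", "priority": 3,
   "rationale": "JD emphasizes audio product experience"},
  {"term": "A/B testing", "section": "experience", "priority": 1,
   "rationale": "JD mentions data-driven iteration"}
]'''

def parse_gap_report(raw: str) -> list[dict]:
    """Parse an ats_gap_report blob, highest-priority gaps first."""
    gaps = json.loads(raw)
    return sorted(gaps, key=lambda g: g["priority"], reverse=True)

for gap in parse_gap_report(SAMPLE):
    print(f'P{gap["priority"]} {gap["term"]} ({gap["section"]})')
```

Sorting descending by `priority` surfaces the P3 terms (the ones most worth adding to the resume) ahead of nice-to-haves.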

dev-api.py
File diff suppressed because it is too large.

@@ -0,0 +1,14 @@
#!/bin/sh
# Start the cf-orch agent. Adds --advertise-host only when CF_ORCH_ADVERTISE_HOST is set.
set -e
ARGS="--coordinator ${CF_ORCH_COORDINATOR_URL:-http://host.docker.internal:7700} \
--node-id ${CF_ORCH_NODE_ID:-peregrine} \
--host 0.0.0.0 \
--port ${CF_ORCH_AGENT_PORT:-7701}"
if [ -n "${CF_ORCH_ADVERTISE_HOST}" ]; then
ARGS="$ARGS --advertise-host ${CF_ORCH_ADVERTISE_HOST}"
fi
exec cf-orch agent $ARGS

@@ -22,6 +22,19 @@ server {
add_header Cache-Control "public, immutable";
}
# Handle /peregrine/ base path used when accessed directly (no Caddy prefix stripping).
# ^~ blocks regex location matches so assets at /peregrine/assets/... are served correctly.
location ^~ /peregrine/assets/ {
alias /usr/share/nginx/html/assets/;
expires 1y;
add_header Cache-Control "public, immutable";
}
location /peregrine/ {
alias /usr/share/nginx/html/;
try_files $uri $uri/ /index.html;
}
# SPA fallback must come after API and assets
location / {
try_files $uri $uri/ /index.html;

@@ -4,6 +4,8 @@
Peregrine automates the full job search lifecycle: discovery, matching, cover letter generation, application tracking, and interview preparation. It is privacy-first and local-first — your data never leaves your machine unless you configure an external integration.
![Peregrine dashboard](screenshots/01-dashboard.png)
---
## Quick Start

docs/plausible.js (new file)

@@ -0,0 +1 @@
(function(){var s=document.createElement("script");s.defer=true;s.dataset.domain="docs.circuitforge.tech,circuitforge.tech";s.dataset.api="https://analytics.circuitforge.tech/api/event";s.src="https://analytics.circuitforge.tech/js/script.js";document.head.appendChild(s);})();

(Seven new binary image files added, 80-220 KiB each; not shown.)

@@ -1,5 +1,7 @@
# Apply Workspace
![Peregrine apply workspace with cover letter generator and ATS optimizer](../screenshots/03-apply.png)
The Apply Workspace is where you generate cover letters, export application documents, and record that you have applied to a job.
---

@@ -1,5 +1,7 @@
# Job Review
![Peregrine job review triage](../screenshots/02-review.png)
The Job Review page is where you approve or reject newly discovered jobs before they enter the application pipeline.
---

@@ -0,0 +1,7 @@
-- Add ATS resume optimizer columns introduced in v0.8.x.
-- Existing DBs that were created before the baseline included these columns
-- need this migration to add them. SQLite does not support IF NOT EXISTS for
-- ADD COLUMN, so duplicate-column errors on DBs that already have these columns
-- are caught with a try/ignore pattern at the application level (db_migrate.py
-- wraps each migration in a transaction).
ALTER TABLE jobs ADD COLUMN optimized_resume TEXT;
ALTER TABLE jobs ADD COLUMN ats_gap_report TEXT;
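The try/ignore pattern described in the comment can be sketched like this. This is a hypothetical standalone helper, not the actual `db_migrate.py` logic (which the source says also wraps each migration in a transaction):

```python
import sqlite3

def add_column_if_missing(conn: sqlite3.Connection,
                          table: str, column_ddl: str) -> None:
    """Apply an ADD COLUMN migration, ignoring 'duplicate column' errors.

    SQLite has no IF NOT EXISTS for ADD COLUMN, so re-running a migration
    against a DB that already has the column raises OperationalError.
    """
    try:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column_ddl}")
    except sqlite3.OperationalError as exc:
        if "duplicate column" not in str(exc).lower():
            raise  # a different error: re-raise it

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY)")
add_column_if_missing(conn, "jobs", "optimized_resume TEXT")
add_column_if_missing(conn, "jobs", "optimized_resume TEXT")  # safe no-op
```

The second call hits the duplicate-column error and is silently ignored, which is what makes these ALTER-only migration files safe to re-run.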

@@ -0,0 +1,3 @@
-- Resume review draft and version archive columns (migration 003)
ALTER TABLE jobs ADD COLUMN resume_draft_json TEXT;
ALTER TABLE jobs ADD COLUMN resume_archive_json TEXT;

@@ -0,0 +1,5 @@
-- Migration 004: add resume_final_struct to jobs table
-- Stores the approved resume as a structured JSON dict alongside the plain text
-- (resume_optimized_text). Enables YAML export and future re-processing without
-- re-parsing the plain text.
ALTER TABLE jobs ADD COLUMN resume_final_struct TEXT;

@@ -0,0 +1,6 @@
-- 006_date_posted.sql
-- Add date_posted column for shadow listing detection (stale/shadow score feature).
-- New DBs already have this column from the CREATE TABLE statement in db.py;
-- this migration adds it to existing user DBs.
ALTER TABLE jobs ADD COLUMN date_posted TEXT;

@@ -0,0 +1,22 @@
-- Migration 006: Add columns and tables present in the live DB but missing from migrations
-- These were added via direct ALTER TABLE after the v0.8.5 baseline was written.
-- date_posted: used for ghost-post shadow-score detection
ALTER TABLE jobs ADD COLUMN date_posted TEXT;
-- hired_feedback: JSON blob saved when a job reaches the 'hired' outcome
ALTER TABLE jobs ADD COLUMN hired_feedback TEXT;
-- references_ table: contacts who can provide references for applications
CREATE TABLE IF NOT EXISTS references_ (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
relationship TEXT,
company TEXT,
email TEXT,
phone TEXT,
notes TEXT,
tags TEXT,
prep_email TEXT,
role TEXT
);

@@ -0,0 +1,3 @@
-- 007_resume_sync.sql
-- Add synced_at to resumes: ISO datetime of last library↔profile sync, null = never synced.
ALTER TABLE resumes ADD COLUMN synced_at TEXT;

@@ -0,0 +1,97 @@
-- messages: manual log entries and LLM drafts
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER REFERENCES jobs(id) ON DELETE SET NULL,
job_contact_id INTEGER REFERENCES job_contacts(id) ON DELETE SET NULL,
type TEXT NOT NULL DEFAULT 'email',
direction TEXT,
subject TEXT,
body TEXT,
from_addr TEXT,
to_addr TEXT,
logged_at TEXT NOT NULL DEFAULT (datetime('now')),
approved_at TEXT,
template_id INTEGER REFERENCES message_templates(id) ON DELETE SET NULL,
osprey_call_id TEXT
);
-- message_templates: built-in seeds and user-created templates
CREATE TABLE IF NOT EXISTS message_templates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key TEXT UNIQUE,
title TEXT NOT NULL,
category TEXT NOT NULL DEFAULT 'custom',
subject_template TEXT,
body_template TEXT NOT NULL,
is_builtin INTEGER NOT NULL DEFAULT 0,
is_community INTEGER NOT NULL DEFAULT 0,
community_source TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
INSERT OR IGNORE INTO message_templates
(key, title, category, subject_template, body_template, is_builtin)
VALUES
(
'follow_up',
'Following up on my application',
'follow_up',
'Following up — {{role}} application',
'Hi {{recruiter_name}},
I wanted to follow up on my application for the {{role}} position at {{company}}. I remain very interested in the opportunity and would welcome the chance to discuss my background further.
Please let me know if there is anything else you need from me.
Best regards,
{{name}}',
1
),
(
'thank_you',
'Thank you for the interview',
'thank_you',
'Thank you — {{role}} interview',
'Hi {{recruiter_name}},
Thank you for taking the time to speak with me about the {{role}} role at {{company}}. I enjoyed learning more about the team and the work you are doing.
I am very excited about this opportunity and look forward to hearing about the next steps.
Best regards,
{{name}}',
1
),
(
'accommodation_request',
'Accommodation request',
'accommodation',
'Accommodation request — {{role}} interview',
'Hi {{recruiter_name}},
I am writing to request a reasonable accommodation for my upcoming interview for the {{role}} position. Specifically, I would appreciate:
{{accommodation_details}}
Please let me know if you need any additional information. I am happy to discuss this further.
Thank you,
{{name}}',
1
),
(
'withdrawal',
'Withdrawing my application',
'withdrawal',
'Application withdrawal — {{role}}',
'Hi {{recruiter_name}},
I am writing to let you know that I would like to withdraw my application for the {{role}} position at {{company}}.
Thank you for your time and consideration. I wish you and the team all the best.
Best regards,
{{name}}',
1
)
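
The `{{placeholder}}` tokens in these seed templates can be rendered with a simple substitution pass; a minimal sketch (the `render_template` helper is hypothetical, not part of this changeset):

```python
import re

def render_template(template: str, context: dict[str, str]) -> str:
    """Replace {{key}} tokens; unknown keys are left intact for manual editing."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: context.get(m.group(1), m.group(0)),
        template,
    )

subject = render_template(
    "Following up — {{role}} application",
    {"role": "UX Designer"},
)
# subject == "Following up — UX Designer application"
```

Leaving unknown keys intact (rather than raising) suits drafts the user will edit by hand before sending.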

View file

@ -70,3 +70,6 @@ nav:
- Tier System: reference/tier-system.md
- LLM Router: reference/llm-router.md
- Config Files: reference/config-files.md
extra_javascript:
- plausible.js

92
podman-standalone.sh Executable file
View file

@ -0,0 +1,92 @@
#!/usr/bin/env bash
# podman-standalone.sh — Peregrine rootful Podman setup (no Compose)
#
# For beta testers running system Podman (non-rootless) with systemd.
# Mirrors the manage.sh "remote" profile: app + SearXNG only.
# Ollama/vLLM/vision are expected as host services if needed.
#
# ── Prerequisites ────────────────────────────────────────────────────────────
# 1. Clone the repo:
# sudo git clone <repo-url> /opt/peregrine
#
# 2. Build the app image:
# cd /opt/peregrine && sudo podman build -t localhost/peregrine:latest .
#
# 3. Create a config directory and copy the example configs:
# sudo mkdir -p /opt/peregrine/{config,data}
# for f in /opt/peregrine/config/*.example; do sudo cp "$f" "${f%.example}"; done
# # Edit /opt/peregrine/config/llm.yaml, notion.yaml, etc. as needed
#
# 4. Run this script:
# sudo bash /opt/peregrine/podman-standalone.sh
#
# ── After setup — generate systemd unit files ────────────────────────────────
# sudo podman generate systemd --new --name peregrine-searxng \
# | sudo tee /etc/systemd/system/peregrine-searxng.service
# sudo podman generate systemd --new --name peregrine \
# | sudo tee /etc/systemd/system/peregrine.service
# sudo systemctl daemon-reload
# sudo systemctl enable --now peregrine-searxng peregrine
#
# ── SearXNG ──────────────────────────────────────────────────────────────────
# Peregrine expects a SearXNG instance with JSON format enabled.
# If you already run one, skip the SearXNG container and set the URL in
# config/llm.yaml (searxng_url key). The default is http://localhost:8888.
#
# ── Ports ────────────────────────────────────────────────────────────────────
# Peregrine UI → http://localhost:8501
#
# ── To use a different Streamlit port ────────────────────────────────────────
# Uncomment the CMD override at the bottom of the peregrine run block and
# set --server.port to your desired port. The Dockerfile default is 8501.
#
set -euo pipefail
REPO_DIR=/opt/peregrine
DATA_DIR=/opt/peregrine/data
DOCS_DIR=/Library/Documents/JobSearch # ← adjust to your docs path
TZ=America/Los_Angeles
# ── Peregrine App ─────────────────────────────────────────────────────────────
# Image is built locally — no registry auto-update label.
# To update: sudo podman build -t localhost/peregrine:latest /opt/peregrine
# sudo podman restart peregrine
#
# Env vars: ANTHROPIC_API_KEY, OPENAI_COMPAT_URL, OPENAI_COMPAT_KEY are
# optional — only needed if you're using those backends in config/llm.yaml.
#
sudo podman run -d \
--name=peregrine \
--restart=unless-stopped \
--net=host \
-v ${REPO_DIR}/config:/app/config:Z \
-v ${DATA_DIR}:/app/data:Z \
-v ${DOCS_DIR}:/docs:z \
-e STAGING_DB=/app/data/staging.db \
-e DOCS_DIR=/docs \
-e PYTHONUNBUFFERED=1 \
-e PYTHONLOGGING=WARNING \
-e TZ=${TZ} \
--health-cmd="curl -f http://localhost:8501/_stcore/health || exit 1" \
--health-interval=30s \
--health-timeout=10s \
--health-start-period=60s \
--health-retries=3 \
localhost/peregrine:latest
# To override the default port (8501), uncomment the command below, edit the
# port, and append it after the image name in the podman run command above:
# streamlit run app/app.py --server.port=8501 --server.headless=true --server.fileWatcherType=none
echo ""
echo "Peregrine is starting up."
echo " App: http://localhost:8501"
echo ""
echo "Check container health with:"
echo " sudo podman ps"
echo " sudo podman logs peregrine"
echo ""
echo "To register as a systemd service:"
echo " sudo podman generate systemd --new --name peregrine \\"
echo " | sudo tee /etc/systemd/system/peregrine.service"
echo " sudo systemctl daemon-reload"
echo " sudo systemctl enable --now peregrine"

View file

@ -277,7 +277,8 @@ def _load_resume_and_keywords() -> tuple[dict, list[str]]:
return resume, keywords
def research_company(job: dict, use_scraper: bool = True, on_stage=None) -> dict:
def research_company(job: dict, use_scraper: bool = True, on_stage=None,
config_path: "Path | None" = None) -> dict:
"""
Generate a pre-interview research brief for a job.
@ -295,7 +296,7 @@ def research_company(job: dict, use_scraper: bool = True, on_stage=None) -> dict
"""
from scripts.llm_router import LLMRouter
router = LLMRouter()
router = LLMRouter(config_path=config_path) if config_path else LLMRouter()
research_order = router.config.get("research_fallback_order") or router.config["fallback_order"]
company = job.get("company") or "the company"
title = job.get("title") or "this role"

View file

@ -130,6 +130,32 @@ CREATE TABLE IF NOT EXISTS digest_queue (
)
"""
CREATE_REFERENCES = """
CREATE TABLE IF NOT EXISTS references_ (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
relationship TEXT,
company TEXT,
email TEXT,
phone TEXT,
notes TEXT,
tags TEXT DEFAULT '[]',
created_at TEXT DEFAULT (datetime('now')),
updated_at TEXT DEFAULT (datetime('now'))
);
"""
CREATE_JOB_REFERENCES = """
CREATE TABLE IF NOT EXISTS job_references (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
reference_id INTEGER NOT NULL REFERENCES references_(id) ON DELETE CASCADE,
prep_email TEXT,
rec_letter TEXT,
UNIQUE(job_id, reference_id)
);
"""
_MIGRATIONS = [
("cover_letter", "TEXT"),
("applied_at", "TEXT"),
@ -143,6 +169,8 @@ _MIGRATIONS = [
("calendar_event_id", "TEXT"),
("optimized_resume", "TEXT"), # ATS-rewritten resume text (paid tier)
("ats_gap_report", "TEXT"), # JSON gap report (free tier)
("date_posted", "TEXT"), # Original posting date from job board (shadow listing detection)
("hired_feedback", "TEXT"), # JSON: optional post-hire "what helped" response
]
@ -176,6 +204,9 @@ def _migrate_db(db_path: Path) -> None:
conn.execute("ALTER TABLE background_tasks ADD COLUMN params TEXT")
except sqlite3.OperationalError:
pass # column already exists
# Ensure references tables exist (CREATE IF NOT EXISTS is idempotent)
conn.execute(CREATE_REFERENCES)
conn.execute(CREATE_JOB_REFERENCES)
conn.commit()
conn.close()
@ -189,6 +220,8 @@ def init_db(db_path: Path = DEFAULT_DB) -> None:
conn.execute(CREATE_BACKGROUND_TASKS)
conn.execute(CREATE_SURVEY_RESPONSES)
conn.execute(CREATE_DIGEST_QUEUE)
conn.execute(CREATE_REFERENCES)
conn.execute(CREATE_JOB_REFERENCES)
conn.commit()
conn.close()
_migrate_db(db_path)
@ -202,8 +235,8 @@ def insert_job(db_path: Path = DEFAULT_DB, job: dict = None) -> Optional[int]:
try:
cursor = conn.execute(
"""INSERT INTO jobs
(title, company, url, source, location, is_remote, salary, description, date_found)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(title, company, url, source, location, is_remote, salary, description, date_found, date_posted)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)""",
(
job.get("title", ""),
job.get("company", ""),
@ -214,6 +247,7 @@ def insert_job(db_path: Path = DEFAULT_DB, job: dict = None) -> Optional[int]:
job.get("salary", ""),
job.get("description", ""),
job.get("date_found", ""),
job.get("date_posted", "") or "",
),
)
conn.commit()
@ -939,6 +973,7 @@ def _resume_as_dict(row) -> dict:
"is_default": row["is_default"],
"created_at": row["created_at"],
"updated_at": row["updated_at"],
"synced_at": row["synced_at"] if "synced_at" in row.keys() else None,
}
@ -1040,6 +1075,44 @@ def set_default_resume(db_path: Path = DEFAULT_DB, resume_id: int = 0) -> None:
conn.close()
def update_resume_synced_at(db_path: Path = DEFAULT_DB, resume_id: int = 0) -> None:
"""Mark a library entry as synced to the profile (library→profile direction)."""
conn = sqlite3.connect(db_path)
try:
conn.execute(
"UPDATE resumes SET synced_at=datetime('now') WHERE id=?",
(resume_id,),
)
conn.commit()
finally:
conn.close()
def update_resume_content(
db_path: Path = DEFAULT_DB,
resume_id: int = 0,
text: str = "",
struct_json: str | None = None,
) -> None:
"""Update text, struct_json, and synced_at for a library entry.
Called by the profile→library sync path (PUT /api/settings/resume).
"""
word_count = len(text.split()) if text else 0
conn = sqlite3.connect(db_path)
try:
conn.execute(
"""UPDATE resumes
SET text=?, struct_json=?, word_count=?,
synced_at=datetime('now'), updated_at=datetime('now')
WHERE id=?""",
(text, struct_json, word_count, resume_id),
)
conn.commit()
finally:
conn.close()
def get_job_resume(db_path: Path = DEFAULT_DB, job_id: int = 0) -> dict | None:
"""Return the resume for a job: job-specific first, then default, then None."""
conn = sqlite3.connect(db_path)

View file

@ -56,7 +56,56 @@ def migrate_db(db_path: Path) -> list[str]:
sql = path.read_text(encoding="utf-8")
log.info("Applying migration %s to %s", version, db_path.name)
try:
con.executescript(sql)
# Execute statements individually so that ALTER TABLE ADD COLUMN
# errors caused by already-existing columns (pre-migration DBs
# created from a newer schema) are treated as no-ops rather than
# fatal failures.
statements = [s.strip() for s in sql.split(";") if s.strip()]
for stmt in statements:
# Strip leading SQL comment lines (-- ...) before processing.
# Checking startswith("--") on the raw chunk would skip entire
# multi-line statements whose first line is a comment.
stripped_lines = [
ln for ln in stmt.splitlines()
if not ln.strip().startswith("--")
]
stmt = "\n".join(stripped_lines).strip()
if not stmt:
continue
# Pre-check: if this is ADD COLUMN and the column already exists, skip.
# This guards against schema_migrations being ahead of the actual schema
# (e.g. DB reset after migrations were recorded).
stmt_upper = stmt.upper()
if "ALTER TABLE" in stmt_upper and "ADD COLUMN" in stmt_upper:
# Extract table name and column name from the statement
import re as _re
m = _re.match(
r"ALTER\s+TABLE\s+(\w+)\s+ADD\s+COLUMN\s+(\w+)",
stmt, _re.IGNORECASE
)
if m:
tbl, col = m.group(1), m.group(2)
existing = {
row[1]
for row in con.execute(f"PRAGMA table_info({tbl})")
}
if col in existing:
log.info(
"Migration %s: column %s.%s already exists, skipping",
version, tbl, col,
)
continue
try:
con.execute(stmt)
except sqlite3.OperationalError as stmt_exc:
msg = str(stmt_exc).lower()
if "duplicate column name" in msg or "already exists" in msg:
log.info(
"Migration %s: statement already applied, skipping: %s",
version, stmt_exc,
)
else:
raise
con.execute(
"INSERT INTO schema_migrations (version) VALUES (?)", (version,)
)

View file

@ -34,11 +34,38 @@ CUSTOM_SCRAPERS: dict[str, object] = {
}
def _normalize_profiles(raw: dict) -> dict:
"""Normalize search_profiles.yaml to the canonical {profiles: [...]} format.
The onboarding wizard (pre-fix) wrote a flat `default: {...}` structure.
Canonical format is `profiles: [{name, titles/job_titles, boards, ...}]`.
This converts on load so both formats work without a migration.
"""
if "profiles" in raw:
return raw
# Wizard-written format: top-level keys are profile names (usually "default")
profiles = []
for name, body in raw.items():
if not isinstance(body, dict):
continue
# job_boards: [{name, enabled}] → boards: [name] (enabled only)
job_boards = body.pop("job_boards", None)
if job_boards and "boards" not in body:
body["boards"] = [b["name"] for b in job_boards if b.get("enabled", True)]
# blocklist_* keys live in load_blocklist, not per-profile — drop them
body.pop("blocklist_companies", None)
body.pop("blocklist_industries", None)
body.pop("blocklist_locations", None)
profiles.append({"name": name, **body})
return {"profiles": profiles}
def load_config(config_dir: Path | None = None) -> tuple[dict, dict]:
cfg = config_dir or CONFIG_DIR
profiles_path = cfg / "search_profiles.yaml"
notion_path = cfg / "notion.yaml"
profiles = yaml.safe_load(profiles_path.read_text())
raw = yaml.safe_load(profiles_path.read_text()) or {}
profiles = _normalize_profiles(raw)
notion_cfg = yaml.safe_load(notion_path.read_text()) if notion_path.exists() else {"field_map": {}, "token": None, "database_id": None}
return profiles, notion_cfg
@ -212,14 +239,43 @@ def run_discovery(db_path: Path = DEFAULT_DB, notion_push: bool = False, config_
_rp = profile.get("remote_preference", "both")
_is_remote: bool | None = True if _rp == "remote" else (False if _rp == "onsite" else None)
# When filtering for remote-only, also drop hybrid roles at the description level.
# Job boards (especially LinkedIn) tag hybrid listings as is_remote=True, so the
# board-side filter alone is not reliable. We match specific work-arrangement
# phrases to avoid false positives like "hybrid cloud" or "hybrid architecture".
_HYBRID_PHRASES = [
"hybrid role", "hybrid position", "hybrid work", "hybrid schedule",
"hybrid model", "hybrid arrangement", "hybrid opportunity",
"in-office/remote", "in office/remote", "remote/in-office",
"remote/office", "office/remote",
"days in office", "days per week in", "days onsite", "days on-site",
"required to be in office", "required in office",
]
if _rp == "remote":
exclude_kw = exclude_kw + _HYBRID_PHRASES
for location in profile["locations"]:
# ── JobSpy boards ──────────────────────────────────────────────────
if boards:
print(f" [jobspy] {location} — boards: {', '.join(boards)}")
# Validate boards against the installed JobSpy Site enum.
# One unsupported name in the list aborts the entire scrape_jobs() call.
try:
from jobspy import Site as _Site
_valid = {s.value for s in _Site}
_filtered = [b for b in boards if b in _valid]
_dropped = [b for b in boards if b not in _valid]
if _dropped:
print(f" [jobspy] Skipping unsupported boards: {', '.join(_dropped)}")
except ImportError:
_filtered = boards # fallback: pass through unchanged
if not _filtered:
print(f" [jobspy] No valid boards for {location} — skipping")
continue
print(f" [jobspy] {location} — boards: {', '.join(_filtered)}")
try:
jobspy_kwargs: dict = dict(
site_name=boards,
site_name=_filtered,
search_term=" OR ".join(f'"{t}"' for t in (profile.get("titles") or profile.get("job_titles", []))),
location=location,
results_wanted=results_per_board,
@ -251,6 +307,10 @@ def run_discovery(db_path: Path = DEFAULT_DB, notion_push: bool = False, config_
elif job_dict.get("salary_source") and str(job_dict["salary_source"]) not in ("nan", "None", ""):
salary_str = str(job_dict["salary_source"])
_dp = job_dict.get("date_posted")
date_posted_str = (
_dp.isoformat() if hasattr(_dp, "isoformat") else str(_dp)
) if _dp and str(_dp) not in ("nan", "None", "") else ""
row = {
"url": url,
"title": _s(job_dict.get("title")),
@ -260,6 +320,7 @@ def run_discovery(db_path: Path = DEFAULT_DB, notion_push: bool = False, config_
"is_remote": bool(job_dict.get("is_remote", False)),
"salary": salary_str,
"description": _s(job_dict.get("description")),
"date_posted": date_posted_str,
"_exclude_kw": exclude_kw,
}
if _insert_if_new(row, _s(job_dict.get("site"))):

View file

@ -16,6 +16,8 @@ import re
import sys
from pathlib import Path
import yaml
sys.path.insert(0, str(Path(__file__).parent.parent))
from scripts.user_profile import UserProfile
@ -40,107 +42,57 @@ def _build_system_context(profile=None) -> str:
return " ".join(parts)
SYSTEM_CONTEXT = _build_system_context()
_candidate = _profile.name if _profile else "the candidate"
# ── Mission-alignment detection ───────────────────────────────────────────────
# When a company/JD signals one of these preferred industries, the cover letter
# prompt injects a hint so Para 3 can reflect genuine personal connection.
# Domains and their keyword signals are loaded from config/mission_domains.yaml.
# This does NOT disclose any personal disability or family information.
_MISSION_DOMAINS_PATH = Path(__file__).parent.parent / "config" / "mission_domains.yaml"
def load_mission_domains(path: Path | None = None) -> dict[str, dict]:
"""Load mission domain config from YAML. Returns dict keyed by domain name."""
p = path or _MISSION_DOMAINS_PATH
if not p.exists():
return {}
with p.open(encoding="utf-8") as fh:
data = yaml.safe_load(fh)
return data.get("domains", {}) if data else {}
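
Given the accessors in this change (`data.get("domains")`, `cfg.get("signals")`, `cfg.get("default_note")`), a `mission_domains.yaml` entry would look roughly like this (hypothetical values, shape inferred from the loader):

```yaml
domains:
  music:
    signals: ["music", "spotify", "playlist"]
    default_note: >
      This company is in the music industry; Para 3 should warmly and
      specifically reflect this authentic alignment.
```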
_MISSION_DOMAINS: dict[str, dict] = load_mission_domains()
_MISSION_SIGNALS: dict[str, list[str]] = {
"music": [
"music", "spotify", "tidal", "soundcloud", "bandcamp", "apple music",
"distrokid", "cd baby", "landr", "beatport", "reverb", "vinyl",
"streaming", "artist", "label", "live nation", "ticketmaster", "aeg",
"songkick", "concert", "venue", "festival", "audio", "podcast",
"studio", "record", "musician", "playlist",
],
"animal_welfare": [
"animal", "shelter", "rescue", "humane society", "spca", "aspca",
"veterinary", "vet ", "wildlife", "pet ", "adoption", "foster",
"dog", "cat", "feline", "canine", "sanctuary", "zoo",
],
"education": [
"education", "school", "learning", "student", "edtech", "classroom",
"curriculum", "tutoring", "academic", "university", "kids", "children",
"youth", "literacy", "khan academy", "duolingo", "chegg", "coursera",
"instructure", "canvas lms", "clever", "district", "teacher",
"k-12", "k12", "grade", "pedagogy",
],
"social_impact": [
"nonprofit", "non-profit", "501(c)", "social impact", "mission-driven",
"public benefit", "community", "underserved", "equity", "justice",
"humanitarian", "advocacy", "charity", "foundation", "ngo",
"social good", "civic", "public health", "mental health", "food security",
"housing", "homelessness", "poverty", "workforce development",
],
# Health is listed last — it's a genuine but lower-priority connection than
# music/animals/education/social_impact. detect_mission_alignment returns on first
# match, so dict order = preference order.
"health": [
"patient", "patients", "healthcare", "health tech", "healthtech",
"pharma", "pharmaceutical", "clinical", "medical",
"hospital", "clinic", "therapy", "therapist",
"rare disease", "life sciences", "life science",
"treatment", "prescription", "biotech", "biopharma", "medtech",
"behavioral health", "population health",
"care management", "care coordination", "oncology", "specialty pharmacy",
"provider network", "payer", "health plan", "benefits administration",
"ehr", "emr", "fhir", "hipaa",
],
}
_candidate = _profile.name if _profile else "the candidate"
_MISSION_DEFAULTS: dict[str, str] = {
"music": (
f"This company is in the music industry — an industry {_candidate} finds genuinely "
"compelling. Para 3 should warmly and specifically reflect this authentic alignment, "
"not as a generic fan statement, but as an honest statement of where they'd love to "
"apply their skills."
),
"animal_welfare": (
f"This organization works in animal welfare/rescue — a mission {_candidate} finds "
"genuinely meaningful. Para 3 should reflect this authentic connection warmly and "
"specifically, tying their skills to this mission."
),
"education": (
f"This company works in education or EdTech — a domain that resonates with "
f"{_candidate}'s values. Para 3 should reflect this authentic connection specifically "
"and warmly."
),
"social_impact": (
f"This organization is mission-driven / social impact focused — exactly the kind of "
f"cause {_candidate} cares deeply about. Para 3 should warmly reflect their genuine "
"desire to apply their skills to work that makes a real difference in people's lives."
),
"health": (
f"This company works in healthcare, life sciences, or patient care. "
f"Do NOT write about {_candidate}'s passion for pharmaceuticals or healthcare as an "
"industry. Instead, Para 3 should reflect genuine care for the PEOPLE these companies "
"exist to serve — those navigating complex, often invisible, or unusual health journeys; "
"patients facing rare or poorly understood conditions; individuals whose situations don't "
"fit a clean category. The connection is to the humans behind the data, not the industry. "
"If the user has provided a personal note, use that to anchor Para 3 specifically."
),
domain: cfg.get("signals", []) for domain, cfg in _MISSION_DOMAINS.items()
}
def _build_mission_notes(profile=None, candidate_name: str | None = None) -> dict[str, str]:
"""Merge user's custom mission notes with generic defaults."""
"""Merge user's custom mission notes with YAML defaults.
For domains defined in mission_domains.yaml the default_note is used when
the user has not provided a custom note in user.yaml mission_preferences.
For user-defined domains (keys in mission_preferences that are NOT in the
YAML config), the custom note is used as-is; no signal detection applies.
"""
p = profile or _profile
name = candidate_name or _candidate
name = candidate_name or (p.name if p else "the candidate")
prefs = p.mission_preferences if p else {}
notes = {}
for industry, default_note in _MISSION_DEFAULTS.items():
custom = (prefs.get(industry) or "").strip()
notes: dict[str, str] = {}
for domain, cfg in _MISSION_DOMAINS.items():
default_note = (cfg.get("default_note") or "").strip()
custom = (prefs.get(domain) or "").strip()
if custom:
notes[industry] = (
notes[domain] = (
f"Mission alignment — {name} shared: \"{custom}\". "
"Para 3 should warmly and specifically reflect this authentic connection."
)
else:
notes[industry] = default_note
notes[domain] = default_note
return notes
@ -150,12 +102,15 @@ _MISSION_NOTES = _build_mission_notes()
def detect_mission_alignment(
company: str, description: str, mission_notes: dict | None = None
) -> str | None:
"""Return a mission hint string if company/JD matches a preferred industry, else None."""
"""Return a mission hint string if company/JD matches a configured domain, else None.
Checks domains in YAML file order (dict order = match priority).
"""
notes = mission_notes if mission_notes is not None else _MISSION_NOTES
text = f"{company} {description}".lower()
for industry, signals in _MISSION_SIGNALS.items():
for domain, signals in _MISSION_SIGNALS.items():
if any(sig in text for sig in signals):
return notes[industry]
return notes.get(domain)
return None

View file

@ -0,0 +1,254 @@
#!/usr/bin/env python3
"""
Generate demo/seed.sql, the committed seed INSERT statements for the demo DB.
Run whenever seed data needs to change:
conda run -n cf python scripts/generate_demo_seed.py
Outputs pure INSERT SQL (no DDL). Schema migrations are handled by db_migrate.py
at container startup. The seed SQL is loaded after migrations complete.
"""
from __future__ import annotations
from datetime import date, timedelta
from pathlib import Path
OUT_PATH = Path(__file__).parent.parent / "demo" / "seed.sql"
TODAY = date.today()
def _dago(n: int) -> str:
return (TODAY - timedelta(days=n)).isoformat()
def _dfrom(n: int) -> str:
return (TODAY + timedelta(days=n)).isoformat()
COVER_LETTER_SPOTIFY = """\
Dear Hiring Manager,
I'm excited to apply for the UX Designer role at Spotify. With five years of
experience designing for music discovery and cross-platform experiences, I've
consistently shipped features that make complex audio content feel effortless to
navigate. At my last role I led a redesign of the playlist creation flow that
reduced drop-off by 31%.
Spotify's commitment to artist and listener discovery — and its recent push into
audiobooks and podcast tooling — aligns directly with the kind of cross-format
design challenges I'm most energised by.
I'd love to bring that focus to your product design team.
Warm regards,
[Your name]
"""
SQL_PARTS: list[str] = []
# ── Jobs ──────────────────────────────────────────────────────────────────────
# Columns: title, company, url, source, location, is_remote, salary,
# match_score, status, date_found, date_posted, cover_letter,
# applied_at, phone_screen_at, interviewing_at, offer_at, hired_at,
# interview_date, rejection_stage, hired_feedback
JOBS: list[tuple] = [
# ---- Review queue (12 jobs — mix of pending + approved) ------------------
("UX Designer",
"Spotify", "https://www.linkedin.com/jobs/view/1000001",
"linkedin", "Remote", 1, "$110k$140k",
94.0, "approved", _dago(1), _dago(3), COVER_LETTER_SPOTIFY,
None, None, None, None, None, None, None, None),
("Product Designer",
"Duolingo", "https://www.linkedin.com/jobs/view/1000002",
"linkedin", "Pittsburgh, PA", 0, "$95k$120k",
87.0, "approved", _dago(2), _dago(5), "Draft in progress — cover letter generating…",
None, None, None, None, None, None, None, None),
("UX Lead",
"NPR", "https://www.indeed.com/viewjob?jk=1000003",
"indeed", "Washington, DC", 1, "$120k$150k",
81.0, "approved", _dago(3), _dago(7), None,
None, None, None, None, None, None, None, None),
# Ghost post — date_posted 34 days ago → shadow indicator
("Senior UX Designer",
"Mozilla", "https://www.linkedin.com/jobs/view/1000004",
"linkedin", "Remote", 1, "$105k$130k",
81.0, "pending", _dago(2), _dago(34), None,
None, None, None, None, None, None, None, None),
("Interaction Designer",
"Figma", "https://www.indeed.com/viewjob?jk=1000005",
"indeed", "San Francisco, CA", 1, "$115k$145k",
78.0, "pending", _dago(4), _dago(6), None,
None, None, None, None, None, None, None, None),
("Product Designer II",
"Notion", "https://www.linkedin.com/jobs/view/1000006",
"linkedin", "Remote", 1, "$100k$130k",
76.0, "pending", _dago(5), _dago(8), None,
None, None, None, None, None, None, None, None),
("UX Designer",
"Stripe", "https://www.linkedin.com/jobs/view/1000007",
"linkedin", "Remote", 1, "$120k$150k",
74.0, "pending", _dago(6), _dago(9), None,
None, None, None, None, None, None, None, None),
("UI/UX Designer",
"Canva", "https://www.indeed.com/viewjob?jk=1000008",
"indeed", "Remote", 1, "$90k$115k",
72.0, "pending", _dago(7), _dago(10), None,
None, None, None, None, None, None, None, None),
("Senior Product Designer",
"Asana", "https://www.linkedin.com/jobs/view/1000009",
"linkedin", "San Francisco, CA", 1, "$125k$155k",
69.0, "pending", _dago(8), _dago(11), None,
None, None, None, None, None, None, None, None),
("UX Researcher",
"Intercom", "https://www.indeed.com/viewjob?jk=1000010",
"indeed", "Remote", 1, "$95k$120k",
67.0, "pending", _dago(9), _dago(12), None,
None, None, None, None, None, None, None, None),
("Product Designer",
"Linear", "https://www.linkedin.com/jobs/view/1000011",
"linkedin", "Remote", 1, "$110k$135k",
65.0, "pending", _dago(10), _dago(13), None,
None, None, None, None, None, None, None, None),
("UX Designer",
"Loom", "https://www.indeed.com/viewjob?jk=1000012",
"indeed", "Remote", 1, "$90k$110k",
62.0, "pending", _dago(11), _dago(14), None,
None, None, None, None, None, None, None, None),
# ---- Pipeline jobs (applied → hired) ------------------------------------
("Senior Product Designer",
"Asana", "https://www.asana.com/jobs/1000013",
"linkedin", "San Francisco, CA", 1, "$125k$155k",
91.0, "phone_screen", _dago(14), _dago(16), None,
_dago(7), _dfrom(0), None, None, None,
f"{_dfrom(0)}T14:00:00", None, None),
("Product Designer",
"Notion", "https://www.notion.so/jobs/1000014",
"indeed", "Remote", 1, "$100k$130k",
88.0, "interviewing", _dago(21), _dago(23), None,
_dago(14), _dago(10), _dago(3), None, None,
f"{_dfrom(7)}T10:00:00", None, None),
("Design Systems Designer",
"Figma", "https://www.figma.com/jobs/1000015",
"linkedin", "San Francisco, CA", 1, "$130k$160k",
96.0, "hired", _dago(45), _dago(47), None,
_dago(38), _dago(32), _dago(25), _dago(14), _dago(7),
None, None,
'{"factors":["clear_scope","great_manager","mission_aligned"],"notes":"Excited about design systems work. Salary met expectations."}'),
("UX Designer",
"Slack", "https://slack.com/jobs/1000016",
"indeed", "Remote", 1, "$115k$140k",
79.0, "applied", _dago(28), _dago(30), None,
_dago(18), None, None, None, None, None, None, None),
]
def _q(v: object) -> str:
"""SQL-quote a Python value."""
if v is None:
return "NULL"
return "'" + str(v).replace("'", "''") + "'"
_JOB_COLS = (
"title, company, url, source, location, is_remote, salary, "
"match_score, status, date_found, date_posted, cover_letter, "
"applied_at, phone_screen_at, interviewing_at, offer_at, hired_at, "
"interview_date, rejection_stage, hired_feedback"
)
SQL_PARTS.append("-- jobs")
for job in JOBS:
vals = ", ".join(_q(v) for v in job)
SQL_PARTS.append(f"INSERT INTO jobs ({_JOB_COLS}) VALUES ({vals});")
# ── Contacts ──────────────────────────────────────────────────────────────────
# (job_id, direction, subject, from_addr, to_addr, received_at, stage_signal)
CONTACTS: list[tuple] = [
(1, "inbound", "Excited to connect — UX Designer role at Spotify",
"jamie.chen@spotify.com", "you@example.com", _dago(3), "positive_response"),
(1, "outbound", "Re: Excited to connect — UX Designer role at Spotify",
"you@example.com", "jamie.chen@spotify.com", _dago(2), None),
(13, "inbound", "Interview Confirmation — Senior Product Designer",
"recruiting@asana.com", "you@example.com", _dago(2), "interview_scheduled"),
(14, "inbound", "Your panel interview is confirmed for Apr 22",
"recruiting@notion.so", "you@example.com", _dago(3), "interview_scheduled"),
(14, "inbound", "Pre-interview prep resources",
"marcus.webb@notion.so", "you@example.com", _dago(2), "positive_response"),
(15, "inbound", "Figma Design Systems — Offer Letter",
"offers@figma.com", "you@example.com", _dago(14), "offer_received"),
(15, "outbound", "Re: Figma Design Systems — Offer Letter (acceptance)",
"you@example.com", "offers@figma.com", _dago(10), None),
(15, "inbound", "Welcome to Figma! Onboarding next steps",
"onboarding@figma.com", "you@example.com", _dago(7), None),
(16, "inbound", "Thanks for applying to Slack",
"noreply@slack.com", "you@example.com", _dago(18), None),
]
SQL_PARTS.append("\n-- job_contacts")
for c in CONTACTS:
job_id, direction, subject, from_addr, to_addr, received_at, stage_signal = c
SQL_PARTS.append(
f"INSERT INTO job_contacts "
f"(job_id, direction, subject, from_addr, to_addr, received_at, stage_signal) "
f"VALUES ({job_id}, {_q(direction)}, {_q(subject)}, {_q(from_addr)}, "
f"{_q(to_addr)}, {_q(received_at)}, {_q(stage_signal)});"
)
# ── References ────────────────────────────────────────────────────────────────
# (name, email, role, company, relationship, notes, tags, prep_email)
REFERENCES: list[tuple] = [
("Dr. Priya Nair", "priya.nair@example.com", "Director of Design", "Acme Corp",
"former_manager",
"Managed me for 3 years on the consumer app redesign. Enthusiastic reference.",
'["manager","design"]',
"Hi Priya,\n\nI hope you're doing well! I'm currently interviewing for a few senior UX roles "
"and would be so grateful if you'd be willing to serve as a reference.\n\nThank you!\n[Your name]"),
("Sam Torres", "sam.torres@example.com", "Senior Product Designer", "Acme Corp",
"former_colleague",
"Worked together on design systems. Great at speaking to collaborative process.",
'["colleague","design_systems"]', None),
("Jordan Kim", "jordan.kim@example.com", "VP of Product", "Streamline Inc",
"former_manager",
"Led the product team I was embedded in. Can speak to business impact of design work.",
'["manager","product"]', None),
]
SQL_PARTS.append("\n-- references_")
for ref in REFERENCES:
name, email, role, company, relationship, notes, tags, prep_email = ref
SQL_PARTS.append(
f"INSERT INTO references_ "
f"(name, email, role, company, relationship, notes, tags, prep_email) "
f"VALUES ({_q(name)}, {_q(email)}, {_q(role)}, {_q(company)}, "
f"{_q(relationship)}, {_q(notes)}, {_q(tags)}, {_q(prep_email)});"
)
# ── Write output ──────────────────────────────────────────────────────────────
output = "\n".join(SQL_PARTS) + "\n"
OUT_PATH.write_text(output, encoding="utf-8")
print(
f"Wrote {OUT_PATH} "
f"({len(JOBS)} jobs, {len(CONTACTS)} contacts, {len(REFERENCES)} references)"
)


@ -0,0 +1,42 @@
# BSL 1.1 — see LICENSE-BSL
"""LLM-assisted reply draft generation for inbound job contacts (BSL 1.1)."""
from __future__ import annotations
from pathlib import Path
from typing import Optional
_SYSTEM = (
"You are drafting a professional email reply on behalf of a job seeker. "
"Be concise and professional. Do not fabricate facts. If you are uncertain "
"about a detail, leave a [TODO: fill in] placeholder. "
"Output the reply body only — no subject line, no salutation preamble."
)
def _build_prompt(subject: str, from_addr: str, body: str, user_name: str, target_role: str) -> str:
return (
f"ORIGINAL EMAIL:\n"
f"Subject: {subject}\n"
f"From: {from_addr}\n"
f"Body:\n{body}\n\n"
f"USER PROFILE CONTEXT:\n"
f"Name: {user_name}\n"
f"Target role: {target_role}\n\n"
"Write a concise, professional reply to this email."
)
def generate_draft_reply(
subject: str,
from_addr: str,
body: str,
user_name: str,
target_role: str,
config_path: Optional[Path] = None,
) -> str:
"""Return a draft reply body string."""
from scripts.llm_router import LLMRouter
router = LLMRouter(config_path=config_path)
prompt = _build_prompt(subject, from_addr, body, user_name, target_role)
return router.complete(system=_SYSTEM, user=prompt).strip()
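Because the system prompt tells the model to leave a `[TODO: fill in]` placeholder for any detail it is unsure of, a caller can cheaply flag drafts that still need human edits before approval. A minimal sketch (`draft_needs_review` is a hypothetical helper, not part of this module):

```python
def draft_needs_review(draft: str) -> bool:
    # The system prompt instructs the LLM to emit "[TODO: fill in]" instead
    # of fabricating facts, so its presence means the draft needs editing.
    return "[TODO" in draft

draft = "Thanks for reaching out! I'm available [TODO: fill in] next week."
print(draft_needs_review(draft))  # True
```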

scripts/messaging.py Normal file

@ -0,0 +1,285 @@
"""
DB helpers for the messaging feature.
Messages table: manual log entries and LLM drafts (one row per message).
Message templates table: built-in seeds and user-created templates.
Conventions (match scripts/db.py):
- All functions take db_path: Path as first argument.
- sqlite3.connect(db_path), row_factory = sqlite3.Row
- Return plain dicts (dict(row))
- Always close connection in finally
"""
import sqlite3
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _connect(db_path: Path) -> sqlite3.Connection:
con = sqlite3.connect(db_path)
con.row_factory = sqlite3.Row
return con
def _now_utc() -> str:
"""Return current UTC time as ISO 8601 string."""
return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
# ---------------------------------------------------------------------------
# Messages
# ---------------------------------------------------------------------------
def create_message(
db_path: Path,
*,
job_id: Optional[int],
job_contact_id: Optional[int],
type: str,
direction: str,
subject: Optional[str],
body: Optional[str],
from_addr: Optional[str],
to_addr: Optional[str],
template_id: Optional[int],
logged_at: Optional[str] = None,
) -> dict:
"""Insert a new message row and return it as a dict."""
con = _connect(db_path)
try:
cur = con.execute(
"""
INSERT INTO messages
(job_id, job_contact_id, type, direction, subject, body,
from_addr, to_addr, logged_at, template_id)
VALUES
(?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(job_id, job_contact_id, type, direction, subject, body,
from_addr, to_addr, logged_at or _now_utc(), template_id),
)
con.commit()
row = con.execute(
"SELECT * FROM messages WHERE id = ?", (cur.lastrowid,)
).fetchone()
return dict(row)
finally:
con.close()
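The messages table migration is not part of this diff, so the following is a sketch of the insert-then-fetch pattern `create_message` uses, against an assumed minimal schema:

```python
import sqlite3

# Assumed minimal schema; the real migration is not shown in this diff.
con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
con.execute(
    "CREATE TABLE messages ("
    " id INTEGER PRIMARY KEY, job_id INTEGER, job_contact_id INTEGER,"
    " type TEXT, direction TEXT, subject TEXT, body TEXT,"
    " from_addr TEXT, to_addr TEXT, logged_at TEXT,"
    " approved_at TEXT, template_id INTEGER)"
)
cur = con.execute(
    "INSERT INTO messages (job_id, type, direction, subject, logged_at)"
    " VALUES (?, ?, ?, ?, ?)",
    (1, "email", "outbound", "Follow-up", "2026-04-21 10:00:00"),
)
con.commit()
# Re-selecting by lastrowid brings back column defaults and NULLs too,
# which is why create_message returns dict(row) rather than its inputs.
row = con.execute("SELECT * FROM messages WHERE id = ?", (cur.lastrowid,)).fetchone()
d = dict(row)
con.close()
print(d["subject"], d["approved_at"])
```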
def list_messages(
db_path: Path,
*,
job_id: Optional[int] = None,
type: Optional[str] = None,
direction: Optional[str] = None,
limit: int = 100,
) -> list[dict]:
"""Return messages, optionally filtered. Ordered by logged_at DESC."""
conditions: list[str] = []
params: list = []
if job_id is not None:
conditions.append("job_id = ?")
params.append(job_id)
if type is not None:
conditions.append("type = ?")
params.append(type)
if direction is not None:
conditions.append("direction = ?")
params.append(direction)
where = ("WHERE " + " AND ".join(conditions)) if conditions else ""
params.append(limit)
con = _connect(db_path)
try:
rows = con.execute(
f"SELECT * FROM messages {where} ORDER BY logged_at DESC LIMIT ?",
params,
).fetchall()
return [dict(r) for r in rows]
finally:
con.close()
def delete_message(db_path: Path, message_id: int) -> None:
"""Delete a message by id. Raises KeyError if not found."""
con = _connect(db_path)
try:
row = con.execute(
"SELECT id FROM messages WHERE id = ?", (message_id,)
).fetchone()
if row is None:
raise KeyError(f"Message {message_id} not found")
con.execute("DELETE FROM messages WHERE id = ?", (message_id,))
con.commit()
finally:
con.close()
def approve_message(db_path: Path, message_id: int) -> dict:
"""Set approved_at to now for the given message. Raises KeyError if not found."""
con = _connect(db_path)
try:
row = con.execute(
"SELECT id FROM messages WHERE id = ?", (message_id,)
).fetchone()
if row is None:
raise KeyError(f"Message {message_id} not found")
con.execute(
"UPDATE messages SET approved_at = ? WHERE id = ?",
(_now_utc(), message_id),
)
con.commit()
updated = con.execute(
"SELECT * FROM messages WHERE id = ?", (message_id,)
).fetchone()
return dict(updated)
finally:
con.close()
# ---------------------------------------------------------------------------
# Templates
# ---------------------------------------------------------------------------
def list_templates(db_path: Path) -> list[dict]:
"""Return all templates ordered by is_builtin DESC, then title ASC."""
con = _connect(db_path)
try:
rows = con.execute(
"SELECT * FROM message_templates ORDER BY is_builtin DESC, title ASC"
).fetchall()
return [dict(r) for r in rows]
finally:
con.close()
def create_template(
db_path: Path,
*,
title: str,
category: str = "custom",
subject_template: Optional[str] = None,
body_template: str,
) -> dict:
"""Insert a new user-defined template and return it as a dict."""
con = _connect(db_path)
try:
cur = con.execute(
"""
INSERT INTO message_templates
(title, category, subject_template, body_template, is_builtin)
VALUES
(?, ?, ?, ?, 0)
""",
(title, category, subject_template, body_template),
)
con.commit()
row = con.execute(
"SELECT * FROM message_templates WHERE id = ?", (cur.lastrowid,)
).fetchone()
return dict(row)
finally:
con.close()
def update_template(db_path: Path, template_id: int, **fields) -> dict:
"""
Update allowed fields on a user-defined template.
Raises PermissionError if the template is a built-in (is_builtin=1).
Raises KeyError if the template is not found.
"""
if not fields:
# Nothing to update — just return current state
con = _connect(db_path)
try:
row = con.execute(
"SELECT * FROM message_templates WHERE id = ?", (template_id,)
).fetchone()
if row is None:
raise KeyError(f"Template {template_id} not found")
return dict(row)
finally:
con.close()
_ALLOWED_FIELDS = {
"title", "category", "subject_template", "body_template",
}
invalid = set(fields) - _ALLOWED_FIELDS
if invalid:
raise ValueError(f"Cannot update field(s): {invalid}")
con = _connect(db_path)
try:
row = con.execute(
"SELECT id, is_builtin FROM message_templates WHERE id = ?",
(template_id,),
).fetchone()
if row is None:
raise KeyError(f"Template {template_id} not found")
if row["is_builtin"]:
raise PermissionError(
f"Template {template_id} is a built-in and cannot be modified"
)
set_clause = ", ".join(f"{col} = ?" for col in fields)
values = list(fields.values()) + [_now_utc(), template_id]
con.execute(
f"UPDATE message_templates SET {set_clause}, updated_at = ? WHERE id = ?",
values,
)
con.commit()
updated = con.execute(
"SELECT * FROM message_templates WHERE id = ?", (template_id,)
).fetchone()
return dict(updated)
finally:
con.close()
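Column names cannot be bound as `?` placeholders, so `update_template` splices them into the SQL text; the `_ALLOWED_FIELDS` whitelist is what makes that safe. The same pattern in isolation (`build_update` is a hypothetical standalone; the real helper also appends the `updated_at` timestamp and row id to the parameter list):

```python
ALLOWED_FIELDS = {"title", "category", "subject_template", "body_template"}

def build_update(fields: dict) -> tuple[str, list]:
    invalid = set(fields) - ALLOWED_FIELDS
    if invalid:
        # Reject unknown keys outright: they would be spliced into the SQL
        # text itself, not passed as bound parameters.
        raise ValueError(f"Cannot update field(s): {invalid}")
    set_clause = ", ".join(f"{col} = ?" for col in fields)
    sql = f"UPDATE message_templates SET {set_clause}, updated_at = ? WHERE id = ?"
    # Caller still supplies the timestamp and template id for the last two ?s.
    return sql, list(fields.values())

sql, values = build_update({"title": "Warm intro"})
print(sql)
```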
def delete_template(db_path: Path, template_id: int) -> None:
"""
Delete a user-defined template.
Raises PermissionError if the template is a built-in (is_builtin=1).
Raises KeyError if the template is not found.
"""
con = _connect(db_path)
try:
row = con.execute(
"SELECT id, is_builtin FROM message_templates WHERE id = ?",
(template_id,),
).fetchone()
if row is None:
raise KeyError(f"Template {template_id} not found")
if row["is_builtin"]:
raise PermissionError(
f"Template {template_id} is a built-in and cannot be deleted"
)
con.execute("DELETE FROM message_templates WHERE id = ?", (template_id,))
con.commit()
finally:
con.close()
def update_message_body(db_path: Path, message_id: int, body: str) -> dict:
"""Update the body text of a draft message before approval. Returns updated row."""
con = _connect(db_path)
try:
row = con.execute("SELECT id FROM messages WHERE id=?", (message_id,)).fetchone()
if not row:
raise KeyError(f"message {message_id} not found")
con.execute("UPDATE messages SET body=? WHERE id=?", (body, message_id))
con.commit()
updated = con.execute("SELECT * FROM messages WHERE id=?", (message_id,)).fetchone()
return dict(updated)
finally:
con.close()


@ -70,7 +70,12 @@ def extract_jd_signals(description: str, resume_text: str = "") -> list[str]:
# Extract JSON array from response (LLM may wrap it in markdown)
match = re.search(r"\[.*\]", raw, re.DOTALL)
if match:
llm_signals = json.loads(match.group(0))
json_str = match.group(0)
# LLMs occasionally emit invalid JSON escape sequences (e.g. \s, \d, \p)
# that are valid regex but not valid JSON. Replace bare backslashes that
# aren't followed by a recognised JSON escape character.
json_str = re.sub(r'\\([^"\\/bfnrtu])', r'\1', json_str)
llm_signals = json.loads(json_str)
llm_signals = [s.strip() for s in llm_signals if isinstance(s, str) and s.strip()]
except Exception:
log.warning("[resume_optimizer] LLM signal extraction failed", exc_info=True)
@ -301,7 +306,7 @@ def _apply_section_rewrite(resume: dict[str, Any], section: str, rewritten: str)
elif section == "experience":
# For experience, we keep the structured entries but replace the bullets.
# The LLM rewrites the whole section as plain text; we re-parse the bullets.
updated["experience"] = _reparse_experience_bullets(resume["experience"], rewritten)
updated["experience"] = _reparse_experience_bullets(resume.get("experience", []), rewritten)
return updated
@ -345,6 +350,198 @@ def _reparse_experience_bullets(
return result
# ── Gap framing ───────────────────────────────────────────────────────────────
def frame_skill_gaps(
struct: dict[str, Any],
gap_framings: list[dict],
job: dict[str, Any],
candidate_voice: str = "",
) -> dict[str, Any]:
"""Inject honest framing language for skills the candidate doesn't have directly.
For each gap framing decision the user provided:
- mode "adjacent": user has related experience injects one bridging sentence
into the most relevant experience entry's bullets
- mode "learning": actively developing the skill prepends a structured
"Developing: X (context)" note to the skills list
- mode "skip": no connection at all no change
The user-supplied context text is the source of truth. The LLM's job is only
to phrase it naturally in resume style not to invent new claims.
Args:
struct: Resume dict (already processed by apply_review_decisions).
gap_framings: List of dicts with keys:
skill: the ATS term the candidate lacks
mode: "adjacent" | "learning" | "skip"
context: candidate's own words describing their related background
job: Job dict for role context in prompts.
candidate_voice: Free-text style note from user.yaml.
Returns:
New resume dict with framing language injected.
"""
from scripts.llm_router import LLMRouter
router = LLMRouter()
updated = dict(struct)
updated["experience"] = [dict(e) for e in (struct.get("experience") or [])]
adjacent_framings = [f for f in gap_framings if f.get("mode") == "adjacent" and f.get("context")]
learning_framings = [f for f in gap_framings if f.get("mode") == "learning" and f.get("context")]
# ── Adjacent experience: inject bridging sentence into most relevant entry ─
for framing in adjacent_framings:
skill = framing["skill"]
context = framing["context"]
# Find the experience entry most likely to be relevant (simple keyword match)
best_entry_idx = _find_most_relevant_entry(updated["experience"], skill)
if best_entry_idx is None:
continue
entry = updated["experience"][best_entry_idx]
bullets = list(entry.get("bullets") or [])
voice_note = (
f'\n\nCandidate voice/style: "{candidate_voice}". Match this tone.'
) if candidate_voice else ""
prompt = (
f"You are adding one honest framing sentence to a resume bullet list.\n\n"
f"The candidate does not have direct experience with '{skill}', "
f"but they have relevant background they described as:\n"
f' "{context}"\n\n'
f"Job context: {job.get('title', '')} at {job.get('company', '')}.\n\n"
f"RULES:\n"
f"1. Add exactly ONE new bullet point that bridges their background to '{skill}'.\n"
f"2. Do NOT fabricate anything beyond what their context description says.\n"
f"3. Use honest language: 'adjacent experience in', 'strong foundation applicable to', "
f" 'directly transferable background in', etc.\n"
f"4. Return ONLY the single new bullet text — no prefix, no explanation."
f"{voice_note}\n\n"
f"Existing bullets for context:\n"
+ "\n".join(f"{b}" for b in bullets[:3])
)
try:
new_bullet = router.complete(prompt).strip()
new_bullet = re.sub(r"^[•\-–—*◦▪▸►]\s*", "", new_bullet).strip()
if new_bullet:
bullets.append(new_bullet)
new_entry = dict(entry)
new_entry["bullets"] = bullets
updated["experience"][best_entry_idx] = new_entry
except Exception:
log.warning(
"[resume_optimizer] frame_skill_gaps adjacent failed for skill %r", skill,
exc_info=True,
)
# ── Learning framing: add structured note to skills list ──────────────────
if learning_framings:
skills = list(updated.get("skills") or [])
for framing in learning_framings:
skill = framing["skill"]
context = framing["context"].strip()
# Format: "Developing: Kubernetes (strong Docker/container orchestration background)"
note = f"Developing: {skill} ({context})" if context else f"Developing: {skill}"
if note not in skills:
skills.append(note)
updated["skills"] = skills
return updated
def _find_most_relevant_entry(
experience: list[dict],
skill: str,
) -> int | None:
"""Return the index of the experience entry most relevant to a skill term.
Uses simple keyword overlap between the skill and entry title/bullets.
Falls back to the most recent (first) entry if no match found.
"""
if not experience:
return None
skill_words = set(skill.lower().split())
best_idx = 0
best_score = -1
for i, entry in enumerate(experience):
entry_text = (
(entry.get("title") or "") + " " +
" ".join(entry.get("bullets") or [])
).lower()
entry_words = set(entry_text.split())
score = len(skill_words & entry_words)
if score > best_score:
best_score = score
best_idx = i
return best_idx
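The scoring in `_find_most_relevant_entry` is plain word-set overlap, and `max`-style selection falls back to the first (most recent) entry on ties. A quick trace with hypothetical entries:

```python
def overlap(skill: str, text: str) -> int:
    # Same idea as _find_most_relevant_entry: count shared lowercase words.
    return len(set(skill.lower().split()) & set(text.lower().split()))

entries = [
    "Senior UX Designer, led the design system rollout",
    "Product Designer, built gamified onboarding flows",
]
scores = [overlap("design systems", e) for e in entries]
print(scores)  # [1, 0]
best = max(range(len(entries)), key=lambda i: scores[i])
print(best)    # 0 (ties also resolve to the first, i.e. most recent, entry)
```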
def apply_review_decisions(
draft: dict[str, Any],
decisions: dict[str, Any],
) -> dict[str, Any]:
"""Apply user section-level review decisions to the rewritten struct.
Handles approved skills, summary accept/reject, and per-entry experience
accept/reject. Returns the updated struct; does not call the LLM.
Args:
draft: The review draft dict from build_review_diff (contains
"sections" and "rewritten_struct").
decisions: Dict of per-section decisions from the review UI:
skills: {"approved_additions": [...]}
summary: {"accepted": bool}
experience: {"accepted_entries": [{"title", "company", "accepted"}]}
Returns:
Updated resume struct ready for gap framing and final render.
"""
struct = dict(draft.get("rewritten_struct") or {})
sections = draft.get("sections") or []
# ── Skills: keep original + only approved additions ────────────────────
skills_decision = decisions.get("skills", {})
approved_additions = set(skills_decision.get("approved_additions") or [])
for sec in sections:
if sec["section"] == "skills":
original_kept = set(sec.get("kept") or [])
struct["skills"] = sorted(original_kept | approved_additions)
break
# ── Summary: accept proposed or revert to original ──────────────────────
if not decisions.get("summary", {}).get("accepted", True):
for sec in sections:
if sec["section"] == "summary":
struct["career_summary"] = sec.get("original", struct.get("career_summary", ""))
break
# ── Experience: per-entry accept/reject ─────────────────────────────────
exp_decisions: dict[str, bool] = {
f"{ed.get('title', '')}|{ed.get('company', '')}": ed.get("accepted", True)
for ed in (decisions.get("experience", {}).get("accepted_entries") or [])
}
for sec in sections:
if sec["section"] == "experience":
for entry_diff in (sec.get("entries") or []):
key = f"{entry_diff['title']}|{entry_diff['company']}"
if not exp_decisions.get(key, True):
for exp_entry in (struct.get("experience") or []):
if (exp_entry.get("title") == entry_diff["title"] and
exp_entry.get("company") == entry_diff["company"]):
exp_entry["bullets"] = entry_diff["original_bullets"]
break
return struct
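Experience decisions are keyed on `title|company`, so a rewrite is only reverted when both fields match exactly. The key construction in miniature:

```python
decisions = {
    "experience": {
        "accepted_entries": [
            {"title": "UX Designer", "company": "Acme Corp", "accepted": False},
            {"title": "Design Lead", "company": "Streamline Inc"},  # defaults to accepted
        ]
    }
}
exp_decisions = {
    f"{ed.get('title', '')}|{ed.get('company', '')}": ed.get("accepted", True)
    for ed in decisions["experience"]["accepted_entries"]
}
print(exp_decisions)
# {'UX Designer|Acme Corp': False, 'Design Lead|Streamline Inc': True}
```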
# ── Hallucination guard ───────────────────────────────────────────────────────
def hallucination_check(original: dict[str, Any], rewritten: dict[str, Any]) -> bool:
@ -437,3 +634,207 @@ def render_resume_text(resume: dict[str, Any]) -> str:
lines.append("")
return "\n".join(lines)
# ── Review diff builder ────────────────────────────────────────────────────────
def build_review_diff(
original: dict[str, Any],
rewritten: dict[str, Any],
) -> dict[str, Any]:
"""Build a structured diff between original and rewritten resume for the review UI.
Returns a dict with:
sections: list of per-section diffs
rewritten_struct: the full rewritten resume dict (used by finalize endpoint)
Each section diff has:
section: "skills" | "summary" | "experience"
type: "skills_diff" | "text_diff" | "bullets_diff"
For skills_diff:
added: list of new skill strings (each requires user approval)
removed: list of removed skill strings
kept: list of unchanged skills
For text_diff (summary):
original: str
proposed: str
For bullets_diff (experience):
entries: list of {title, company, original_bullets, proposed_bullets}
"""
sections = []
# ── Skills diff ────────────────────────────────────────────────────────
orig_skills = set(s.strip() for s in (original.get("skills") or []))
new_skills = set(s.strip() for s in (rewritten.get("skills") or []))
added = sorted(new_skills - orig_skills)
removed = sorted(orig_skills - new_skills)
kept = sorted(orig_skills & new_skills)
if added or removed:
sections.append({
"section": "skills",
"type": "skills_diff",
"added": added,
"removed": removed,
"kept": kept,
})
# ── Summary diff ───────────────────────────────────────────────────────
orig_summary = (original.get("career_summary") or "").strip()
new_summary = (rewritten.get("career_summary") or "").strip()
if orig_summary != new_summary and new_summary:
sections.append({
"section": "summary",
"type": "text_diff",
"original": orig_summary,
"proposed": new_summary,
})
# ── Experience diff ────────────────────────────────────────────────────
orig_exp = original.get("experience") or []
new_exp = rewritten.get("experience") or []
entry_diffs = []
for orig_entry, new_entry in zip(orig_exp, new_exp):
orig_bullets = orig_entry.get("bullets") or []
new_bullets = new_entry.get("bullets") or []
if orig_bullets != new_bullets:
entry_diffs.append({
"title": orig_entry.get("title", ""),
"company": orig_entry.get("company", ""),
"original_bullets": orig_bullets,
"proposed_bullets": new_bullets,
})
if entry_diffs:
sections.append({
"section": "experience",
"type": "bullets_diff",
"entries": entry_diffs,
})
return {
"sections": sections,
"rewritten_struct": rewritten,
}
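The skills diff is pure set arithmetic; only `added` requires per-item user approval downstream. With hypothetical skill lists:

```python
orig_skills = {"Figma", "Sketch", "User research"}
new_skills = {"Figma", "User research", "Design systems"}

added = sorted(new_skills - orig_skills)    # new entries needing approval
removed = sorted(orig_skills - new_skills)  # dropped by the rewrite
kept = sorted(orig_skills & new_skills)     # unchanged
print(added, removed, kept)
# ['Design systems'] ['Sketch'] ['Figma', 'User research']
```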
# ── PDF export ─────────────────────────────────────────────────────────────────
def export_pdf(resume: dict[str, Any], output_path: str) -> None:
"""Render a structured resume dict to a clean PDF using reportlab.
Uses a single-column layout with section headers, consistent spacing,
and a readable sans-serif body font suitable for ATS submission.
Args:
resume: Structured resume dict (same format as resume_parser output).
output_path: Absolute path for the output .pdf file.
"""
from reportlab.lib.pagesizes import LETTER
from reportlab.lib.units import inch
from reportlab.lib.styles import ParagraphStyle
from reportlab.lib.enums import TA_CENTER, TA_LEFT
from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, HRFlowable
from reportlab.lib import colors
MARGIN = 0.75 * inch
name_style = ParagraphStyle(
"name", fontName="Helvetica-Bold", fontSize=16, leading=20,
alignment=TA_CENTER, spaceAfter=2,
)
contact_style = ParagraphStyle(
"contact", fontName="Helvetica", fontSize=9, leading=12,
alignment=TA_CENTER, spaceAfter=6,
textColor=colors.HexColor("#555555"),
)
section_style = ParagraphStyle(
"section", fontName="Helvetica-Bold", fontSize=10, leading=14,
spaceBefore=10, spaceAfter=2,
textColor=colors.HexColor("#1a1a2e"),
)
body_style = ParagraphStyle(
"body", fontName="Helvetica", fontSize=9, leading=13, alignment=TA_LEFT,
)
role_style = ParagraphStyle(
"role", fontName="Helvetica-Bold", fontSize=9, leading=13,
)
meta_style = ParagraphStyle(
"meta", fontName="Helvetica-Oblique", fontSize=8, leading=12,
textColor=colors.HexColor("#555555"), spaceAfter=2,
)
bullet_style = ParagraphStyle(
"bullet", fontName="Helvetica", fontSize=9, leading=13, leftIndent=12,
)
def hr():
return HRFlowable(width="100%", thickness=0.5,
color=colors.HexColor("#cccccc"),
spaceAfter=4, spaceBefore=2)
story = []
if resume.get("name"):
story.append(Paragraph(resume["name"], name_style))
contact_parts = [p for p in (
resume.get("email", ""), resume.get("phone", ""),
resume.get("location", ""), resume.get("linkedin", ""),
) if p]
if contact_parts:
story.append(Paragraph(" | ".join(contact_parts), contact_style))
story.append(hr())
summary = (resume.get("career_summary") or "").strip()
if summary:
story.append(Paragraph("SUMMARY", section_style))
story.append(hr())
story.append(Paragraph(summary, body_style))
story.append(Spacer(1, 4))
if resume.get("experience"):
story.append(Paragraph("EXPERIENCE", section_style))
story.append(hr())
for exp in resume["experience"]:
dates = f"{exp.get('start_date', '')}{exp.get('end_date', '')}"
story.append(Paragraph(
f"{exp.get('title', '')} | {exp.get('company', '')}", role_style
))
story.append(Paragraph(dates, meta_style))
for bullet in (exp.get("bullets") or []):
story.append(Paragraph(f"{bullet}", bullet_style))
story.append(Spacer(1, 4))
if resume.get("education"):
story.append(Paragraph("EDUCATION", section_style))
story.append(hr())
for edu in resume["education"]:
degree = f"{edu.get('degree', '')} {edu.get('field', '')}".strip()
story.append(Paragraph(
f"{degree} | {edu.get('institution', '')} {edu.get('graduation_year', '')}".strip(),
body_style,
))
story.append(Spacer(1, 4))
if resume.get("skills"):
story.append(Paragraph("SKILLS", section_style))
story.append(hr())
story.append(Paragraph(", ".join(resume["skills"]), body_style))
story.append(Spacer(1, 4))
if resume.get("achievements"):
story.append(Paragraph("ACHIEVEMENTS", section_style))
story.append(hr())
for a in resume["achievements"]:
story.append(Paragraph(f"{a}", bullet_style))
doc = SimpleDocTemplate(
output_path, pagesize=LETTER,
leftMargin=MARGIN, rightMargin=MARGIN,
topMargin=MARGIN, bottomMargin=MARGIN,
)
doc.build(story)

scripts/resume_sync.py Normal file

@ -0,0 +1,217 @@
"""
Resume format transform library profile.
Converts between:
- Library format: struct_json produced by resume_parser.parse_resume()
{name, email, phone, career_summary, experience[{title,company,start_date,end_date,location,bullets[]}],
education[{institution,degree,field,start_date,end_date}], skills[], achievements[]}
- Profile content format: ResumePayload content fields (plain_text_resume.yaml)
{name, surname, email, phone, career_summary,
experience[{title,company,period,location,industry,responsibilities,skills[]}],
education[{institution,degree,field,start_date,end_date}],
skills[], achievements[]}
Profile metadata fields (salary, work prefs, self-ID, PII) are never touched here.
License: MIT
"""
from __future__ import annotations
from datetime import date
from typing import Any
_CONTENT_FIELDS = frozenset({
"name", "surname", "email", "phone", "career_summary",
"experience", "skills", "education", "achievements",
})
def library_to_profile_content(struct_json: dict[str, Any]) -> dict[str, Any]:
"""Transform a library struct_json to ResumePayload content fields.
Returns only content fields. Caller is responsible for merging with existing
metadata fields (salary, preferences, self-ID) so they are not overwritten.
Lossy for experience[].industry (always blank; the parser does not capture it).
name is split on first space into name/surname.
"""
full_name: str = struct_json.get("name") or ""
parts = full_name.split(" ", 1)
name = parts[0]
surname = parts[1] if len(parts) > 1 else ""
experience = []
for exp in struct_json.get("experience") or []:
start = (exp.get("start_date") or "").strip()
end = (exp.get("end_date") or "").strip()
if start and end:
period = f"{start} \u2013 {end}"
elif start:
period = start
elif end:
period = end
else:
period = ""
bullets: list[str] = exp.get("bullets") or []
responsibilities = "\n".join(b for b in bullets if b)
experience.append({
"title": exp.get("title") or "",
"company": exp.get("company") or "",
"period": period,
"location": exp.get("location") or "",
"industry": "", # not captured by parser
"responsibilities": responsibilities,
"skills": [],
})
education = []
for edu in struct_json.get("education") or []:
education.append({
"institution": edu.get("institution") or "",
"degree": edu.get("degree") or "",
"field": edu.get("field") or "",
"start_date": edu.get("start_date") or "",
"end_date": edu.get("end_date") or "",
})
return {
"name": name,
"surname": surname,
"email": struct_json.get("email") or "",
"phone": struct_json.get("phone") or "",
"career_summary": struct_json.get("career_summary") or "",
"experience": experience,
"skills": list(struct_json.get("skills") or []),
"education": education,
"achievements": list(struct_json.get("achievements") or []),
}
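The two lossy-looking steps here, the name split and the period join, are mechanical. A trace with a hypothetical struct:

```python
struct = {
    "name": "Alex Rivera",
    "experience": [{
        "title": "UX Designer", "company": "Acme",
        "start_date": "2023-01", "end_date": "2025-03",
        "bullets": ["Shipped the design system", "Ran usability studies"],
    }],
}

# Name splits on the first space only, so multi-word surnames survive.
parts = (struct["name"] or "").split(" ", 1)
name, surname = parts[0], parts[1] if len(parts) > 1 else ""

exp = struct["experience"][0]
# En-dash joiner matches the profile format's period field.
period = f"{exp['start_date']} \u2013 {exp['end_date']}"
responsibilities = "\n".join(exp["bullets"])
print(name, surname, "|", period)
```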
def profile_to_library(payload: dict[str, Any]) -> tuple[str, dict[str, Any]]:
"""Transform ResumePayload content fields to (plain_text, struct_json).
Inverse of library_to_profile_content. The plain_text is a best-effort
reconstruction for display and re-parsing. struct_json is the canonical
structured representation stored in the resumes table.
"""
name_parts = [payload.get("name") or "", payload.get("surname") or ""]
full_name = " ".join(p for p in name_parts if p).strip()
career_summary = (payload.get("career_summary") or "").strip()
lines: list[str] = []
if full_name:
lines.append(full_name)
email = payload.get("email") or ""
phone = payload.get("phone") or ""
if email:
lines.append(email)
if phone:
lines.append(phone)
if career_summary:
lines += ["", "SUMMARY", career_summary]
experience_structs = []
for exp in payload.get("experience") or []:
title = (exp.get("title") or "").strip()
company = (exp.get("company") or "").strip()
period = (exp.get("period") or "").strip()
location = (exp.get("location") or "").strip()
# Split period back to start_date / end_date.
# Split on the en-dash/em-dash separator BEFORE normalising to plain hyphens
# so that ISO dates like "2023-01 – 2025-03" round-trip correctly.
if "\u2013" in period: # en-dash
date_parts = [p.strip() for p in period.split("\u2013", 1)]
elif "\u2014" in period: # em-dash
date_parts = [p.strip() for p in period.split("\u2014", 1)]
else:
date_parts = [period.strip()] if period.strip() else []
start_date = date_parts[0] if date_parts else ""
end_date = date_parts[1] if len(date_parts) > 1 else ""
resp = (exp.get("responsibilities") or "").strip()
bullets = [b.strip() for b in resp.split("\n") if b.strip()]
if title or company:
header = " | ".join(p for p in [title, company, period] if p)
lines += ["", header]
if location:
lines.append(location)
for b in bullets:
lines.append(f"\u2022 {b}")
experience_structs.append({
"title": title,
"company": company,
"start_date": start_date,
"end_date": end_date,
"location": location,
"bullets": bullets,
})
skills: list[str] = list(payload.get("skills") or [])
if skills:
lines += ["", "SKILLS", ", ".join(skills)]
education_structs = []
for edu in payload.get("education") or []:
institution = (edu.get("institution") or "").strip()
degree = (edu.get("degree") or "").strip()
field = (edu.get("field") or "").strip()
start_date = (edu.get("start_date") or "").strip()
end_date = (edu.get("end_date") or "").strip()
if institution or degree:
label = " ".join(p for p in [degree, field] if p)
lines.append(f"{label} \u2014 {institution}" if institution else label)
education_structs.append({
"institution": institution,
"degree": degree,
"field": field,
"start_date": start_date,
"end_date": end_date,
})
achievements: list[str] = list(payload.get("achievements") or [])
struct_json: dict[str, Any] = {
"name": full_name,
"email": email,
"phone": phone,
"career_summary": career_summary,
"experience": experience_structs,
"skills": skills,
"education": education_structs,
"achievements": achievements,
}
plain_text = "\n".join(lines).strip()
return plain_text, struct_json
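The period-splitting rule above is worth seeing in isolation: only en/em dashes act as separators, so the plain hyphens inside ISO month stamps are never mistaken for one. A sketch (`split_period` is an illustrative standalone of the same logic):

```python
def split_period(period: str) -> tuple[str, str]:
    # Only en dash or em dash is a separator; a plain hyphen may be part
    # of an ISO date like "2023-01" and must not split the string.
    for dash in ("\u2013", "\u2014"):
        if dash in period:
            start, end = (p.strip() for p in period.split(dash, 1))
            return start, end
    return period.strip(), ""

print(split_period("2023-01 \u2013 2025-03"))  # ('2023-01', '2025-03')
print(split_period("2024-06"))                 # ('2024-06', '')
```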
def make_auto_backup_name(source_name: str) -> str:
"""Generate a timestamped auto-backup name.
Example: "Auto-backup before Senior Engineer Resume — 2026-04-16"
"""
today = date.today().isoformat()
return f"Auto-backup before {source_name} \u2014 {today}"
def blank_fields_on_import(struct_json: dict[str, Any]) -> list[str]:
"""Return content field names that will be blank after a library→profile import.
Used to warn the user in the confirmation modal so they know what to fill in.
"""
blank: list[str] = []
if struct_json.get("experience"):
# industry is always blank — parser never captures it
blank.append("experience[].industry")
# location may be blank for some entries
if any(not (e.get("location") or "").strip() for e in struct_json["experience"]):
blank.append("experience[].location")
return blank
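For the confirmation modal, the warning list composes like this (hypothetical struct; industry is unconditional, location only when some entry lacks one):

```python
struct = {"experience": [
    {"title": "UX Designer", "location": ""},
    {"title": "Design Lead", "location": "Remote"},
]}
blank = []
if struct.get("experience"):
    blank.append("experience[].industry")  # parser never captures industry
    if any(not (e.get("location") or "").strip() for e in struct["experience"]):
        blank.append("experience[].location")
print(blank)  # ['experience[].industry', 'experience[].location']
```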


@ -0,0 +1,86 @@
# MIT License — see LICENSE
"""Survey assistant: prompt builders and LLM inference for culture-fit survey analysis.
Extracted from dev-api.py so task_runner can import this without importing the
FastAPI application. Callable directly or via the survey_analyze background task.
"""
from __future__ import annotations
import json
import logging
from pathlib import Path
from typing import Optional
log = logging.getLogger(__name__)
SURVEY_SYSTEM = (
"You are a job application advisor helping a candidate answer a culture-fit survey. "
"The candidate values collaborative teamwork, clear communication, growth, and impact. "
"Choose answers that present them in the best professional light."
)
def build_text_prompt(text: str, mode: str) -> str:
if mode == "quick":
return (
"Answer each survey question below. For each, give ONLY the letter of the best "
"option and a single-sentence reason. Format exactly as:\n"
"1. B — reason here\n2. A — reason here\n\n"
f"Survey:\n{text}"
)
return (
"Analyze each survey question below. For each question:\n"
"- Briefly evaluate each option (1 sentence each)\n"
"- State your recommendation with reasoning\n\n"
f"Survey:\n{text}"
)
def build_image_prompt(mode: str) -> str:
if mode == "quick":
return (
"This is a screenshot of a culture-fit survey. Read all questions and answer each "
"with the letter of the best option for a collaborative, growth-oriented candidate. "
"Format: '1. B — brief reason' on separate lines."
)
return (
"This is a screenshot of a culture-fit survey. For each question, evaluate each option "
"and recommend the best choice for a collaborative, growth-oriented candidate. "
"Include a brief breakdown per option and a clear recommendation."
)
def run_survey_analyze(
text: Optional[str],
image_b64: Optional[str],
mode: str,
config_path: Optional[Path] = None,
) -> dict:
"""Run LLM inference for survey analysis.
Returns {"output": str, "source": "text_paste" | "screenshot"}.
Raises on LLM failure; the caller is responsible for error handling.
"""
from scripts.llm_router import LLMRouter
router = LLMRouter(config_path=config_path) if config_path else LLMRouter()
if image_b64:
prompt = build_image_prompt(mode)
output = router.complete(
prompt,
images=[image_b64],
fallback_order=router.config.get("vision_fallback_order"),
)
source = "screenshot"
else:
prompt = build_text_prompt(text or "", mode)
output = router.complete(
prompt,
system=SURVEY_SYSTEM,
fallback_order=router.config.get("research_fallback_order"),
)
source = "text_paste"
return {"output": output, "source": source}
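The quick-mode answer format ("1. B — reason") is deliberately line-oriented, so a caller can post-process it. A sketch of such a parser (hypothetical helper, not part of this module):

```python
import re

# Matches lines like "1. B — reason here" or "2. A - another reason".
_ANSWER_RE = re.compile(r"^\s*(\d+)\.\s*([A-Z])\s*[—–-]\s*(.+)$")

def parse_quick_answers(output: str) -> dict[int, tuple[str, str]]:
    """Map question number -> (letter, reason) from quick-mode LLM output."""
    answers = {}
    for line in output.splitlines():
        m = _ANSWER_RE.match(line)
        if m:
            answers[int(m.group(1))] = (m.group(2), m.group(3).strip())
    return answers
```

Lines that don't match the expected shape are simply skipped, which keeps the parser tolerant of preamble or commentary the model may emit around the numbered answers.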


@ -16,6 +16,61 @@ from pathlib import Path
log = logging.getLogger(__name__)
def _normalize_aihawk_resume(raw: dict) -> dict:
"""Convert a plain_text_resume.yaml (AIHawk format) into the optimizer struct.
Handles two AIHawk variants:
- Newer Peregrine wizard output: already uses bullets/start_date/end_date/career_summary
- Older raw AIHawk format: uses responsibilities (str), period ("YYYY – Present")
"""
import re as _re
def _split_responsibilities(text: str) -> list[str]:
lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
return lines if lines else [text.strip()]
def _parse_period(period: str) -> tuple[str, str]:
parts = _re.split(r"\s*[–—-]\s*", period, maxsplit=1)
start = parts[0].strip() if parts else ""
end = parts[1].strip() if len(parts) > 1 else "Present"
return start, end
experience = []
for entry in raw.get("experience", []):
if "responsibilities" in entry:
bullets = _split_responsibilities(entry["responsibilities"])
else:
bullets = entry.get("bullets", [])
if "period" in entry:
start_date, end_date = _parse_period(entry["period"])
else:
start_date = entry.get("start_date", "")
end_date = entry.get("end_date", "Present")
experience.append({
"title": entry.get("title", ""),
"company": entry.get("company", ""),
"start_date": start_date,
"end_date": end_date,
"bullets": bullets,
})
# career_summary may be a string or absent; assessment field is a legacy bool in some profiles
career_summary = raw.get("career_summary", "")
if not isinstance(career_summary, str):
career_summary = ""
return {
"career_summary": career_summary,
"experience": experience,
"education": raw.get("education", []),
"skills": raw.get("skills", []),
"achievements": raw.get("achievements", []),
}
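The two inner helpers carry the older-format conversion. Restated standalone (so the snippet runs on its own) against an older-format entry:

```python
import re

def split_responsibilities(text: str) -> list[str]:
    # One bullet per non-empty line; fall back to the whole string.
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    return lines if lines else [text.strip()]

def parse_period(period: str) -> tuple[str, str]:
    # "2019 – 2023" -> ("2019", "2023"); a lone year keeps end="Present".
    parts = re.split(r"\s*[–—-]\s*", period, maxsplit=1)
    start = parts[0].strip() if parts else ""
    end = parts[1].strip() if len(parts) > 1 else "Present"
    return start, end

entry = {"title": "UX Designer", "period": "2019 – 2023",
         "responsibilities": "Led redesign\nRan user interviews"}
start, end = parse_period(entry["period"])
bullets = split_responsibilities(entry["responsibilities"])
```

The split pattern accepts en dash, em dash, or hyphen so either AIHawk variant's date separator normalizes the same way.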
from scripts.db import (
DEFAULT_DB,
insert_task,
@ -196,9 +251,12 @@ def _run_task(db_path: Path, task_id: int, task_type: str, job_id: int,
elif task_type == "company_research":
from scripts.company_research import research_company
_cfg_dir = Path(db_path).parent / "config"
_user_llm_cfg = _cfg_dir / "llm.yaml"
result = research_company(
job,
on_stage=lambda s: update_task_stage(db_path, task_id, s),
config_path=_user_llm_cfg if _user_llm_cfg.exists() else None,
)
save_research(db_path, job_id=job_id, **result)
@ -287,13 +345,25 @@ def _run_task(db_path: Path, task_id: int, task_type: str, job_id: int,
)
from scripts.user_profile import load_user_profile
_user_yaml = Path(db_path).parent / "config" / "user.yaml"
description = job.get("description", "")
resume_path = load_user_profile().get("resume_path", "")
resume_path = load_user_profile(str(_user_yaml)).get("resume_path", "")
# Parse the candidate's resume
update_task_stage(db_path, task_id, "parsing resume")
resume_text = Path(resume_path).read_text(errors="replace") if resume_path else ""
resume_struct, parse_err = structure_resume(resume_text)
_plain_yaml = Path(db_path).parent / "config" / "plain_text_resume.yaml"
if resume_path and Path(resume_path).exists():
resume_text = Path(resume_path).read_text(errors="replace")
resume_struct, parse_err = structure_resume(resume_text)
elif _plain_yaml.exists():
import yaml as _yaml
_raw = _yaml.safe_load(_plain_yaml.read_text(encoding="utf-8")) or {}
resume_struct = _normalize_aihawk_resume(_raw)
resume_text = resume_struct.get("career_summary", "")
parse_err = ""
else:
resume_text = ""
resume_struct, parse_err = structure_resume("")
# Extract keyword gaps and build gap report (free tier)
update_task_stage(db_path, task_id, "extracting keyword gaps")
@ -301,21 +371,56 @@ def _run_task(db_path: Path, task_id: int, task_type: str, job_id: int,
prioritized = prioritize_gaps(gaps, resume_struct)
gap_report = _json.dumps(prioritized, indent=2)
# Full rewrite (paid tier only)
rewritten_text = ""
# Full rewrite (paid tier only) → enters awaiting_review, not completed
p = _json.loads(params or "{}")
selected_gaps = p.get("selected_gaps", None)
if selected_gaps is not None:
selected_set = set(selected_gaps)
prioritized = [g for g in prioritized if g.get("term") in selected_set]
if p.get("full_rewrite", False):
update_task_stage(db_path, task_id, "rewriting resume sections")
candidate_voice = load_user_profile().get("candidate_voice", "")
candidate_voice = load_user_profile(str(_user_yaml)).get("candidate_voice", "")
rewritten = rewrite_for_ats(resume_struct, prioritized, job, candidate_voice)
if hallucination_check(resume_struct, rewritten):
rewritten_text = render_resume_text(rewritten)
from scripts.resume_optimizer import build_review_diff
from scripts.db import save_resume_draft
draft = build_review_diff(resume_struct, rewritten)
# Attach gap report to draft for reference in the review UI
draft["gap_report"] = prioritized
save_resume_draft(db_path, job_id=job_id,
draft_json=_json.dumps(draft))
# Save gap report now; final text written after user review
save_optimized_resume(db_path, job_id=job_id,
text="", gap_report=gap_report)
# Park task in awaiting_review — finalize endpoint resolves it
update_task_status(db_path, task_id, "awaiting_review")
return
else:
log.warning("[task_runner] resume_optimize hallucination check failed for job %d", job_id)
save_optimized_resume(db_path, job_id=job_id,
text="", gap_report=gap_report)
else:
# Gap-only run (free tier): save report, no draft
save_optimized_resume(db_path, job_id=job_id,
text="", gap_report=gap_report)
save_optimized_resume(db_path, job_id=job_id,
text=rewritten_text,
gap_report=gap_report)
elif task_type == "survey_analyze":
import json as _json
from scripts.survey_assistant import run_survey_analyze
p = _json.loads(params or "{}")
_cfg_path = Path(db_path).parent / "config" / "llm.yaml"
update_task_stage(db_path, task_id, "analyzing survey")
result = run_survey_analyze(
text=p.get("text"),
image_b64=p.get("image_b64"),
mode=p.get("mode", "quick"),
config_path=_cfg_path if _cfg_path.exists() else None,
)
# Result payload rides in the error column; the poll endpoint parses it
update_task_status(
db_path, task_id, "completed",
error=_json.dumps(result),
)
return
elif task_type == "prepare_training":
from scripts.prepare_training_data import build_records, write_jsonl, DEFAULT_OUTPUT
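The survey_analyze branch completes the task with its JSON payload and leaves retrieval to the poll endpoint. A generic client-side polling sketch, where `fetch_status` is a stand-in for the GET `/api/jobs/{id}/survey/analyze/task` call (names here are illustrative):

```python
import time

def poll_until_done(fetch_status, interval: float = 1.0, timeout: float = 120.0) -> dict:
    """Poll a status callable until the task leaves queued/running states.

    fetch_status() must return a dict with a "status" key, e.g. the JSON
    body of the survey-analyze task endpoint.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = fetch_status()
        if data.get("status") not in ("queued", "running"):
            return data
        time.sleep(interval)
    raise TimeoutError("task did not finish before timeout")
```

A fixed interval is the simplest choice; a real client might back off, but the terminal-state check ("completed", "failed", or "none") is the same either way.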


@ -34,6 +34,7 @@ LLM_TASK_TYPES: frozenset[str] = frozenset({
"company_research",
"wizard_generate",
"resume_optimize",
"survey_analyze",
})
# Conservative peak VRAM estimates (GB) per task type.
@ -43,6 +44,7 @@ DEFAULT_VRAM_BUDGETS: dict[str, float] = {
"company_research": 5.0, # llama3.1:8b or vllm model
"wizard_generate": 2.5, # same model family as cover_letter
"resume_optimize": 5.0, # section-by-section rewrite; same budget as research
"survey_analyze": 2.5, # text: phi3:mini; visual: vision service (own VRAM pool)
}
_DEFAULT_MAX_QUEUE_DEPTH = 500
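The budget table above feeds a simple admission decision; a hedged sketch of the kind of check a scheduler could make (the real task_runner's logic may differ):

```python
# Conservative peak VRAM estimates (GB) per task type, mirroring the table above.
DEFAULT_VRAM_BUDGETS = {
    "company_research": 5.0,
    "wizard_generate": 2.5,
    "resume_optimize": 5.0,
    "survey_analyze": 2.5,
}

def can_admit(task_type: str, in_flight: list[str], total_vram_gb: float = 12.0) -> bool:
    """True if the new task fits under the VRAM ceiling given in-flight tasks."""
    used = sum(DEFAULT_VRAM_BUDGETS.get(t, 0.0) for t in in_flight)
    return used + DEFAULT_VRAM_BUDGETS.get(task_type, 0.0) <= total_vram_gb
```

The 12 GB ceiling is an assumed example value; unknown task types cost 0 GB here, matching the table's convention that only LLM task types carry budgets.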

tests/test_demo_guard.py Normal file

@ -0,0 +1,89 @@
"""IS_DEMO write-block guard tests."""
import importlib
import os
import sqlite3
import pytest
from fastapi.testclient import TestClient
_SCHEMA = """
CREATE TABLE jobs (
id INTEGER PRIMARY KEY, title TEXT, company TEXT, url TEXT,
location TEXT, is_remote INTEGER DEFAULT 0, salary TEXT,
match_score REAL, keyword_gaps TEXT, status TEXT DEFAULT 'pending',
date_found TEXT, cover_letter TEXT, interview_date TEXT,
rejection_stage TEXT, applied_at TEXT, phone_screen_at TEXT,
interviewing_at TEXT, offer_at TEXT, hired_at TEXT,
survey_at TEXT, date_posted TEXT, hired_feedback TEXT
);
CREATE TABLE background_tasks (
id INTEGER PRIMARY KEY, task_type TEXT, job_id INTEGER,
status TEXT DEFAULT 'queued', finished_at TEXT
);
"""
def _make_db(path: str) -> None:
con = sqlite3.connect(path)
con.executescript(_SCHEMA)
con.execute(
"INSERT INTO jobs (id, title, company, url, status) VALUES (1,'UX Designer','Spotify','https://ex.com/1','pending')"
)
con.execute(
"INSERT INTO jobs (id, title, company, url, status) VALUES (2,'Designer','Figma','https://ex.com/2','hired')"
)
con.commit()
con.close()
@pytest.fixture()
def demo_client(tmp_path, monkeypatch):
db_path = str(tmp_path / "staging.db")
_make_db(db_path)
monkeypatch.setenv("DEMO_MODE", "true")
monkeypatch.setenv("STAGING_DB", db_path)
import dev_api
importlib.reload(dev_api)
return TestClient(dev_api.app)
@pytest.fixture()
def normal_client(tmp_path, monkeypatch):
db_path = str(tmp_path / "staging.db")
_make_db(db_path)
monkeypatch.delenv("DEMO_MODE", raising=False)
monkeypatch.setenv("STAGING_DB", db_path)
import dev_api
importlib.reload(dev_api)
return TestClient(dev_api.app)
class TestDemoWriteBlock:
def test_approve_blocked_in_demo(self, demo_client):
r = demo_client.post("/api/jobs/1/approve")
assert r.status_code == 403
assert r.json()["detail"] == "demo-write-blocked"
def test_reject_blocked_in_demo(self, demo_client):
r = demo_client.post("/api/jobs/1/reject")
assert r.status_code == 403
assert r.json()["detail"] == "demo-write-blocked"
def test_cover_letter_generate_blocked_in_demo(self, demo_client):
r = demo_client.post("/api/jobs/1/cover_letter/generate")
assert r.status_code == 403
assert r.json()["detail"] == "demo-write-blocked"
def test_hired_feedback_blocked_in_demo(self, demo_client):
r = demo_client.post("/api/jobs/2/hired-feedback", json={"factors": [], "notes": ""})
assert r.status_code == 403
assert r.json()["detail"] == "demo-write-blocked"
def test_approve_allowed_in_normal_mode(self, normal_client):
r = normal_client.post("/api/jobs/1/approve")
assert r.status_code != 403
def test_config_reports_is_demo_true(self, demo_client):
r = demo_client.get("/api/config/app")
assert r.status_code == 200
assert r.json()["isDemo"] is True


@ -19,7 +19,8 @@ def tmp_db(tmp_path):
match_score REAL, keyword_gaps TEXT, status TEXT,
interview_date TEXT, rejection_stage TEXT,
applied_at TEXT, phone_screen_at TEXT, interviewing_at TEXT,
offer_at TEXT, hired_at TEXT, survey_at TEXT
offer_at TEXT, hired_at TEXT, survey_at TEXT,
hired_feedback TEXT
);
CREATE TABLE job_contacts (
id INTEGER PRIMARY KEY,


@ -7,35 +7,7 @@ from pathlib import Path
from unittest.mock import patch, MagicMock
from fastapi.testclient import TestClient
_WORKTREE = "/Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa"
# ── Path bootstrap ────────────────────────────────────────────────────────────
# dev_api.py inserts /Library/Development/CircuitForge/peregrine into sys.path
# at import time; the worktree has credential_store but the main repo doesn't.
# Insert the worktree first so 'scripts' resolves to the worktree version, then
# pre-cache it in sys.modules so Python won't re-look-up when dev_api adds the
# main peregrine root.
if _WORKTREE not in sys.path:
sys.path.insert(0, _WORKTREE)
# Pre-cache the worktree scripts package and submodules before dev_api import
import importlib, types
def _ensure_worktree_scripts():
import importlib.util as _ilu
_wt = _WORKTREE
# Only load if not already loaded from the worktree
_spec = _ilu.spec_from_file_location("scripts", f"{_wt}/scripts/__init__.py",
submodule_search_locations=[f"{_wt}/scripts"])
if _spec is None:
return
_mod = _ilu.module_from_spec(_spec)
sys.modules.setdefault("scripts", _mod)
try:
_spec.loader.exec_module(_mod)
except Exception:
pass
_ensure_worktree_scripts()
# credential_store.py was merged to main repo — no worktree path manipulation needed
@pytest.fixture(scope="module")
@ -211,7 +183,8 @@ def test_get_search_prefs_returns_dict(tmp_path, monkeypatch):
fake_path = tmp_path / "config" / "search_profiles.yaml"
fake_path.parent.mkdir(parents=True, exist_ok=True)
with open(fake_path, "w") as f:
yaml.dump({"default": {"remote_preference": "remote", "job_boards": []}}, f)
yaml.dump({"default": {"remote_preference": "remote",
"job_boards": [{"name": "linkedin", "enabled": True}]}}, f)
monkeypatch.setattr("dev_api._search_prefs_path", lambda: fake_path)
from dev_api import app


@ -1,18 +1,36 @@
"""Tests for survey endpoints: vision health, analyze, save response, get history."""
"""Tests for survey endpoints: vision health, async analyze task queue, save response, history."""
import json
import sqlite3
import pytest
from unittest.mock import patch, MagicMock
from fastapi.testclient import TestClient
from scripts.db_migrate import migrate_db
@pytest.fixture
def client():
import sys
sys.path.insert(0, "/Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa")
from dev_api import app
return TestClient(app)
def fresh_db(tmp_path, monkeypatch):
"""Isolated DB + dev_api wired to it via _request_db and DB_PATH."""
db = tmp_path / "test.db"
migrate_db(db)
monkeypatch.setenv("STAGING_DB", str(db))
import dev_api
monkeypatch.setattr(dev_api, "DB_PATH", str(db))
monkeypatch.setattr(
dev_api,
"_request_db",
type("CV", (), {"get": lambda self: str(db), "set": lambda *a: None})(),
)
return db
# ── GET /api/vision/health ───────────────────────────────────────────────────
@pytest.fixture
def client(fresh_db):
import dev_api
return TestClient(dev_api.app)
# ── GET /api/vision/health ────────────────────────────────────────────────────
def test_vision_health_available(client):
"""Returns available=true when vision service responds 200."""
@ -32,133 +50,182 @@ def test_vision_health_unavailable(client):
assert resp.json() == {"available": False}
# ── POST /api/jobs/{id}/survey/analyze ──────────────────────────────────────
def test_analyze_text_quick(client):
"""Text mode quick analysis returns output and source=text_paste."""
mock_router = MagicMock()
mock_router.complete.return_value = "1. B — best option"
mock_router.config.get.return_value = ["claude_code", "vllm"]
with patch("dev_api.LLMRouter", return_value=mock_router):
def test_analyze_queues_task_and_returns_task_id(client):
"""POST analyze queues a background task and returns task_id + is_new."""
with patch("scripts.task_runner.submit_task", return_value=(42, True)) as mock_submit:
resp = client.post("/api/jobs/1/survey/analyze", json={
"text": "Q1: Do you prefer teamwork?\nA. Solo B. Together",
"mode": "quick",
})
assert resp.status_code == 200
data = resp.json()
assert data["source"] == "text_paste"
assert "B" in data["output"]
# System prompt must be passed for text path
call_kwargs = mock_router.complete.call_args[1]
assert "system" in call_kwargs
assert "culture-fit survey" in call_kwargs["system"]
assert data["task_id"] == 42
assert data["is_new"] is True
# submit_task called with survey_analyze type
call_kwargs = mock_submit.call_args
assert call_kwargs.kwargs["task_type"] == "survey_analyze"
assert call_kwargs.kwargs["job_id"] == 1
params = json.loads(call_kwargs.kwargs["params"])
assert params["mode"] == "quick"
assert params["text"] == "Q1: Do you prefer teamwork?\nA. Solo B. Together"
def test_analyze_text_detailed(client):
"""Text mode detailed analysis passes correct prompt."""
mock_router = MagicMock()
mock_router.complete.return_value = "Option A: good for... Option B: better because..."
mock_router.config.get.return_value = []
with patch("dev_api.LLMRouter", return_value=mock_router):
def test_analyze_silently_attaches_to_existing_task(client):
"""is_new=False when task already running for same input."""
with patch("scripts.task_runner.submit_task", return_value=(7, False)):
resp = client.post("/api/jobs/1/survey/analyze", json={
"text": "Q1: Describe your work style.",
"mode": "detailed",
"text": "Q1: test", "mode": "quick",
})
assert resp.status_code == 200
assert resp.json()["source"] == "text_paste"
assert resp.json()["is_new"] is False
def test_analyze_image(client):
"""Image mode routes through vision path with NO system prompt."""
mock_router = MagicMock()
mock_router.complete.return_value = "1. C — collaborative choice"
mock_router.config.get.return_value = ["vision_service", "claude_code"]
with patch("dev_api.LLMRouter", return_value=mock_router):
def test_analyze_invalid_mode_returns_400(client):
resp = client.post("/api/jobs/1/survey/analyze", json={"text": "Q1: test", "mode": "wrong"})
assert resp.status_code == 400
def test_analyze_image_mode_passes_image_in_params(client):
"""Image payload is forwarded in task params."""
with patch("scripts.task_runner.submit_task", return_value=(1, True)) as mock_submit:
resp = client.post("/api/jobs/1/survey/analyze", json={
"image_b64": "aGVsbG8=",
"mode": "quick",
})
assert resp.status_code == 200
params = json.loads(mock_submit.call_args.kwargs["params"])
assert params["image_b64"] == "aGVsbG8="
assert params["text"] is None
# ── GET /api/jobs/{id}/survey/analyze/task ────────────────────────────────────
def test_task_poll_completed_text(client, fresh_db):
"""Completed task with text result returns parsed source + output."""
result_json = json.dumps({"output": "1. B — best option", "source": "text_paste"})
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, error) VALUES (?,?,?,?)",
("survey_analyze", 1, "completed", result_json),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
data = resp.json()
assert data["source"] == "screenshot"
# No system prompt on vision path
call_kwargs = mock_router.complete.call_args[1]
assert "system" not in call_kwargs
assert data["status"] == "completed"
assert data["result"]["source"] == "text_paste"
assert "B" in data["result"]["output"]
assert data["message"] is None
def test_analyze_llm_failure(client):
"""Returns 500 when LLM raises an exception."""
mock_router = MagicMock()
mock_router.complete.side_effect = Exception("LLM unavailable")
mock_router.config.get.return_value = []
with patch("dev_api.LLMRouter", return_value=mock_router):
resp = client.post("/api/jobs/1/survey/analyze", json={
"text": "Q1: test",
"mode": "quick",
})
assert resp.status_code == 500
def test_task_poll_completed_screenshot(client, fresh_db):
"""Completed task with image result returns source=screenshot."""
result_json = json.dumps({"output": "1. C — collaborative", "source": "screenshot"})
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, error) VALUES (?,?,?,?)",
("survey_analyze", 1, "completed", result_json),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
assert resp.json()["result"]["source"] == "screenshot"
# ── POST /api/jobs/{id}/survey/responses ────────────────────────────────────
def test_task_poll_failed_returns_message(client, fresh_db):
"""Failed task returns status=failed with error message."""
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, error) VALUES (?,?,?,?)",
("survey_analyze", 1, "failed", "LLM unavailable"),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
data = resp.json()
assert data["status"] == "failed"
assert data["message"] == "LLM unavailable"
assert data["result"] is None
def test_task_poll_running_returns_stage(client, fresh_db):
"""Running task returns status=running with current stage."""
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, stage) VALUES (?,?,?,?)",
("survey_analyze", 1, "running", "analyzing survey"),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
data = resp.json()
assert data["status"] == "running"
assert data["stage"] == "analyzing survey"
def test_task_poll_none_when_no_task(client):
"""Returns status=none when no task exists for the job."""
resp = client.get("/api/jobs/999/survey/analyze/task")
assert resp.status_code == 200
assert resp.json()["status"] == "none"
# ── POST /api/jobs/{id}/survey/responses ─────────────────────────────────────
def test_save_response_text(client):
"""Save text response writes to DB and returns id."""
mock_db = MagicMock()
with patch("dev_api._get_db", return_value=mock_db):
with patch("dev_api.insert_survey_response", return_value=42) as mock_insert:
resp = client.post("/api/jobs/1/survey/responses", json={
"mode": "quick",
"source": "text_paste",
"raw_input": "Q1: test question",
"llm_output": "1. B — good reason",
})
"""Save a text-mode survey response returns an id."""
resp = client.post("/api/jobs/1/survey/responses", json={
"survey_name": "Culture Fit",
"mode": "quick",
"source": "text_paste",
"raw_input": "Q1: Teamwork?",
"llm_output": "1. B is best",
"reported_score": "85",
})
assert resp.status_code == 200
assert resp.json()["id"] == 42
# received_at generated by backend — not None
call_args = mock_insert.call_args
assert call_args[1]["received_at"] is not None or call_args[0][3] is not None
assert "id" in resp.json()
def test_save_response_with_image(client, tmp_path, monkeypatch):
"""Save image response writes PNG file and stores path in DB."""
monkeypatch.setenv("STAGING_DB", str(tmp_path / "test.db"))
with patch("dev_api.insert_survey_response", return_value=7) as mock_insert:
with patch("dev_api.Path") as mock_path_cls:
mock_path_cls.return_value.__truediv__ = lambda s, o: tmp_path / o
resp = client.post("/api/jobs/1/survey/responses", json={
"mode": "quick",
"source": "screenshot",
"image_b64": "aGVsbG8=", # valid base64
"llm_output": "1. B — reason",
})
def test_save_response_with_image(client):
"""Save a screenshot-mode survey response returns an id."""
resp = client.post("/api/jobs/1/survey/responses", json={
"survey_name": None,
"mode": "quick",
"source": "screenshot",
"image_b64": "aGVsbG8=",
"llm_output": "1. C collaborative",
"reported_score": None,
})
assert resp.status_code == 200
assert resp.json()["id"] == 7
assert "id" in resp.json()
# ── GET /api/jobs/{id}/survey/responses ─────────────────────────────────────
def test_get_history_empty(client):
"""Returns empty list when no history exists."""
with patch("dev_api.get_survey_responses", return_value=[]):
resp = client.get("/api/jobs/1/survey/responses")
"""History is empty for a fresh job."""
resp = client.get("/api/jobs/1/survey/responses")
assert resp.status_code == 200
assert resp.json() == []
def test_get_history_populated(client):
"""Returns history rows newest first."""
rows = [
{"id": 2, "survey_name": "Round 2", "mode": "detailed", "source": "text_paste",
"raw_input": None, "image_path": None, "llm_output": "Option A is best",
"reported_score": "90%", "received_at": "2026-03-21T14:00:00", "created_at": "2026-03-21T14:00:01"},
{"id": 1, "survey_name": "Round 1", "mode": "quick", "source": "text_paste",
"raw_input": "Q1: test", "image_path": None, "llm_output": "1. B",
"reported_score": None, "received_at": "2026-03-21T12:00:00", "created_at": "2026-03-21T12:00:01"},
]
with patch("dev_api.get_survey_responses", return_value=rows):
resp = client.get("/api/jobs/1/survey/responses")
"""History returns all saved responses for a job in reverse order."""
for i in range(2):
client.post("/api/jobs/1/survey/responses", json={
"survey_name": f"Survey {i}",
"mode": "quick",
"source": "text_paste",
"llm_output": f"Output {i}",
})
resp = client.get("/api/jobs/1/survey/responses")
assert resp.status_code == 200
data = resp.json()
assert len(data) == 2
assert data[0]["id"] == 2
assert data[0]["survey_name"] == "Round 2"
assert len(resp.json()) == 2

tests/test_messaging.py Normal file

@ -0,0 +1,399 @@
"""
Unit tests for scripts/messaging.py DB helpers for messages and message_templates.
TDD approach: tests written before implementation.
"""
import sqlite3
from pathlib import Path
import pytest
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
def _apply_migration_008(db_path: Path) -> None:
"""Apply migration 008 directly so tests run without the full migrate_db stack."""
migration = (
Path(__file__).parent.parent / "migrations" / "008_messaging.sql"
)
sql = migration.read_text(encoding="utf-8")
con = sqlite3.connect(db_path)
try:
# Create jobs table stub so FK references don't break
con.execute("""
CREATE TABLE IF NOT EXISTS jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT
)
""")
con.execute("""
CREATE TABLE IF NOT EXISTS job_contacts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER
)
""")
# Execute migration statements
statements = [s.strip() for s in sql.split(";") if s.strip()]
for stmt in statements:
stripped = "\n".join(
ln for ln in stmt.splitlines() if not ln.strip().startswith("--")
).strip()
if stripped:
con.execute(stripped)
con.commit()
finally:
con.close()
@pytest.fixture()
def db_path(tmp_path: Path) -> Path:
"""Temporary SQLite DB with migration 008 applied."""
path = tmp_path / "test.db"
_apply_migration_008(path)
return path
@pytest.fixture()
def job_id(db_path: Path) -> int:
"""Insert a dummy job and return its id."""
con = sqlite3.connect(db_path)
try:
cur = con.execute("INSERT INTO jobs (title) VALUES ('Test Job')")
con.commit()
return cur.lastrowid
finally:
con.close()
# ---------------------------------------------------------------------------
# Message tests
# ---------------------------------------------------------------------------
class TestCreateMessage:
def test_create_returns_dict(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import create_message
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Hello",
body="Body text",
from_addr="me@example.com",
to_addr="them@example.com",
template_id=None,
)
assert isinstance(msg, dict)
assert msg["subject"] == "Hello"
assert msg["body"] == "Body text"
assert msg["direction"] == "outbound"
assert msg["type"] == "email"
assert "id" in msg
assert msg["id"] > 0
def test_create_persists_to_db(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import create_message
create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Persisted",
body="Stored body",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
con = sqlite3.connect(db_path)
try:
row = con.execute(
"SELECT subject FROM messages WHERE subject='Persisted'"
).fetchone()
assert row is not None
finally:
con.close()
class TestListMessages:
def _make_message(
self,
db_path: Path,
job_id: int,
*,
type: str = "email",
direction: str = "outbound",
subject: str = "Subject",
) -> dict:
from scripts.messaging import create_message
return create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type=type,
direction=direction,
subject=subject,
body="body",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
def test_list_returns_all_messages(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
self._make_message(db_path, job_id, subject="First")
self._make_message(db_path, job_id, subject="Second")
result = list_messages(db_path)
assert len(result) == 2
def test_list_filtered_by_job_id(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
# Create a second job
con = sqlite3.connect(db_path)
try:
cur = con.execute("INSERT INTO jobs (title) VALUES ('Other Job')")
con.commit()
other_job_id = cur.lastrowid
finally:
con.close()
self._make_message(db_path, job_id, subject="For job 1")
self._make_message(db_path, other_job_id, subject="For job 2")
result = list_messages(db_path, job_id=job_id)
assert len(result) == 1
assert result[0]["subject"] == "For job 1"
def test_list_filtered_by_type(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
self._make_message(db_path, job_id, type="email", subject="Email msg")
self._make_message(db_path, job_id, type="sms", subject="SMS msg")
emails = list_messages(db_path, type="email")
assert len(emails) == 1
assert emails[0]["type"] == "email"
def test_list_filtered_by_direction(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
self._make_message(db_path, job_id, direction="outbound")
self._make_message(db_path, job_id, direction="inbound")
outbound = list_messages(db_path, direction="outbound")
assert len(outbound) == 1
assert outbound[0]["direction"] == "outbound"
def test_list_respects_limit(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
for i in range(5):
self._make_message(db_path, job_id, subject=f"Msg {i}")
result = list_messages(db_path, limit=3)
assert len(result) == 3
class TestDeleteMessage:
def test_delete_removes_message(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import create_message, delete_message, list_messages
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="To delete",
body="bye",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
delete_message(db_path, msg["id"])
assert list_messages(db_path) == []
def test_delete_raises_key_error_when_not_found(self, db_path: Path) -> None:
from scripts.messaging import delete_message
with pytest.raises(KeyError):
delete_message(db_path, 99999)
class TestApproveMessage:
def test_approve_sets_approved_at(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import approve_message, create_message
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Draft",
body="Draft body",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
assert msg.get("approved_at") is None
updated = approve_message(db_path, msg["id"])
assert updated["approved_at"] is not None
assert updated["id"] == msg["id"]
def test_approve_returns_full_dict(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import approve_message, create_message
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Draft",
body="Body here",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
updated = approve_message(db_path, msg["id"])
assert updated["body"] == "Body here"
assert updated["subject"] == "Draft"
def test_approve_raises_key_error_when_not_found(self, db_path: Path) -> None:
from scripts.messaging import approve_message
with pytest.raises(KeyError):
approve_message(db_path, 99999)
# ---------------------------------------------------------------------------
# Template tests
# ---------------------------------------------------------------------------
class TestListTemplates:
def test_includes_four_builtins(self, db_path: Path) -> None:
from scripts.messaging import list_templates
templates = list_templates(db_path)
builtin_keys = {t["key"] for t in templates if t["is_builtin"]}
assert builtin_keys == {
"follow_up",
"thank_you",
"accommodation_request",
"withdrawal",
}
def test_returns_list_of_dicts(self, db_path: Path) -> None:
from scripts.messaging import list_templates
templates = list_templates(db_path)
assert isinstance(templates, list)
assert all(isinstance(t, dict) for t in templates)
class TestCreateTemplate:
def test_create_returns_dict(self, db_path: Path) -> None:
from scripts.messaging import create_template
tmpl = create_template(
db_path,
title="My Template",
category="custom",
subject_template="Hello {{name}}",
body_template="Dear {{name}}, ...",
)
assert isinstance(tmpl, dict)
assert tmpl["title"] == "My Template"
assert tmpl["category"] == "custom"
assert tmpl["is_builtin"] == 0
assert "id" in tmpl
def test_create_default_category(self, db_path: Path) -> None:
from scripts.messaging import create_template
tmpl = create_template(
db_path,
title="No Category",
body_template="Body",
)
assert tmpl["category"] == "custom"
def test_create_appears_in_list(self, db_path: Path) -> None:
from scripts.messaging import create_template, list_templates
create_template(db_path, title="Listed", body_template="Body")
titles = [t["title"] for t in list_templates(db_path)]
assert "Listed" in titles

class TestUpdateTemplate:
def test_update_user_template(self, db_path: Path) -> None:
from scripts.messaging import create_template, update_template
tmpl = create_template(db_path, title="Original", body_template="Old body")
updated = update_template(db_path, tmpl["id"], title="Updated", body_template="New body")
assert updated["title"] == "Updated"
assert updated["body_template"] == "New body"
def test_update_returns_persisted_values(self, db_path: Path) -> None:
from scripts.messaging import create_template, list_templates, update_template
tmpl = create_template(db_path, title="Before", body_template="x")
update_template(db_path, tmpl["id"], title="After")
templates = list_templates(db_path)
titles = [t["title"] for t in templates]
assert "After" in titles
assert "Before" not in titles
def test_update_builtin_raises_permission_error(self, db_path: Path) -> None:
from scripts.messaging import list_templates, update_template
builtin = next(t for t in list_templates(db_path) if t["is_builtin"])
with pytest.raises(PermissionError):
update_template(db_path, builtin["id"], title="Hacked")
def test_update_missing_raises_key_error(self, db_path: Path) -> None:
from scripts.messaging import update_template
with pytest.raises(KeyError):
update_template(db_path, 9999, title="Ghost")

class TestDeleteTemplate:
def test_delete_user_template(self, db_path: Path) -> None:
from scripts.messaging import create_template, delete_template, list_templates
tmpl = create_template(db_path, title="To Delete", body_template="bye")
initial_count = len(list_templates(db_path))
delete_template(db_path, tmpl["id"])
assert len(list_templates(db_path)) == initial_count - 1
def test_delete_builtin_raises_permission_error(self, db_path: Path) -> None:
from scripts.messaging import delete_template, list_templates
builtin = next(t for t in list_templates(db_path) if t["is_builtin"])
with pytest.raises(PermissionError):
delete_template(db_path, builtin["id"])
def test_delete_missing_raises_key_error(self, db_path: Path) -> None:
from scripts.messaging import delete_template
with pytest.raises(KeyError):
delete_template(db_path, 99999)


@ -0,0 +1,195 @@
"""Integration tests for messaging endpoints."""
import os
from pathlib import Path
import pytest
from fastapi.testclient import TestClient
from scripts.db_migrate import migrate_db
@pytest.fixture
def fresh_db(tmp_path, monkeypatch):
"""Set up a fresh isolated DB wired to dev_api._request_db."""
db = tmp_path / "test.db"
monkeypatch.setenv("STAGING_DB", str(db))
migrate_db(db)
import dev_api
monkeypatch.setattr(
dev_api,
"_request_db",
type("CV", (), {"get": lambda self: str(db), "set": lambda *a: None})(),
)
monkeypatch.setattr(dev_api, "DB_PATH", str(db))
return db
@pytest.fixture
def client(fresh_db):
import dev_api
return TestClient(dev_api.app)

# ---------------------------------------------------------------------------
# Messages
# ---------------------------------------------------------------------------

def test_create_and_list_message(client):
"""POST /api/messages creates a row; GET /api/messages?job_id= returns it."""
payload = {
"job_id": 1,
"type": "email",
"direction": "outbound",
"subject": "Hello recruiter",
"body": "I am very interested in this role.",
"to_addr": "recruiter@example.com",
}
resp = client.post("/api/messages", json=payload)
assert resp.status_code == 200, resp.text
created = resp.json()
assert created["subject"] == "Hello recruiter"
assert created["job_id"] == 1
resp = client.get("/api/messages", params={"job_id": 1})
assert resp.status_code == 200
messages = resp.json()
assert any(m["id"] == created["id"] for m in messages)
def test_delete_message(client):
"""DELETE removes the message; subsequent GET no longer returns it."""
resp = client.post("/api/messages", json={"type": "email", "direction": "outbound", "body": "bye"})
assert resp.status_code == 200
msg_id = resp.json()["id"]
resp = client.delete(f"/api/messages/{msg_id}")
assert resp.status_code == 200
assert resp.json()["ok"] is True
resp = client.get("/api/messages")
assert resp.status_code == 200
ids = [m["id"] for m in resp.json()]
assert msg_id not in ids
def test_delete_message_not_found(client):
"""DELETE /api/messages/9999 returns 404."""
resp = client.delete("/api/messages/9999")
assert resp.status_code == 404

# ---------------------------------------------------------------------------
# Templates
# ---------------------------------------------------------------------------

def test_list_templates_has_builtins(client):
"""GET /api/message-templates includes the seeded built-in keys."""
resp = client.get("/api/message-templates")
assert resp.status_code == 200
templates = resp.json()
keys = {t["key"] for t in templates}
assert "follow_up" in keys
assert "thank_you" in keys
def test_template_create_update_delete(client):
"""Full lifecycle: create → update title → delete a user-defined template."""
# Create
resp = client.post("/api/message-templates", json={
"title": "My Template",
"category": "custom",
"body_template": "Hello {{name}}",
})
assert resp.status_code == 200
tmpl = resp.json()
assert tmpl["title"] == "My Template"
assert tmpl["is_builtin"] == 0
tmpl_id = tmpl["id"]
# Update title
resp = client.put(f"/api/message-templates/{tmpl_id}", json={"title": "Updated Title"})
assert resp.status_code == 200
assert resp.json()["title"] == "Updated Title"
# Delete
resp = client.delete(f"/api/message-templates/{tmpl_id}")
assert resp.status_code == 200
assert resp.json()["ok"] is True
# Confirm gone
resp = client.get("/api/message-templates")
ids = [t["id"] for t in resp.json()]
assert tmpl_id not in ids
def test_builtin_template_put_returns_403(client):
"""PUT on a built-in template returns 403."""
resp = client.get("/api/message-templates")
builtin = next(t for t in resp.json() if t["is_builtin"] == 1)
resp = client.put(f"/api/message-templates/{builtin['id']}", json={"title": "Hacked"})
assert resp.status_code == 403
def test_builtin_template_delete_returns_403(client):
"""DELETE on a built-in template returns 403."""
resp = client.get("/api/message-templates")
builtin = next(t for t in resp.json() if t["is_builtin"] == 1)
resp = client.delete(f"/api/message-templates/{builtin['id']}")
assert resp.status_code == 403

# ---------------------------------------------------------------------------
# Draft reply (tier gate)
# ---------------------------------------------------------------------------

def test_draft_without_llm_returns_402(fresh_db, monkeypatch):
"""POST /api/contacts/{id}/draft-reply with free tier + no LLM configured returns 402."""
import dev_api
from scripts.db import add_contact
# Insert a job_contacts row via the db helper so schema changes stay in sync
contact_id = add_contact(
fresh_db,
job_id=None,
direction="inbound",
subject="Test subject",
from_addr="hr@example.com",
body="We would like to schedule...",
)
# Ensure has_configured_llm returns False at both import locations
monkeypatch.setattr("app.wizard.tiers.has_configured_llm", lambda *a, **kw: False)
# Force free tier via the tiers module (not via header — header is no longer trusted)
monkeypatch.setattr("app.wizard.tiers.effective_tier", lambda: "free")
client = TestClient(dev_api.app)
resp = client.post(f"/api/contacts/{contact_id}/draft-reply")
assert resp.status_code == 402
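
The two monkeypatch targets above outline the gate itself: free tier plus no configured LLM yields 402. A minimal sketch of that check with a stand-in exception class (the real endpoint raises FastAPI's `HTTPException` with status 402; `draft_reply_gate` and `PaymentRequired` are hypothetical names):

```python
class PaymentRequired(Exception):
    """Stand-in for FastAPI's HTTPException(status_code=402) in this sketch."""
    status_code = 402


def draft_reply_gate(tier: str, llm_configured: bool) -> None:
    # Free-tier users with no configured LLM cannot use AI drafting;
    # this mirrors the 402 the integration test above asserts.
    if tier == "free" and not llm_configured:
        raise PaymentRequired("Configure an LLM or upgrade to use AI drafting")
```

Paid tiers, and free tiers that do have an LLM configured, pass through untouched.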

# ---------------------------------------------------------------------------
# Approve
# ---------------------------------------------------------------------------

def test_approve_message(client):
"""POST /api/messages then POST /api/messages/{id}/approve returns body + approved_at."""
resp = client.post("/api/messages", json={
"type": "draft",
"direction": "outbound",
"body": "This is my draft reply.",
})
assert resp.status_code == 200
msg_id = resp.json()["id"]
assert resp.json()["approved_at"] is None
resp = client.post(f"/api/messages/{msg_id}/approve")
assert resp.status_code == 200
data = resp.json()
assert data["body"] == "This is my draft reply."
assert data["approved_at"] is not None
def test_approve_message_not_found(client):
"""POST /api/messages/9999/approve returns 404."""
resp = client.post("/api/messages/9999/approve")
assert resp.status_code == 404


@ -0,0 +1,161 @@
# tests/test_mission_domains.py
"""Tests for YAML-driven mission domain configuration."""
import sys
from pathlib import Path
import pytest
import yaml
sys.path.insert(0, str(Path(__file__).parent.parent))

# ── load_mission_domains ──────────────────────────────────────────────────────

def test_load_mission_domains_returns_dict(tmp_path: Path) -> None:
"""load_mission_domains parses a valid YAML file into a dict."""
cfg = tmp_path / "mission_domains.yaml"
cfg.write_text(
"domains:\n"
" music:\n"
" signals: [music, spotify]\n"
" default_note: A music note.\n"
)
from scripts.generate_cover_letter import load_mission_domains
result = load_mission_domains(cfg)
assert "music" in result
assert result["music"]["signals"] == ["music", "spotify"]
assert result["music"]["default_note"] == "A music note."
def test_load_mission_domains_missing_file_returns_empty(tmp_path: Path) -> None:
"""load_mission_domains returns {} when the file does not exist."""
from scripts.generate_cover_letter import load_mission_domains
result = load_mission_domains(tmp_path / "nonexistent.yaml")
assert result == {}
def test_load_mission_domains_empty_file_returns_empty(tmp_path: Path) -> None:
"""load_mission_domains returns {} for a blank file."""
cfg = tmp_path / "mission_domains.yaml"
cfg.write_text("")
from scripts.generate_cover_letter import load_mission_domains
result = load_mission_domains(cfg)
assert result == {}

# ── detect_mission_alignment ─────────────────────────────────────────────────

def _make_signals(domains: dict[str, dict]) -> dict[str, list[str]]:
return {d: cfg.get("signals", []) for d, cfg in domains.items()}
def test_detect_returns_note_on_signal_match() -> None:
"""detect_mission_alignment returns the domain note when a signal is present."""
from scripts.generate_cover_letter import detect_mission_alignment
notes = {"music": "Music note here."}
result = detect_mission_alignment("Spotify", "We stream music worldwide.", notes)
assert result == "Music note here."
def test_detect_returns_none_on_no_match() -> None:
"""detect_mission_alignment returns None when no signal matches."""
from scripts.generate_cover_letter import detect_mission_alignment
notes = {"music": "Music note."}
result = detect_mission_alignment("Acme Corp", "We sell widgets.", notes)
assert result is None
def test_detect_is_case_insensitive() -> None:
"""Signal matching is case-insensitive (text is lowercased before scan)."""
from scripts.generate_cover_letter import detect_mission_alignment
notes = {"animal_welfare": "Animal note."}
result = detect_mission_alignment("ASPCA", "We care for ANIMALS.", notes)
assert result == "Animal note."
def test_detect_uses_default_mission_notes_when_none_passed() -> None:
"""detect_mission_alignment uses module-level _MISSION_NOTES when notes=None."""
from scripts.generate_cover_letter import detect_mission_alignment, _MISSION_DOMAINS
if "music" not in _MISSION_DOMAINS:
pytest.skip("music domain not present in loaded config")
result = detect_mission_alignment("Spotify", "We build music streaming products.")
assert result is not None
assert len(result) > 10 # some non-empty hint

# ── _build_mission_notes ─────────────────────────────────────────────────────

def test_build_mission_notes_uses_default_when_no_custom(tmp_path: Path) -> None:
"""_build_mission_notes uses YAML default_note when user has no custom note."""
cfg = tmp_path / "mission_domains.yaml"
cfg.write_text(
"domains:\n"
" music:\n"
" signals: [music]\n"
" default_note: Generic music note.\n"
)
class EmptyProfile:
name = "Test User"
mission_preferences: dict = {}
from scripts.generate_cover_letter import load_mission_domains, _build_mission_notes
import scripts.generate_cover_letter as gcl
domains_orig = gcl._MISSION_DOMAINS
signals_orig = gcl._MISSION_SIGNALS
try:
gcl._MISSION_DOMAINS = load_mission_domains(cfg)
gcl._MISSION_SIGNALS = _make_signals(gcl._MISSION_DOMAINS)
notes = _build_mission_notes(profile=EmptyProfile())
assert notes["music"] == "Generic music note."
finally:
gcl._MISSION_DOMAINS = domains_orig
gcl._MISSION_SIGNALS = signals_orig
def test_build_mission_notes_uses_custom_note_when_provided(tmp_path: Path) -> None:
"""_build_mission_notes wraps user's custom note in a prompt hint."""
cfg = tmp_path / "mission_domains.yaml"
cfg.write_text(
"domains:\n"
" music:\n"
" signals: [music]\n"
" default_note: Default.\n"
)
class FakeProfile:
name = "Alex"
mission_preferences = {"music": "I played guitar for 10 years."}
from scripts.generate_cover_letter import load_mission_domains, _build_mission_notes
import scripts.generate_cover_letter as gcl
domains_orig = gcl._MISSION_DOMAINS
signals_orig = gcl._MISSION_SIGNALS
try:
gcl._MISSION_DOMAINS = load_mission_domains(cfg)
gcl._MISSION_SIGNALS = _make_signals(gcl._MISSION_DOMAINS)
notes = _build_mission_notes(profile=FakeProfile())
assert "I played guitar for 10 years." in notes["music"]
assert "Alex" in notes["music"]
finally:
gcl._MISSION_DOMAINS = domains_orig
gcl._MISSION_SIGNALS = signals_orig

# ── committed config sanity checks ───────────────────────────────────────────

def test_committed_config_has_required_domains() -> None:
"""The committed mission_domains.yaml contains the original 4 domains + 3 new ones."""
from scripts.generate_cover_letter import _MISSION_DOMAINS
required = {"music", "animal_welfare", "education", "social_impact", "health",
"privacy", "accessibility", "open_source"}
missing = required - set(_MISSION_DOMAINS.keys())
assert not missing, f"Missing domains in committed config: {missing}"
def test_committed_config_each_domain_has_signals_and_note() -> None:
"""Every domain in the committed config has a non-empty signals list and default_note."""
from scripts.generate_cover_letter import _MISSION_DOMAINS
for domain, cfg in _MISSION_DOMAINS.items():
assert cfg.get("signals"), f"Domain '{domain}' has no signals"
assert cfg.get("default_note", "").strip(), f"Domain '{domain}' has no default_note"

tests/test_resume_sync.py (new file, +207)

@ -0,0 +1,207 @@
"""Unit tests for scripts.resume_sync — format transform between library and profile."""
import json
import pytest
from scripts.resume_sync import (
library_to_profile_content,
profile_to_library,
make_auto_backup_name,
blank_fields_on_import,
)

# ── Fixtures ──────────────────────────────────────────────────────────────────

STRUCT_JSON = {
"name": "Alex Rivera",
"email": "alex@example.com",
"phone": "555-0100",
"career_summary": "Senior UX Designer with 6 years experience.",
"experience": [
{
"title": "Senior UX Designer",
"company": "StreamNote",
"start_date": "2023",
"end_date": "present",
"location": "Remote",
"bullets": ["Led queue redesign", "Built component library"],
}
],
"education": [
{
"institution": "State University",
"degree": "B.F.A.",
"field": "Graphic Design",
"start_date": "2015",
"end_date": "2019",
}
],
"skills": ["Figma", "User Research"],
"achievements": ["Design award 2024"],
}
PROFILE_PAYLOAD = {
"name": "Alex",
"surname": "Rivera",
"email": "alex@example.com",
"phone": "555-0100",
"career_summary": "Senior UX Designer with 6 years experience.",
"experience": [
{
"title": "Senior UX Designer",
"company": "StreamNote",
"period": "2023 present",
"location": "Remote",
"industry": "",
"responsibilities": "Led queue redesign\nBuilt component library",
"skills": [],
}
],
"education": [
{
"institution": "State University",
"degree": "B.F.A.",
"field": "Graphic Design",
"start_date": "2015",
"end_date": "2019",
}
],
"skills": ["Figma", "User Research"],
"achievements": ["Design award 2024"],
}

# ── library_to_profile_content ────────────────────────────────────────────────

def test_library_to_profile_splits_name():
result = library_to_profile_content(STRUCT_JSON)
assert result["name"] == "Alex"
assert result["surname"] == "Rivera"
def test_library_to_profile_single_word_name():
result = library_to_profile_content({**STRUCT_JSON, "name": "Cher"})
assert result["name"] == "Cher"
assert result["surname"] == ""
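
The split these two tests exercise is a first-space partition; a sketch of that behavior (assumed, not the library's actual code):

```python
def split_name(full_name: str) -> tuple[str, str]:
    # "Alex Rivera" -> ("Alex", "Rivera"); a single word leaves surname empty.
    first, _, surname = full_name.strip().partition(" ")
    return first, surname
```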
def test_library_to_profile_email_phone():
result = library_to_profile_content(STRUCT_JSON)
assert result["email"] == "alex@example.com"
assert result["phone"] == "555-0100"
def test_library_to_profile_career_summary():
result = library_to_profile_content(STRUCT_JSON)
assert result["career_summary"] == "Senior UX Designer with 6 years experience."
def test_library_to_profile_experience_period():
result = library_to_profile_content(STRUCT_JSON)
assert result["experience"][0]["period"] == "2023 present"
def test_library_to_profile_experience_bullets_joined():
result = library_to_profile_content(STRUCT_JSON)
assert result["experience"][0]["responsibilities"] == "Led queue redesign\nBuilt component library"
def test_library_to_profile_experience_industry_blank():
result = library_to_profile_content(STRUCT_JSON)
assert result["experience"][0]["industry"] == ""
def test_library_to_profile_education():
result = library_to_profile_content(STRUCT_JSON)
assert result["education"][0]["institution"] == "State University"
assert result["education"][0]["degree"] == "B.F.A."
def test_library_to_profile_skills():
result = library_to_profile_content(STRUCT_JSON)
assert result["skills"] == ["Figma", "User Research"]
def test_library_to_profile_achievements():
result = library_to_profile_content(STRUCT_JSON)
assert result["achievements"] == ["Design award 2024"]
def test_library_to_profile_missing_fields_no_keyerror():
result = library_to_profile_content({})
assert result["name"] == ""
assert result["experience"] == []
assert result["education"] == []
assert result["skills"] == []
assert result["achievements"] == []

# ── profile_to_library ────────────────────────────────────────────────────────

def test_profile_to_library_full_name():
text, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["name"] == "Alex Rivera"
def test_profile_to_library_experience_bullets_reconstructed():
_, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["experience"][0]["bullets"] == ["Led queue redesign", "Built component library"]
def test_profile_to_library_period_split():
_, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["experience"][0]["start_date"] == "2023"
assert struct["experience"][0]["end_date"] == "present"
def test_profile_to_library_period_split_iso_dates():
"""ISO dates (with hyphens) must round-trip through the period field correctly."""
payload = {
**PROFILE_PAYLOAD,
"experience": [{
**PROFILE_PAYLOAD["experience"][0],
"period": "2023-01 \u2013 2025-03",
}],
}
_, struct = profile_to_library(payload)
assert struct["experience"][0]["start_date"] == "2023-01"
assert struct["experience"][0]["end_date"] == "2025-03"
def test_profile_to_library_period_split_em_dash():
"""Em-dash separator is also handled."""
payload = {
**PROFILE_PAYLOAD,
"experience": [{
**PROFILE_PAYLOAD["experience"][0],
"period": "2022-06 \u2014 2023-12",
}],
}
_, struct = profile_to_library(payload)
assert struct["experience"][0]["start_date"] == "2022-06"
assert struct["experience"][0]["end_date"] == "2023-12"
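
The two dash tests make the design constraint visible: ISO dates themselves contain hyphens, so the period separator must be a spaced dash, with plain whitespace as the fallback ("2023 present"). A splitter satisfying all three period tests could look like this (a hypothetical sketch; the real `resume_sync` logic may differ):

```python
import re


def split_period(period: str) -> tuple[str, str]:
    # Prefer a spaced dash separator ("2023-01 – 2025-03") so hyphens inside
    # ISO dates are never mistaken for it; fall back to bare whitespace.
    # A lone value yields an empty end date.
    parts = re.split(r"\s+[-\u2013\u2014]\s+|\s+", period.strip(), maxsplit=1)
    return parts[0], parts[1] if len(parts) > 1 else ""
```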
def test_profile_to_library_education_round_trip():
_, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["education"][0]["institution"] == "State University"
def test_profile_to_library_plain_text_contains_name():
text, _ = profile_to_library(PROFILE_PAYLOAD)
assert "Alex Rivera" in text
def test_profile_to_library_plain_text_contains_summary():
text, _ = profile_to_library(PROFILE_PAYLOAD)
assert "Senior UX Designer" in text
def test_profile_to_library_empty_payload_no_crash():
text, struct = profile_to_library({})
assert isinstance(text, str)
assert isinstance(struct, dict)

# ── make_auto_backup_name ─────────────────────────────────────────────────────

def test_backup_name_format():
name = make_auto_backup_name("Senior Engineer Resume")
import re
assert re.match(r"Auto-backup before Senior Engineer Resume — \d{4}-\d{2}-\d{2}", name)

# ── blank_fields_on_import ────────────────────────────────────────────────────

def test_blank_fields_industry_always_listed():
result = blank_fields_on_import(STRUCT_JSON)
assert "experience[].industry" in result
def test_blank_fields_location_listed_when_missing():
no_loc = {**STRUCT_JSON, "experience": [{**STRUCT_JSON["experience"][0], "location": ""}]}
result = blank_fields_on_import(no_loc)
assert "experience[].location" in result
def test_blank_fields_location_not_listed_when_present():
result = blank_fields_on_import(STRUCT_JSON)
assert "experience[].location" not in result


@ -0,0 +1,134 @@
"""Integration tests for resume library<->profile sync endpoints."""
import json
import os
from pathlib import Path
import pytest
import yaml
from fastapi.testclient import TestClient
from scripts.db import create_resume, get_resume, list_resumes
from scripts.db_migrate import migrate_db
STRUCT_JSON = {
"name": "Alex Rivera",
"email": "alex@example.com",
"phone": "555-0100",
"career_summary": "Senior UX Designer.",
"experience": [{"title": "Designer", "company": "Acme", "start_date": "2022",
"end_date": "present", "location": "Remote", "bullets": ["Led redesign"]}],
"education": [{"institution": "State U", "degree": "B.A.", "field": "Design",
"start_date": "2016", "end_date": "2020"}],
"skills": ["Figma"],
"achievements": ["Design award"],
}
@pytest.fixture
def fresh_db(tmp_path, monkeypatch):
"""Set up a fresh isolated DB + config dir, wired to dev_api._request_db."""
db = tmp_path / "test.db"
cfg = tmp_path / "config"
cfg.mkdir()
# STAGING_DB drives _user_yaml_path() -> dirname(db)/config/user.yaml
monkeypatch.setenv("STAGING_DB", str(db))
migrate_db(db)
import dev_api
monkeypatch.setattr(
dev_api,
"_request_db",
type("CV", (), {"get": lambda self: str(db), "set": lambda *a: None})(),
)
return db, cfg
def test_apply_to_profile_updates_yaml(fresh_db, monkeypatch):
db, cfg = fresh_db
import dev_api
client = TestClient(dev_api.app)
entry = create_resume(db, name="Test Resume",
text="Alex Rivera\n", source="uploaded",
struct_json=json.dumps(STRUCT_JSON))
resp = client.post(f"/api/resumes/{entry['id']}/apply-to-profile")
assert resp.status_code == 200
data = resp.json()
assert data["ok"] is True
assert "backup_id" in data
assert "Auto-backup before Test Resume" in data["backup_name"]
profile_yaml = cfg / "plain_text_resume.yaml"
assert profile_yaml.exists()
profile = yaml.safe_load(profile_yaml.read_text())
assert profile["career_summary"] == "Senior UX Designer."
# Name split: "Alex Rivera" -> name="Alex", surname="Rivera"
assert profile["name"] == "Alex"
assert profile["surname"] == "Rivera"
assert profile["education"][0]["institution"] == "State U"
def test_apply_to_profile_creates_backup(fresh_db, monkeypatch):
db, cfg = fresh_db
profile_path = cfg / "plain_text_resume.yaml"
profile_path.write_text(yaml.dump({"name": "Old Name", "career_summary": "Old summary"}))
entry = create_resume(db, name="New Resume",
text="Alex Rivera\n", source="uploaded",
struct_json=json.dumps(STRUCT_JSON))
import dev_api
client = TestClient(dev_api.app)
client.post(f"/api/resumes/{entry['id']}/apply-to-profile")
resumes = list_resumes(db_path=db)
backup = next((r for r in resumes if r["source"] == "auto_backup"), None)
assert backup is not None
def test_apply_to_profile_preserves_metadata(fresh_db, monkeypatch):
db, cfg = fresh_db
profile_path = cfg / "plain_text_resume.yaml"
profile_path.write_text(yaml.dump({
"name": "Old", "salary_min": 80000, "salary_max": 120000,
"remote": True, "gender": "non-binary",
}))
entry = create_resume(db, name="New",
text="Alex\n", source="uploaded",
struct_json=json.dumps(STRUCT_JSON))
import dev_api
client = TestClient(dev_api.app)
client.post(f"/api/resumes/{entry['id']}/apply-to-profile")
profile = yaml.safe_load(profile_path.read_text())
assert profile["salary_min"] == 80000
assert profile["remote"] is True
assert profile["gender"] == "non-binary"
def test_save_resume_syncs_to_default_library_entry(fresh_db, monkeypatch):
db, cfg = fresh_db
entry = create_resume(db, name="My Resume",
text="Original", source="manual")
user_yaml = cfg / "user.yaml"
user_yaml.write_text(yaml.dump({"default_resume_id": entry["id"], "wizard_complete": True}))
import dev_api
client = TestClient(dev_api.app)
resp = client.put("/api/settings/resume", json={
"name": "Alex", "career_summary": "Updated summary",
"experience": [], "education": [], "achievements": [], "skills": [],
})
assert resp.status_code == 200
data = resp.json()
assert data["synced_library_entry_id"] == entry["id"]
updated = get_resume(db_path=db, resume_id=entry["id"])
assert updated["synced_at"] is not None
struct = json.loads(updated["struct_json"])
assert struct["career_summary"] == "Updated summary"
def test_save_resume_no_default_no_crash(fresh_db, monkeypatch):
db, cfg = fresh_db
user_yaml = cfg / "user.yaml"
user_yaml.write_text(yaml.dump({"wizard_complete": True}))
import dev_api
client = TestClient(dev_api.app)
resp = client.put("/api/settings/resume", json={
"name": "Alex", "career_summary": "", "experience": [],
"education": [], "achievements": [], "skills": [],
})
assert resp.status_code == 200
assert resp.json()["synced_library_entry_id"] is None


@ -104,7 +104,7 @@ class TestWizardHardware:
r = client.get("/api/wizard/hardware")
assert r.status_code == 200
body = r.json()
- assert set(body["profiles"]) == {"remote", "cpu", "single-gpu", "dual-gpu"}
+ assert {"remote", "cpu", "single-gpu", "dual-gpu"}.issubset(set(body["profiles"]))
assert "gpus" in body
assert "suggested_profile" in body
@ -245,8 +245,10 @@ class TestWizardStep:
assert r.status_code == 200
assert search_path.exists()
prefs = yaml.safe_load(search_path.read_text())
- assert prefs["default"]["job_titles"] == ["Software Engineer", "Backend Developer"]
- assert "Remote" in prefs["default"]["location"]
+ # Step 6 writes canonical {profiles: [{name, titles, locations, ...}]} format
+ default = next(p for p in prefs["profiles"] if p["name"] == "default")
+ assert default["titles"] == ["Software Engineer", "Backend Developer"]
+ assert "Remote" in default["locations"]
def test_step7_only_advances_counter(self, client, tmp_path):
yaml_path = tmp_path / "config" / "user.yaml"


@ -11,6 +11,9 @@
html, body { margin: 0; background: #eaeff8; min-height: 100vh; }
@media (prefers-color-scheme: dark) { html, body { background: #16202e; } }
</style>
<!-- Plausible analytics: cookie-free, GDPR-compliant, self-hosted.
Skips localhost/127.0.0.1. Reports to hostname + circuitforge.tech rollup. -->
<script>(function(){if(/localhost|127\.0\.0\.1/.test(location.hostname))return;var s=document.createElement('script');s.defer=true;s.dataset.domain=location.hostname+',circuitforge.tech';s.dataset.api='https://analytics.circuitforge.tech/api/event';s.src='https://analytics.circuitforge.tech/js/script.js';document.head.appendChild(s);})();</script>
</head>
<body>
<!-- Mount target only — App.vue root must NOT use id="app". Gotcha #1. -->

web/package-lock.json (generated, +39)

@ -12,9 +12,12 @@
"@fontsource/fraunces": "^5.2.9",
"@fontsource/jetbrains-mono": "^5.2.8",
"@heroicons/vue": "^2.2.0",
"@types/dompurify": "^3.0.5",
"@vueuse/core": "^14.2.1",
"@vueuse/integrations": "^14.2.1",
"animejs": "^4.3.6",
"dompurify": "^3.4.0",
"marked": "^18.0.0",
"pinia": "^3.0.4",
"vue": "^3.5.25",
"vue-router": "^5.0.3"
@ -1718,6 +1721,15 @@
"dev": true,
"license": "MIT"
},
"node_modules/@types/dompurify": {
"version": "3.0.5",
"resolved": "https://registry.npmjs.org/@types/dompurify/-/dompurify-3.0.5.tgz",
"integrity": "sha512-1Wg0g3BtQF7sSb27fJQAKck1HECM6zV1EB66j8JH9i3LCjYabJa0FSdiSgsD5K/RbrsR0SiraKacLB+T8ZVYAg==",
"license": "MIT",
"dependencies": {
"@types/trusted-types": "*"
}
},
"node_modules/@types/estree": {
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/@types/estree/-/estree-1.0.8.tgz",
@ -1735,6 +1747,12 @@
"undici-types": "~7.16.0"
}
},
"node_modules/@types/trusted-types": {
"version": "2.0.7",
"resolved": "https://registry.npmjs.org/@types/trusted-types/-/trusted-types-2.0.7.tgz",
"integrity": "sha512-ScaPdn1dQczgbl0QFTeTOmVHFULt394XJgOQNoyVhZ6r2vLnMLJfBPd53SB52T/3G36VI1/g2MZaX0cwDuXsfw==",
"license": "MIT"
},
"node_modules/@types/web-bluetooth": {
"version": "0.0.21",
"resolved": "https://registry.npmjs.org/@types/web-bluetooth/-/web-bluetooth-0.0.21.tgz",
@ -2944,6 +2962,15 @@
"dev": true,
"license": "MIT"
},
"node_modules/dompurify": {
"version": "3.4.0",
"resolved": "https://registry.npmjs.org/dompurify/-/dompurify-3.4.0.tgz",
"integrity": "sha512-nolgK9JcaUXMSmW+j1yaSvaEaoXYHwWyGJlkoCTghc97KgGDDSnpoU/PlEnw63Ah+TGKFOyY+X5LnxaWbCSfXg==",
"license": "(MPL-2.0 OR Apache-2.0)",
"optionalDependencies": {
"@types/trusted-types": "^2.0.7"
}
},
"node_modules/duplexer": {
"version": "0.1.2",
"resolved": "https://registry.npmjs.org/duplexer/-/duplexer-0.1.2.tgz",
@ -3472,6 +3499,18 @@
"url": "https://github.com/sponsors/sxzz"
}
},
"node_modules/marked": {
"version": "18.0.0",
"resolved": "https://registry.npmjs.org/marked/-/marked-18.0.0.tgz",
"integrity": "sha512-2e7Qiv/HJSXj8rDEpgTvGKsP8yYtI9xXHKDnrftrmnrJPaFNM7VRb2YCzWaX4BP1iCJ/XPduzDJZMFoqTCcIMA==",
"license": "MIT",
"bin": {
"marked": "bin/marked.js"
},
"engines": {
"node": ">= 20"
}
},
"node_modules/mdn-data": {
"version": "2.27.1",
"resolved": "https://registry.npmjs.org/mdn-data/-/mdn-data-2.27.1.tgz",


@ -15,9 +15,12 @@
"@fontsource/fraunces": "^5.2.9",
"@fontsource/jetbrains-mono": "^5.2.8",
"@heroicons/vue": "^2.2.0",
"@types/dompurify": "^3.0.5",
"@vueuse/core": "^14.2.1",
"@vueuse/integrations": "^14.2.1",
"animejs": "^4.3.6",
"dompurify": "^3.4.0",
"marked": "^18.0.0",
"pinia": "^3.0.4",
"vue": "^3.5.25",
"vue-router": "^5.0.3"


@ -7,10 +7,9 @@
<!-- Skip to main content link (screen reader / keyboard nav) -->
<a href="#main-content" class="skip-link">Skip to main content</a>
<!-- Demo mode banner sticky top bar, visible on all pages -->
- <div v-if="config.isDemo" class="demo-banner" role="status" aria-live="polite">
- 👁 Demo mode changes are not saved and AI features are disabled.
- </div>
+ <!-- Demo mode banner + welcome modal rendered when isDemo -->
+ <DemoBanner v-if="config.isDemo" />
+ <WelcomeModal v-if="config.isDemo" />
<RouterView />
@ -32,6 +31,8 @@ import { useHackerMode, useKonamiCode } from './composables/useEasterEgg'
import { useTheme } from './composables/useTheme'
import { useToast } from './composables/useToast'
import AppNav from './components/AppNav.vue'
import DemoBanner from './components/DemoBanner.vue'
import WelcomeModal from './components/WelcomeModal.vue'
import { useAppConfigStore } from './stores/appConfig'
import { useDigestStore } from './stores/digest'
@ -128,20 +129,6 @@ body {
padding-bottom: 0;
}
- /* Demo mode banner — sticky top bar */
- .demo-banner {
-   position: sticky;
-   top: 0;
-   z-index: 200;
-   background: var(--color-warning);
-   color: #1a1a1a; /* forced dark — warning bg is always light enough */
-   text-align: center;
-   font-size: 0.85rem;
-   font-weight: 600;
-   padding: 6px var(--space-4, 16px);
-   letter-spacing: 0.01em;
- }
/* Global toast — bottom-center, above tab bar */
.global-toast {
position: fixed;


@ -77,6 +77,7 @@ body {
}
/* ── Dark mode ─────────────────────────────────────── */
/* Covers both: OS-level dark preference AND explicit dark theme selection in UI */
@media (prefers-color-scheme: dark) {
:root:not([data-theme="hacker"]) {
--app-primary: #68A8D8; /* Falcon Blue (dark) — 6.54:1 on #16202e ✅ AA */
@@ -97,6 +98,26 @@
}
}
/* Explicit [data-theme="dark"] fires when user picks dark via theme picker
on a light-OS machine (where prefers-color-scheme: dark won't match) */
[data-theme="dark"]:not([data-theme="hacker"]) {
--app-primary: #68A8D8;
--app-primary-hover: #7BBDE6;
--app-primary-light: #0D1F35;
--app-accent: #F6872A;
--app-accent-hover: #FF9840;
--app-accent-light: #2D1505;
--app-accent-text: #1a2338;
--score-mid-high: #5ba3d9;
--status-synced: #9b8fea;
--status-survey: #b08fea;
--status-phone: #4ec9be;
--status-offer: #f5a43a;
}
/* ── Hacker mode (Konami easter egg) ──────────────── */
[data-theme="hacker"] {
--app-primary: #00ff41;


@@ -96,6 +96,8 @@ import {
NewspaperIcon,
Cog6ToothIcon,
DocumentTextIcon,
UsersIcon,
IdentificationIcon,
} from '@heroicons/vue/24/outline'
import { useDigestStore } from '../stores/digest'
@@ -155,6 +157,8 @@ const navLinks = computed(() => [
{ to: '/apply', icon: PencilSquareIcon, label: 'Apply' },
{ to: '/resumes', icon: DocumentTextIcon, label: 'Resumes' },
{ to: '/interviews', icon: CalendarDaysIcon, label: 'Interviews' },
{ to: '/messages', icon: UsersIcon, label: 'Messages' },
{ to: '/references', icon: IdentificationIcon, label: 'References' },
{ to: '/digest', icon: NewspaperIcon, label: 'Digest',
badge: digestStore.entries.length || undefined },
{ to: '/prep', icon: LightBulbIcon, label: 'Interview Prep' },


@@ -28,7 +28,7 @@
<span v-if="job.is_remote" class="remote-badge">Remote</span>
</div>
<h1 class="job-details__title">{{ job.title }}</h1>
<h2 class="job-details__title">{{ job.title }}</h2>
<div class="job-details__company">
{{ job.company }}
<span v-if="job.location" aria-hidden="true"> · </span>
@@ -38,7 +38,7 @@
<!-- Description -->
<div class="job-details__desc" :class="{ 'job-details__desc--clamped': !descExpanded }">
{{ job.description ?? 'No description available.' }}
<MarkdownView :content="job.description ?? 'No description available.'" />
</div>
<button
v-if="(job.description?.length ?? 0) > 300"
@@ -199,7 +199,7 @@
<!-- Application Q&A -->
<div class="qa-section">
<button class="section-toggle" @click="qaExpanded = !qaExpanded">
<button class="section-toggle" :aria-expanded="qaExpanded" @click="qaExpanded = !qaExpanded">
<span class="section-toggle__label">Application Q&amp;A</span>
<span v-if="qaItems.length" class="qa-count">{{ qaItems.length }}</span>
<span class="section-toggle__icon" aria-hidden="true">{{ qaExpanded ? '▲' : '▼' }}</span>
@@ -290,6 +290,7 @@ import { useAppConfigStore } from '../stores/appConfig'
import type { Job } from '../stores/review'
import ResumeOptimizerPanel from './ResumeOptimizerPanel.vue'
import ResumeLibraryCard from './ResumeLibraryCard.vue'
import MarkdownView from './MarkdownView.vue'
const config = useAppConfigStore()
@@ -458,6 +459,10 @@ async function markApplied() {
async function rejectListing() {
if (actioning.value) return
const title = job.value?.title ?? 'this listing'
const company = job.value?.company ?? ''
const label = company ? `"${title}" at ${company}` : `"${title}"`
if (!window.confirm(`Reject ${label}? This cannot be undone.`)) return
actioning.value = 'reject'
await useApiFetch(`/api/jobs/${props.jobId}/reject`, { method: 'POST' })
actioning.value = null
@@ -706,7 +711,6 @@ declare module '../stores/review' {
font-size: var(--text-sm);
color: var(--color-text);
line-height: 1.6;
white-space: pre-wrap;
overflow-wrap: break-word;
}
@@ -860,7 +864,7 @@ declare module '../stores/review' {
overflow: hidden;
}
.cl-editor__textarea:focus { outline: none; }
.cl-editor__textarea:focus-visible { outline: 2px solid var(--app-primary); outline-offset: 2px; }
.cl-regen {
align-self: flex-end;
@@ -1209,9 +1213,12 @@ declare module '../stores/review' {
}
.qa-item__answer:focus {
outline: none;
border-color: var(--app-primary);
}
.qa-item__answer:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.qa-suggest-btn { align-self: flex-end; }
@@ -1234,9 +1241,12 @@ declare module '../stores/review' {
}
.qa-add__input:focus {
outline: none;
border-color: var(--app-primary);
}
.qa-add__input:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.qa-add__input::placeholder { color: var(--color-text-muted); }


@@ -0,0 +1,79 @@
<template>
<div class="demo-banner" role="status" aria-live="polite">
<span class="demo-banner__label">👁 Demo mode: changes are not saved</span>
<div class="demo-banner__ctas">
<a
href="https://circuitforge.tech/peregrine"
class="demo-banner__cta demo-banner__cta--primary"
target="_blank"
rel="noopener"
>Get free key</a>
<a
href="https://git.opensourcesolarpunk.com/Circuit-Forge/peregrine"
class="demo-banner__cta demo-banner__cta--secondary"
target="_blank"
rel="noopener"
>Self-host</a>
</div>
</div>
</template>
<script setup lang="ts">
// No props: DemoBanner is only rendered when config.isDemo is true (App.vue)
</script>
<style scoped>
.demo-banner {
position: sticky;
top: 0;
z-index: 200;
background: color-mix(in srgb, var(--color-primary) 8%, var(--color-surface-raised));
border-bottom: 1px solid color-mix(in srgb, var(--color-primary) 20%, var(--color-border));
display: flex;
align-items: center;
justify-content: space-between;
padding: 6px var(--space-4);
gap: var(--space-3);
}
.demo-banner__label {
font-size: 0.8rem;
color: var(--color-text-muted);
}
.demo-banner__ctas {
display: flex;
gap: var(--space-2);
flex-shrink: 0;
}
.demo-banner__cta {
font-size: 0.75rem;
font-weight: 600;
padding: 3px 10px;
border-radius: var(--radius-sm);
text-decoration: none;
transition: opacity var(--transition);
}
.demo-banner__cta:hover {
opacity: 0.85;
}
.demo-banner__cta--primary {
background: var(--color-primary);
color: var(--color-surface); /* surface is dark in dark mode, light in light mode — always contrasts primary */
}
.demo-banner__cta--secondary {
background: none;
border: 1px solid var(--color-border);
color: var(--color-text-muted);
}
@media (max-width: 480px) {
.demo-banner__label {
display: none;
}
}
</style>


@@ -0,0 +1,63 @@
<template>
<div v-if="!dismissed" class="hint-chip" role="status">
<span aria-hidden="true" class="hint-chip__icon">💡</span>
<span class="hint-chip__message">{{ message }}</span>
<button
class="hint-chip__dismiss"
@click="dismiss"
:aria-label="`Dismiss hint for ${viewKey}`"
>×</button>
</div>
</template>
<script setup lang="ts">
import { ref } from 'vue'
const props = defineProps<{
viewKey: string // used for the localStorage key, e.g. 'home', 'review'
message: string
}>()
const LS_KEY = `peregrine_hint_${props.viewKey}`
const dismissed = ref(!!localStorage.getItem(LS_KEY))
function dismiss(): void {
localStorage.setItem(LS_KEY, '1')
dismissed.value = true
}
</script>
<style scoped>
.hint-chip {
display: flex;
align-items: flex-start;
gap: var(--space-2, 8px);
background: var(--color-surface, #0d1829);
border: 1px solid var(--app-primary, #2B6CB0);
border-radius: var(--radius-md, 8px);
padding: var(--space-2, 8px) var(--space-3, 12px);
margin-bottom: var(--space-3, 12px);
}
.hint-chip__icon { flex-shrink: 0; font-size: 0.9rem; }
.hint-chip__message {
flex: 1;
font-size: 0.85rem;
color: var(--color-text, #1a202c);
line-height: 1.4;
}
.hint-chip__dismiss {
flex-shrink: 0;
background: none;
border: none;
color: var(--color-text-muted, #8898aa);
cursor: pointer;
font-size: 0.75rem;
padding: 0 2px;
line-height: 1;
}
.hint-chip__dismiss:hover { color: var(--color-text, #eaeff8); }
</style>
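The dismissal logic above keys off `localStorage` per view. A minimal sketch of the same key pattern, with the storage backend stubbed by a `Map` so it runs outside a browser (the `peregrine_hint_` prefix is copied from the component; the rest is illustrative):

```typescript
// Per-view hint dismissal, mirroring HintChip's localStorage keys.
// Storage is stubbed with a Map here; the component uses window.localStorage.
const storage = new Map<string, string>()

const hintKey = (viewKey: string): string => `peregrine_hint_${viewKey}`

function isDismissed(viewKey: string): boolean {
  return storage.has(hintKey(viewKey))
}

function dismiss(viewKey: string): void {
  storage.set(hintKey(viewKey), '1')
}
```

Each view gets its own key, so dismissing the hint on one view leaves the hints on other views visible.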


@@ -4,6 +4,44 @@ import type { PipelineJob } from '../stores/interviews'
import type { StageSignal, PipelineStage } from '../stores/interviews'
import { useApiFetch } from '../composables/useApi'
// Date picker
const DATE_STAGES = new Set(['phone_screen', 'interviewing'])
function toDatetimeLocal(iso: string | null | undefined): string {
if (!iso) return ''
// Trim seconds/ms so <input type="datetime-local"> accepts it
const d = new Date(iso)
const pad = (n: number) => String(n).padStart(2, '0')
return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}T${pad(d.getHours())}:${pad(d.getMinutes())}`
}
async function onDateChange(value: string) {
if (!value) return
const prev = props.job.interview_date
// Optimistic update
props.job.interview_date = new Date(value).toISOString()
const { error } = await useApiFetch(`/api/jobs/${props.job.id}/interview_date`, {
method: 'PATCH',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ interview_date: value }),
})
if (error) props.job.interview_date = prev
}
// Calendar push
type CalPushStatus = 'idle' | 'loading' | 'synced' | 'failed'
const calPushStatus = ref<CalPushStatus>('idle')
let calPushTimer: ReturnType<typeof setTimeout> | null = null
async function pushCalendar() {
if (calPushStatus.value === 'loading') return
calPushStatus.value = 'loading'
const { error } = await useApiFetch(`/api/jobs/${props.job.id}/calendar_push`, { method: 'POST' })
calPushStatus.value = error ? 'failed' : 'synced'
if (calPushTimer) clearTimeout(calPushTimer)
calPushTimer = setTimeout(() => { calPushStatus.value = 'idle' }, 3000)
}
const props = defineProps<{
job: PipelineJob
focused?: boolean
@@ -153,6 +191,37 @@ const columnColor = computed(() => {
}
return map[props.job.status] ?? 'var(--color-border)'
})
// Hired feedback
const FEEDBACK_FACTORS = [
'Resume match',
'Cover letter',
'Interview prep',
'Company research',
'Network / referral',
'Salary negotiation',
] as const
const feedbackDismissed = ref(false)
const feedbackSaved = ref(!!props.job.hired_feedback)
const feedbackText = ref('')
const feedbackFactors = ref<string[]>([])
const feedbackSaving = ref(false)
const showFeedbackWidget = computed(() =>
props.job.status === 'hired' && !feedbackDismissed.value && !feedbackSaved.value
)
async function saveFeedback() {
feedbackSaving.value = true
const { error } = await useApiFetch(`/api/jobs/${props.job.id}/hired-feedback`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ what_helped: feedbackText.value, factors: feedbackFactors.value }),
})
feedbackSaving.value = false
if (!error) feedbackSaved.value = true
}
</script>
<template>
@@ -178,6 +247,17 @@ const columnColor = computed(() => {
<div v-if="interviewDateLabel" class="date-chip">
{{ dateChipIcon }} {{ interviewDateLabel }}
</div>
<!-- Inline date picker for phone_screen and interviewing -->
<div v-if="DATE_STAGES.has(job.status)" class="date-picker-wrap">
<input
type="datetime-local"
class="date-picker"
:value="toDatetimeLocal(job.interview_date)"
:aria-label="`Interview date for ${job.title}`"
@change="onDateChange(($event.target as HTMLInputElement).value)"
@click.stop
/>
</div>
</div>
<footer class="card-footer">
<button class="card-action" @click.stop="emit('move', job.id)">Move to →</button>
@@ -188,6 +268,20 @@ const columnColor = computed(() => {
class="card-action"
@click.stop="emit('survey', job.id)"
>Survey </button>
<!-- Calendar push: phone_screen and interviewing only -->
<button
v-if="DATE_STAGES.has(job.status)"
class="card-action card-action--cal"
:class="`card-action--cal-${calPushStatus}`"
:disabled="calPushStatus === 'loading'"
@click.stop="pushCalendar"
:aria-label="`Push ${job.title} to calendar`"
>
<span v-if="calPushStatus === 'loading'"></span>
<span v-else-if="calPushStatus === 'synced'">Synced ✓</span>
<span v-else-if="calPushStatus === 'failed'">Failed ✗</span>
<span v-else>📅 Calendar</span>
</button>
</footer>
<!-- Signal banners -->
<template v-if="job.stage_signals?.length">
@@ -244,6 +338,38 @@ const columnColor = computed(() => {
@click.stop="sigExpanded = !sigExpanded"
>{{ sigExpanded ? '− less' : `+${(job.stage_signals?.length ?? 1) - 1} more` }}</button>
</template>
<!-- Hired feedback widget -->
<div v-if="showFeedbackWidget" class="hired-feedback" @click.stop>
<div class="hired-feedback__header">
<span class="hired-feedback__title">What helped you land this role?</span>
<button class="hired-feedback__dismiss" @click="feedbackDismissed = true" aria-label="Dismiss feedback">×</button>
</div>
<div class="hired-feedback__factors">
<label
v-for="factor in FEEDBACK_FACTORS"
:key="factor"
class="hired-feedback__factor"
>
<input type="checkbox" :value="factor" v-model="feedbackFactors" />
{{ factor }}
</label>
</div>
<textarea
v-model="feedbackText"
class="hired-feedback__textarea"
placeholder="Anything else that made the difference…"
rows="2"
/>
<button
class="hired-feedback__save"
:disabled="feedbackSaving"
@click="saveFeedback"
>{{ feedbackSaving ? 'Saving…' : 'Save reflection' }}</button>
</div>
<div v-else-if="job.status === 'hired' && feedbackSaved" class="hired-feedback hired-feedback--saved">
Reflection saved.
</div>
</article>
</template>
@@ -338,6 +464,31 @@ const columnColor = computed(() => {
align-self: flex-start;
}
.date-picker-wrap {
margin-top: 4px;
}
.date-picker {
width: 100%;
font-size: 0.72rem;
padding: 3px 6px;
border: 1px solid var(--color-border);
border-radius: var(--radius-md, 6px);
background: var(--color-surface);
color: var(--color-text);
cursor: pointer;
transition: border-color var(--transition, 150ms);
}
.date-picker:hover,
.date-picker:focus {
border-color: var(--color-info);
}
.date-picker:focus-visible {
outline: 2px solid var(--color-info);
outline-offset: 2px;
}
.card-footer {
border-top: 1px solid var(--color-border-light);
@@ -363,6 +514,26 @@ const columnColor = computed(() => {
background: var(--color-surface);
}
.card-action--cal {
margin-left: auto;
min-width: 72px;
text-align: center;
transition: background var(--transition, 150ms), color var(--transition, 150ms);
}
.card-action--cal-synced {
color: var(--color-success);
}
.card-action--cal-failed {
color: var(--color-error);
}
.card-action--cal:disabled {
opacity: 0.6;
cursor: default;
}
.signal-banner {
border-top: 1px solid transparent; /* color set inline */
padding: 8px 12px;
@@ -425,4 +596,80 @@ const columnColor = computed(() => {
background: none; border: none; font-size: 0.75em; color: var(--color-info); cursor: pointer;
padding: 4px 12px; text-align: left;
}
/* ── Hired feedback widget ── */
.hired-feedback {
padding: var(--space-3) var(--space-4);
border-top: 1px solid var(--color-border);
background: rgba(39, 174, 96, 0.04);
display: flex;
flex-direction: column;
gap: var(--space-2);
}
.hired-feedback--saved {
font-size: var(--text-sm);
color: var(--color-text-muted);
text-align: center;
padding: var(--space-2) var(--space-4);
}
.hired-feedback__header {
display: flex;
justify-content: space-between;
align-items: center;
}
.hired-feedback__title {
font-size: var(--text-sm);
font-weight: 600;
color: var(--color-success);
}
.hired-feedback__dismiss {
background: none;
border: none;
color: var(--color-text-muted);
cursor: pointer;
font-size: var(--text-sm);
padding: 2px 4px;
}
.hired-feedback__factors {
display: flex;
flex-wrap: wrap;
gap: var(--space-2);
}
.hired-feedback__factor {
display: flex;
align-items: center;
gap: 4px;
font-size: var(--text-xs);
color: var(--color-text-muted);
cursor: pointer;
}
.hired-feedback__textarea {
width: 100%;
font-size: var(--text-sm);
padding: var(--space-2);
border: 1px solid var(--color-border);
border-radius: 6px;
background: var(--color-surface);
color: var(--color-text);
resize: vertical;
box-sizing: border-box;
}
.hired-feedback__textarea:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.hired-feedback__save {
align-self: flex-end;
padding: var(--space-1) var(--space-3);
font-size: var(--text-sm);
background: var(--color-success);
color: #fff;
border: none;
border-radius: 6px;
cursor: pointer;
}
.hired-feedback__save:disabled {
opacity: 0.6;
cursor: not-allowed;
}
</style>
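The `toDatetimeLocal` helper above feeds the inline `<input type="datetime-local">`. A standalone copy for reference — it converts an ISO timestamp into the local `YYYY-MM-DDTHH:MM` form that input accepts as a value:

```typescript
// Format an ISO timestamp as a local "YYYY-MM-DDTHH:MM" string,
// the only shape <input type="datetime-local"> accepts as a value.
function toDatetimeLocal(iso: string | null | undefined): string {
  if (!iso) return ''
  const d = new Date(iso)
  const pad = (n: number) => String(n).padStart(2, '0')
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())}T${pad(d.getHours())}:${pad(d.getMinutes())}`
}
```

Because the string is built from local-time getters, a value that round-trips through `toISOString()` lands back on the same wall-clock time in the user's zone.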

View file

@@ -7,7 +7,7 @@
}"
:aria-label="`${job.title} at ${job.company}`"
>
<!-- Score badge + remote badge -->
<!-- Score badge + remote badge + shadow badge -->
<div class="job-card__badges">
<span
v-if="job.match_score !== null"
@@ -18,6 +18,18 @@
{{ job.match_score }}%
</span>
<span v-if="job.is_remote" class="remote-badge">Remote</span>
<span
v-if="job.shadow_score === 'shadow'"
class="shadow-badge shadow-badge--shadow"
:title="`Posted 30+ days before discovery — may already be filled`"
aria-label="Possible shadow listing: posted long before discovery"
>Ghost post</span>
<span
v-else-if="job.shadow_score === 'stale'"
class="shadow-badge shadow-badge--stale"
:title="`Posted 14+ days before discovery — listing may be stale`"
aria-label="Stale listing: posted over 2 weeks before discovery"
>Stale</span>
</div>
<!-- Title + company -->
@@ -178,6 +190,28 @@ const formattedDate = computed(() => {
color: var(--app-primary);
}
.shadow-badge {
display: inline-flex;
align-items: center;
padding: 2px var(--space-2);
border-radius: 999px;
font-size: var(--text-xs);
font-weight: 600;
cursor: help;
}
.shadow-badge--shadow {
background: rgba(99, 99, 99, 0.15);
color: var(--color-text-muted);
border: 1px solid rgba(99, 99, 99, 0.3);
}
.shadow-badge--stale {
background: rgba(212, 137, 26, 0.12);
color: var(--score-mid);
border: 1px solid rgba(212, 137, 26, 0.25);
}
.job-card__title {
font-family: var(--font-display);
font-size: var(--text-xl);


@@ -216,7 +216,23 @@ watch(() => props.job.id, () => {
}
})
defineExpose({ dismissApprove, dismissReject, dismissSkip })
/** Restore card to its neutral state — used when an action is blocked (e.g. demo guard). */
function resetCard() {
dx.value = 0
dy.value = 0
isExiting.value = false
isHeld.value = false
if (wrapperEl.value) {
wrapperEl.value.style.transition = 'none'
wrapperEl.value.style.transform = ''
wrapperEl.value.style.opacity = ''
requestAnimationFrame(() => {
if (wrapperEl.value) wrapperEl.value.style.transition = ''
})
}
}
defineExpose({ dismissApprove, dismissReject, dismissSkip, resetCard })
</script>
<style scoped>


@@ -0,0 +1,77 @@
<template>
<!-- eslint-disable-next-line vue/no-v-html -->
<div class="markdown-body" :class="className" v-html="rendered" />
</template>
<script setup lang="ts">
import { computed } from 'vue'
import { marked } from 'marked'
import DOMPurify from 'dompurify'
const props = defineProps<{
content: string
className?: string
}>()
// Configure marked: gfm for GitHub-flavored markdown, breaks converts \n to <br>
marked.setOptions({ gfm: true, breaks: true })
const rendered = computed(() => {
if (!props.content?.trim()) return ''
const html = marked(props.content) as string
return DOMPurify.sanitize(html, {
ALLOWED_TAGS: ['p','br','strong','em','b','i','ul','ol','li','h1','h2','h3','h4','blockquote','code','pre','a','hr'],
ALLOWED_ATTR: ['href','target','rel'],
})
})
</script>
<style scoped>
.markdown-body { line-height: 1.6; color: var(--color-text); }
.markdown-body :deep(p) { margin: 0 0 0.75em; }
.markdown-body :deep(p:last-child) { margin-bottom: 0; }
.markdown-body :deep(ul), .markdown-body :deep(ol) { margin: 0 0 0.75em; padding-left: 1.5em; }
.markdown-body :deep(li) { margin-bottom: 0.25em; }
.markdown-body :deep(h1), .markdown-body :deep(h2), .markdown-body :deep(h3), .markdown-body :deep(h4) {
font-weight: 700; margin: 1em 0 0.4em; color: var(--color-text);
}
.markdown-body :deep(h1) { font-size: 1.2em; }
.markdown-body :deep(h2) { font-size: 1.1em; }
.markdown-body :deep(h3) { font-size: 1em; }
.markdown-body :deep(strong), .markdown-body :deep(b) { font-weight: 700; }
.markdown-body :deep(em), .markdown-body :deep(i) { font-style: italic; }
.markdown-body :deep(code) {
font-family: var(--font-mono);
font-size: 0.875em;
background: var(--color-surface-alt);
border: 1px solid var(--color-border-light);
padding: 0.1em 0.3em;
border-radius: var(--radius-sm);
}
.markdown-body :deep(pre) {
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
padding: var(--space-3);
overflow-x: auto;
font-size: 0.875em;
}
.markdown-body :deep(pre code) { background: none; border: none; padding: 0; }
.markdown-body :deep(blockquote) {
border-left: 3px solid var(--color-accent);
margin: 0.75em 0;
padding: 0.25em 0 0.25em 1em;
color: var(--color-text-muted);
font-style: italic;
}
.markdown-body :deep(hr) {
border: none;
border-top: 1px solid var(--color-border);
margin: 1em 0;
}
.markdown-body :deep(a) {
color: var(--color-accent);
text-decoration: underline;
text-underline-offset: 2px;
}
</style>
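MarkdownView's safety comes from DOMPurify's `ALLOWED_TAGS` allowlist. As a rough illustration of what an allowlist decision looks like (a toy regex stripper — not how DOMPurify works internally, and never a substitute for it in production):

```typescript
// Toy tag allowlist, for illustration only. DOMPurify does real HTML parsing;
// this regex version merely shows the allow/deny decision per tag name.
const ALLOWED = new Set([
  'p', 'br', 'strong', 'em', 'b', 'i', 'ul', 'ol', 'li',
  'h1', 'h2', 'h3', 'h4', 'blockquote', 'code', 'pre', 'a', 'hr',
])

function stripDisallowedTags(html: string): string {
  // Keep a tag verbatim when its name is allowlisted; drop it otherwise.
  return html.replace(/<\/?([a-zA-Z][a-zA-Z0-9]*)\b[^>]*>/g, (tag, name: string) =>
    ALLOWED.has(name.toLowerCase()) ? tag : '',
  )
}
```

Unlike DOMPurify, this toy keeps arbitrary attributes on allowed tags and leaves the inner text of removed elements in place — which is exactly why the component delegates to the real library.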


@@ -0,0 +1,200 @@
<!-- web/src/components/MessageLogModal.vue -->
<template>
<Teleport to="body">
<div
v-if="show"
class="modal-backdrop"
@click.self="emit('close')"
>
<div
ref="dialogEl"
class="modal-dialog"
role="dialog"
aria-modal="true"
:aria-label="title"
tabindex="-1"
@keydown.esc="emit('close')"
>
<header class="modal-header">
<h2 class="modal-title">{{ title }}</h2>
<button class="modal-close" @click="emit('close')" aria-label="Close">×</button>
</header>
<form class="modal-body" @submit.prevent="handleSubmit">
<!-- Direction (not shown for pure notes) -->
<div v-if="type !== 'in_person'" class="field">
<label class="field-label" for="log-direction">Direction</label>
<select id="log-direction" v-model="form.direction" class="field-select">
<option value="">-- not specified --</option>
<option value="inbound">Inbound (they called me)</option>
<option value="outbound">Outbound (I called them)</option>
</select>
</div>
<div class="field">
<label class="field-label" for="log-subject">Subject (optional)</label>
<input id="log-subject" v-model="form.subject" type="text" class="field-input" />
</div>
<div class="field">
<label class="field-label" for="log-body">
Notes <span class="field-required" aria-hidden="true">*</span>
</label>
<textarea
id="log-body"
v-model="form.body"
class="field-textarea"
rows="5"
required
aria-required="true"
/>
</div>
<div class="field">
<label class="field-label" for="log-date">Date/time</label>
<input id="log-date" v-model="form.logged_at" type="datetime-local" class="field-input" />
</div>
<p v-if="error" class="modal-error" role="alert">{{ error }}</p>
<footer class="modal-footer">
<button type="button" class="btn btn--ghost" @click="emit('close')">Cancel</button>
<button type="submit" class="btn btn--primary" :disabled="saving">
{{ saving ? 'Saving…' : 'Save' }}
</button>
</footer>
</form>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
import { ref, computed, watch, nextTick } from 'vue'
import { useMessagingStore } from '../stores/messaging'
const props = defineProps<{
show: boolean
jobId: number
type: 'call_note' | 'in_person'
}>()
const emit = defineEmits<{
(e: 'close'): void
(e: 'saved'): void
}>()
const store = useMessagingStore()
const dialogEl = ref<HTMLElement | null>(null)
const saving = ref(false)
const error = ref<string | null>(null)
const title = computed(() =>
props.type === 'call_note' ? 'Log a call' : 'Log an in-person note'
)
const form = ref({
direction: '',
subject: '',
body: '',
logged_at: '',
})
// Focus the dialog when it opens; compute localNow fresh each time
watch(() => props.show, async (val) => {
if (val) {
const now = new Date()
const localNow = new Date(now.getTime() - now.getTimezoneOffset() * 60000)
.toISOString()
.slice(0, 16)
error.value = null
form.value = { direction: '', subject: '', body: '', logged_at: localNow }
await nextTick()
dialogEl.value?.focus()
}
})
async function handleSubmit() {
if (!form.value.body.trim()) { error.value = 'Notes are required.'; return }
saving.value = true
error.value = null
const result = await store.createMessage({
job_id: props.jobId,
job_contact_id: null,
type: props.type,
direction: form.value.direction || null,
subject: form.value.subject || null,
body: form.value.body,
from_addr: null,
to_addr: null,
template_id: null,
logged_at: form.value.logged_at || undefined,
})
saving.value = false
if (result) emit('saved')
else error.value = store.error ?? 'Save failed.'
}
</script>
<style scoped>
.modal-backdrop {
position: fixed;
inset: 0;
background: rgba(0,0,0,0.5);
display: flex;
align-items: center;
justify-content: center;
z-index: 200;
}
.modal-dialog {
background: var(--color-surface-raised);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
width: min(480px, 95vw);
max-height: 90vh;
overflow-y: auto;
outline: none;
}
.modal-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: var(--space-4) var(--space-5);
border-bottom: 1px solid var(--color-border-light);
}
.modal-title { font-size: var(--text-lg); font-weight: 600; margin: 0; }
.modal-close {
background: none; border: none; cursor: pointer;
color: var(--color-text-muted); font-size: var(--text-lg);
padding: var(--space-1); border-radius: var(--radius-sm);
min-width: 32px; min-height: 32px;
}
.modal-close:hover { background: var(--color-surface-alt); }
.modal-body { padding: var(--space-4) var(--space-5); display: flex; flex-direction: column; gap: var(--space-4); }
.field { display: flex; flex-direction: column; gap: var(--space-1); }
.field-label { font-size: var(--text-sm); font-weight: 500; color: var(--color-text-muted); }
.field-required { color: var(--app-accent); }
.field-input, .field-select, .field-textarea {
padding: var(--space-2) var(--space-3);
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text);
font-size: var(--text-sm);
font-family: var(--font-body);
width: 100%;
}
.field-input:focus-visible, .field-select:focus-visible, .field-textarea:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.field-textarea { resize: vertical; }
.modal-error { color: var(--app-accent); font-size: var(--text-sm); margin: 0; }
.modal-footer { display: flex; justify-content: flex-end; gap: var(--space-3); padding-top: var(--space-2); }
.btn { padding: var(--space-2) var(--space-4); border-radius: var(--radius-md); font-size: var(--text-sm); font-weight: 500; cursor: pointer; min-height: 40px; }
.btn--primary { background: var(--app-primary); color: var(--color-surface); border: none; }
.btn--primary:hover:not(:disabled) { opacity: 0.9; }
.btn--primary:disabled { opacity: 0.5; cursor: not-allowed; }
.btn--ghost { background: none; border: 1px solid var(--color-border); color: var(--color-text); }
.btn--ghost:hover { background: var(--color-surface-alt); }
</style>


@@ -0,0 +1,289 @@
<!-- web/src/components/MessageTemplateModal.vue -->
<template>
<Teleport to="body">
<div
v-if="show"
class="modal-backdrop"
@click.self="emit('close')"
>
<div
ref="dialogEl"
class="modal-dialog modal-dialog--wide"
role="dialog"
aria-modal="true"
:aria-label="title"
tabindex="-1"
@keydown.esc="emit('close')"
>
<header class="modal-header">
<h2 class="modal-title">{{ title }}</h2>
<button class="modal-close" @click="emit('close')" aria-label="Close">×</button>
</header>
<!-- APPLY MODE -->
<div v-if="mode === 'apply'" class="modal-body">
<div class="tpl-list" role="list" aria-label="Available templates">
<button
v-for="tpl in store.templates"
:key="tpl.id"
class="tpl-item"
:class="{ 'tpl-item--selected': selectedId === tpl.id }"
role="listitem"
@click="selectTemplate(tpl)"
>
<span class="tpl-item__icon" aria-hidden="true">
{{ tpl.is_builtin ? '🔒' : '📝' }}
</span>
<span class="tpl-item__title">{{ tpl.title }}</span>
<span class="tpl-item__cat">{{ tpl.category }}</span>
</button>
</div>
<div v-if="preview" class="tpl-preview">
<p class="tpl-preview__subject" v-if="preview.subject">
<strong>Subject:</strong> <span v-html="highlightTokens(preview.subject)" />
</p>
<pre class="tpl-preview__body" v-html="highlightTokens(preview.body)" />
<div class="tpl-preview__actions">
<button class="btn btn--primary" @click="copyPreview">Copy body</button>
<button class="btn btn--ghost" @click="emit('close')">Cancel</button>
</div>
</div>
<p v-else class="tpl-hint">Select a template to preview it with your job details.</p>
</div>
<!-- CREATE / EDIT MODE -->
<form v-else class="modal-body" @submit.prevent="handleSubmit">
<div class="field">
<label class="field-label" for="tpl-title">Title *</label>
<input id="tpl-title" v-model="form.title" type="text" class="field-input" required aria-required="true" />
</div>
<div class="field">
<label class="field-label" for="tpl-category">Category</label>
<select id="tpl-category" v-model="form.category" class="field-select">
<option value="follow_up">Follow-up</option>
<option value="thank_you">Thank you</option>
<option value="accommodation">Accommodation request</option>
<option value="withdrawal">Withdrawal</option>
<option value="custom">Custom</option>
</select>
</div>
<div class="field">
<label class="field-label" for="tpl-subject">Subject template (optional)</label>
<input id="tpl-subject" v-model="form.subject_template" type="text" class="field-input"
placeholder="e.g. Following up — {{role}} application" />
</div>
<div class="field">
<label class="field-label" for="tpl-body">Body template *</label>
<p class="field-hint" v-pre>Use <code>{{name}}</code>, <code>{{company}}</code>, <code>{{role}}</code>, <code>{{recruiter_name}}</code>, <code>{{date}}</code>, <code>{{accommodation_details}}</code></p>
<textarea id="tpl-body" v-model="form.body_template" class="field-textarea" rows="8"
required aria-required="true" />
</div>
<p v-if="error" class="modal-error" role="alert">{{ error }}</p>
<footer class="modal-footer">
<button type="button" class="btn btn--ghost" @click="emit('close')">Cancel</button>
<button type="submit" class="btn btn--primary" :disabled="store.saving">
{{ store.saving ? 'Saving…' : (mode === 'create' ? 'Create template' : 'Save changes') }}
</button>
</footer>
</form>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
import { ref, computed, watch, nextTick } from 'vue'
import { useMessagingStore, type MessageTemplate } from '../stores/messaging'
const props = defineProps<{
show: boolean
mode: 'apply' | 'create' | 'edit'
jobTokens?: Record<string, string> // { name, company, role, recruiter_name, date }
editTemplate?: MessageTemplate // required when mode='edit'
}>()
const emit = defineEmits<{
(e: 'close'): void
(e: 'saved'): void
(e: 'applied', body: string): void
}>()
const store = useMessagingStore()
const dialogEl = ref<HTMLElement | null>(null)
const selectedId = ref<number | null>(null)
const error = ref<string | null>(null)
const form = ref({
title: '',
category: 'custom',
subject_template: '',
body_template: '',
})
const title = computed(() => ({
apply: 'Use a template',
create: 'Create template',
edit: 'Edit template',
}[props.mode]))
watch(() => props.show, async (val) => {
if (!val) return
error.value = null
selectedId.value = null
if (props.mode === 'edit' && props.editTemplate) {
form.value = {
title: props.editTemplate.title,
category: props.editTemplate.category,
subject_template: props.editTemplate.subject_template ?? '',
body_template: props.editTemplate.body_template,
}
} else {
form.value = { title: '', category: 'custom', subject_template: '', body_template: '' }
}
await nextTick()
dialogEl.value?.focus()
})
function substituteTokens(text: string): string {
const tokens = props.jobTokens ?? {}
return text.replace(/\{\{(\w+)\}\}/g, (_, key) => tokens[key] ?? `{{${key}}}`)
}
function highlightTokens(text: string): string {
// Remaining unresolved tokens are highlighted
const escaped = text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
return escaped.replace(
/\{\{(\w+)\}\}/g,
'<mark class="token-unresolved">{{$1}}</mark>'
)
}
interface PreviewData { subject: string; body: string }
const preview = computed<PreviewData | null>(() => {
if (props.mode !== 'apply' || selectedId.value === null) return null
const tpl = store.templates.find(t => t.id === selectedId.value)
if (!tpl) return null
return {
subject: substituteTokens(tpl.subject_template ?? ''),
body: substituteTokens(tpl.body_template),
}
})
function selectTemplate(tpl: MessageTemplate) {
selectedId.value = tpl.id
}
function copyPreview() {
if (!preview.value) return
navigator.clipboard.writeText(preview.value.body)
emit('applied', preview.value.body)
emit('close')
}
async function handleSubmit() {
error.value = null
if (props.mode === 'create') {
const result = await store.createTemplate({
title: form.value.title,
category: form.value.category,
subject_template: form.value.subject_template || undefined,
body_template: form.value.body_template,
})
if (result) emit('saved')
else error.value = store.error ?? 'Save failed.'
} else if (props.mode === 'edit' && props.editTemplate) {
const result = await store.updateTemplate(props.editTemplate.id, {
title: form.value.title,
category: form.value.category,
subject_template: form.value.subject_template || undefined,
body_template: form.value.body_template,
})
if (result) emit('saved')
else error.value = store.error ?? 'Save failed.'
}
}
</script>
<style scoped>
.modal-backdrop {
position: fixed; inset: 0;
background: rgba(0,0,0,0.5);
display: flex; align-items: center; justify-content: center;
z-index: 200;
}
.modal-dialog {
background: var(--color-surface-raised);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
width: min(560px, 95vw);
max-height: 90vh;
overflow-y: auto;
outline: none;
}
.modal-dialog--wide { width: min(700px, 95vw); }
.modal-header {
display: flex; align-items: center; justify-content: space-between;
padding: var(--space-4) var(--space-5);
border-bottom: 1px solid var(--color-border-light);
}
.modal-title { font-size: var(--text-lg); font-weight: 600; margin: 0; }
.modal-close {
background: none; border: none; cursor: pointer;
color: var(--color-text-muted); font-size: var(--text-lg);
padding: var(--space-1); border-radius: var(--radius-sm);
min-width: 32px; min-height: 32px;
}
.modal-close:hover { background: var(--color-surface-alt); }
.modal-body { padding: var(--space-4) var(--space-5); display: flex; flex-direction: column; gap: var(--space-4); }
.tpl-list { display: flex; flex-direction: column; gap: var(--space-1); max-height: 220px; overflow-y: auto; }
.tpl-item {
display: flex; align-items: center; gap: var(--space-2);
padding: var(--space-2) var(--space-3);
border: 1px solid var(--color-border); border-radius: var(--radius-md);
background: var(--color-surface-alt); cursor: pointer;
text-align: left; width: 100%;
transition: border-color 150ms, background 150ms;
}
.tpl-item:hover { border-color: var(--app-primary); background: var(--app-primary-light); }
.tpl-item--selected { border-color: var(--app-primary); background: var(--app-primary-light); font-weight: 600; }
.tpl-item__title { flex: 1; font-size: var(--text-sm); }
.tpl-item__cat { font-size: var(--text-xs); color: var(--color-text-muted); text-transform: capitalize; }
.tpl-preview { border: 1px solid var(--color-border); border-radius: var(--radius-md); padding: var(--space-4); background: var(--color-surface); }
.tpl-preview__subject { margin: 0 0 var(--space-2); font-size: var(--text-sm); }
.tpl-preview__body {
font-size: var(--text-sm); white-space: pre-wrap; font-family: var(--font-body);
margin: 0 0 var(--space-3); max-height: 200px; overflow-y: auto;
}
.tpl-preview__actions { display: flex; gap: var(--space-2); }
.tpl-hint { color: var(--color-text-muted); font-size: var(--text-sm); margin: 0; }
:global(.token-unresolved) {
background: var(--app-accent-light, #fef3c7);
color: var(--app-accent, #d97706);
border-radius: 2px;
padding: 0 2px;
}
.field { display: flex; flex-direction: column; gap: var(--space-1); }
.field-label { font-size: var(--text-sm); font-weight: 500; color: var(--color-text-muted); }
.field-hint { font-size: var(--text-xs); color: var(--color-text-muted); margin: 0; }
.field-input, .field-select, .field-textarea {
padding: var(--space-2) var(--space-3);
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text); font-size: var(--text-sm); font-family: var(--font-body); width: 100%;
}
.field-input:focus-visible, .field-select:focus-visible, .field-textarea:focus-visible {
outline: 2px solid var(--app-primary); outline-offset: 2px;
}
.field-textarea { resize: vertical; }
.modal-error { color: var(--app-accent); font-size: var(--text-sm); margin: 0; }
.modal-footer { display: flex; justify-content: flex-end; gap: var(--space-3); padding-top: var(--space-2); }
.btn { padding: var(--space-2) var(--space-4); border-radius: var(--radius-md); font-size: var(--text-sm); font-weight: 500; cursor: pointer; min-height: 40px; }
.btn--primary { background: var(--app-primary); color: var(--color-surface); border: none; }
.btn--primary:hover:not(:disabled) { opacity: 0.9; }
.btn--primary:disabled { opacity: 0.5; cursor: not-allowed; }
.btn--ghost { background: none; border: 1px solid var(--color-border); color: var(--color-text); }
.btn--ghost:hover { background: var(--color-surface-alt); }
</style>
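The token handling in this modal is self-contained enough to sketch in isolation. The helpers below mirror `substituteTokens` and `highlightTokens` (same `{{token}}` syntax; the standalone `tokens` parameter replaces the component's `props.jobTokens` and is an assumption of this sketch):

```typescript
// Replace known {{tokens}}; leave unknown ones intact so they can be highlighted.
function substituteTokens(text: string, tokens: Record<string, string>): string {
  return text.replace(/\{\{(\w+)\}\}/g, (_, key) => tokens[key] ?? `{{${key}}}`)
}

// Escape HTML first (& before < and >), then wrap unresolved tokens in a <mark>.
function highlightTokens(text: string): string {
  const escaped = text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
  return escaped.replace(/\{\{(\w+)\}\}/g, '<mark class="token-unresolved">{{$1}}</mark>')
}
```

Escaping before highlighting matters: the `<mark>` wrapper must be the only raw HTML in the output, so any markup in the template body is neutralized first.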


@ -0,0 +1,146 @@
<template>
<Teleport to="body">
<div v-if="show" class="sync-modal__overlay" role="dialog" aria-modal="true"
aria-labelledby="sync-modal-title" @keydown.esc="$emit('cancel')">
<div class="sync-modal">
<h2 id="sync-modal-title" class="sync-modal__title">Replace profile content?</h2>
<div class="sync-modal__comparison">
<div class="sync-modal__col sync-modal__col--before">
<div class="sync-modal__col-label">Current profile</div>
<div class="sync-modal__col-name">{{ currentSummary.name || '(no name)' }}</div>
<div class="sync-modal__col-summary">{{ currentSummary.careerSummary || '(no summary)' }}</div>
<div class="sync-modal__col-role">{{ currentSummary.latestRole || '(no experience)' }}</div>
</div>
<div class="sync-modal__arrow" aria-hidden="true">→</div>
<div class="sync-modal__col sync-modal__col--after">
<div class="sync-modal__col-label">Replacing with</div>
<div class="sync-modal__col-name">{{ sourceSummary.name || '(no name)' }}</div>
<div class="sync-modal__col-summary">{{ sourceSummary.careerSummary || '(no summary)' }}</div>
<div class="sync-modal__col-role">{{ sourceSummary.latestRole || '(no experience)' }}</div>
</div>
</div>
<div v-if="blankFields.length" class="sync-modal__blank-warning">
<strong>Fields that will be blank after import:</strong>
<ul>
<li v-for="f in blankFields" :key="f">{{ f }}</li>
</ul>
<p class="sync-modal__blank-note">You can fill these in after importing.</p>
</div>
<p class="sync-modal__preserve-note">
Your salary, work preferences, and contact details are not affected.
</p>
<div class="sync-modal__actions">
<button class="btn-secondary" @click="$emit('cancel')">Keep current profile</button>
<button class="btn-danger" @click="$emit('confirm')">Replace profile content</button>
</div>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
interface ContentSummary {
name: string
careerSummary: string
latestRole: string
}
defineProps<{
show: boolean
currentSummary: ContentSummary
sourceSummary: ContentSummary
blankFields: string[]
}>()
defineEmits<{
confirm: []
cancel: []
}>()
</script>
<style scoped>
.sync-modal__overlay {
position: fixed; inset: 0; z-index: 1000;
background: rgba(0,0,0,0.5);
display: flex; align-items: center; justify-content: center;
padding: var(--space-4);
}
.sync-modal {
background: var(--color-surface-raised);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg, 0.75rem);
padding: var(--space-6);
max-width: 600px; width: 100%;
max-height: 90vh; overflow-y: auto;
}
.sync-modal__title {
font-size: 1.15rem; font-weight: 700;
margin-bottom: var(--space-5);
color: var(--color-text);
}
.sync-modal__comparison {
display: grid; grid-template-columns: 1fr auto 1fr; gap: var(--space-3);
align-items: start; margin-bottom: var(--space-5);
}
.sync-modal__arrow {
font-size: 1.5rem; color: var(--color-text-muted);
padding-top: var(--space-5);
}
.sync-modal__col {
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
padding: var(--space-3);
}
.sync-modal__col--after { border-color: var(--color-primary); }
.sync-modal__col-label {
font-size: 0.75rem; font-weight: 600; color: var(--color-text-muted);
text-transform: uppercase; letter-spacing: 0.05em;
margin-bottom: var(--space-2);
}
.sync-modal__col-name { font-weight: 600; color: var(--color-text); margin-bottom: var(--space-1); }
.sync-modal__col-summary {
font-size: 0.82rem; color: var(--color-text-muted);
overflow: hidden; display: -webkit-box;
-webkit-line-clamp: 2; -webkit-box-orient: vertical;
margin-bottom: var(--space-1);
}
.sync-modal__col-role { font-size: 0.82rem; color: var(--color-text-muted); font-style: italic; }
.sync-modal__blank-warning {
background: color-mix(in srgb, var(--color-warning, #d97706) 10%, var(--color-surface-alt));
border: 1px solid color-mix(in srgb, var(--color-warning, #d97706) 30%, var(--color-border));
border-radius: var(--radius-md); padding: var(--space-3);
margin-bottom: var(--space-4);
font-size: 0.85rem;
}
.sync-modal__blank-warning ul { margin: var(--space-2) 0 0 var(--space-4); }
.sync-modal__blank-note { margin-top: var(--space-2); color: var(--color-text-muted); }
.sync-modal__preserve-note {
font-size: 0.82rem; color: var(--color-text-muted);
margin-bottom: var(--space-5);
}
.sync-modal__actions {
display: flex; gap: var(--space-3); justify-content: flex-end; flex-wrap: wrap;
}
.btn-danger {
padding: var(--space-2) var(--space-4);
background: var(--color-error, #dc2626);
color: #fff; border: none;
border-radius: var(--radius-md); cursor: pointer;
font-size: var(--font-sm); font-weight: 600;
}
.btn-danger:hover { filter: brightness(1.1); }
.btn-secondary {
padding: var(--space-2) var(--space-4);
background: transparent;
color: var(--color-text);
border: 1px solid var(--color-border);
border-radius: var(--radius-md); cursor: pointer;
font-size: var(--font-sm);
}
.btn-secondary:hover { background: var(--color-surface-alt); }
</style>
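The modal receives `blankFields` ready-made as a prop; a hypothetical helper on the caller's side could derive it from the incoming summary like this (the `ContentSummary` shape matches the component's props, but `computeBlankFields` and the label map are assumptions of this sketch):

```typescript
interface ContentSummary { name: string; careerSummary: string; latestRole: string }

// Human-readable labels for the warning list, keyed by summary field.
const FIELD_LABELS: Record<keyof ContentSummary, string> = {
  name: 'Name', careerSummary: 'Career summary', latestRole: 'Latest role',
}

// Which labelled fields will be blank after import? Whitespace counts as blank.
function computeBlankFields(source: ContentSummary): string[] {
  return (Object.keys(FIELD_LABELS) as (keyof ContentSummary)[])
    .filter(k => source[k].trim() === '')
    .map(k => FIELD_LABELS[k])
}
```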


@ -0,0 +1,160 @@
<template>
<Teleport to="body">
<div v-if="visible" class="welcome-modal-overlay" @click.self="dismiss">
<div
class="welcome-modal"
role="dialog"
aria-modal="true"
aria-labelledby="welcome-modal-title"
>
<span aria-hidden="true" class="welcome-modal__icon">🦅</span>
<h2 id="welcome-modal-title" class="welcome-modal__heading">
Welcome to Peregrine
</h2>
<p class="welcome-modal__desc">
A live demo with realistic job search data. Explore freely; nothing you do here is saved.
</p>
<ul class="welcome-modal__features" aria-label="What to try">
<li>📋 Review &amp; rate matched jobs</li>
<li>✍️ Draft a cover letter with AI</li>
<li>📅 Track your interview pipeline</li>
<li>🎉 See a hired outcome</li>
</ul>
<button class="welcome-modal__explore" @click="dismiss">
Explore the demo
</button>
<div class="welcome-modal__links">
<a
href="https://circuitforge.tech/account"
class="welcome-modal__link welcome-modal__link--primary"
target="_blank"
rel="noopener"
>Get a free key</a>
<a
href="https://git.opensourcesolarpunk.com/Circuit-Forge/peregrine"
class="welcome-modal__link welcome-modal__link--secondary"
target="_blank"
rel="noopener"
>Self-host</a>
</div>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
import { ref } from 'vue'
const LS_KEY = 'peregrine_demo_visited'
const emit = defineEmits<{ dismissed: [] }>()
const visible = ref(!localStorage.getItem(LS_KEY))
function dismiss(): void {
localStorage.setItem(LS_KEY, '1')
visible.value = false
emit('dismissed')
}
</script>
<style scoped>
.welcome-modal-overlay {
position: fixed;
inset: 0;
background: rgba(0, 0, 0, 0.6);
display: flex;
align-items: center;
justify-content: center;
z-index: 1000;
padding: var(--space-4, 16px);
}
.welcome-modal {
background: var(--color-surface-raised, #1e2d45);
border: 1px solid var(--color-border, #2a3a56);
border-radius: var(--radius-lg, 12px);
padding: var(--space-6, 24px);
width: 100%;
max-width: 360px;
box-shadow: 0 12px 40px rgba(0, 0, 0, 0.6);
display: flex;
flex-direction: column;
gap: var(--space-3, 12px);
}
.welcome-modal__icon { font-size: 2rem; }
.welcome-modal__heading {
font-size: 1.15rem;
font-weight: 700;
color: var(--color-text, #eaeff8);
margin: 0;
}
.welcome-modal__desc {
font-size: 0.85rem;
color: var(--color-text-muted, #8898aa);
line-height: 1.5;
margin: 0;
}
.welcome-modal__features {
list-style: none;
padding: 0;
margin: 0;
border-top: 1px solid var(--color-border, #2a3a56);
padding-top: var(--space-3, 12px);
display: flex;
flex-direction: column;
gap: var(--space-2, 8px);
}
.welcome-modal__features li {
font-size: 0.85rem;
color: var(--color-text-muted, #8898aa);
}
.welcome-modal__explore {
width: 100%;
background: var(--app-primary, #2B6CB0);
color: #fff;
border: none;
border-radius: var(--radius-md, 8px);
padding: 10px;
font-size: 0.9rem;
font-weight: 600;
cursor: pointer;
transition: opacity 150ms;
}
.welcome-modal__explore:hover { opacity: 0.85; }
.welcome-modal__links {
display: grid;
grid-template-columns: 1fr 1fr;
gap: var(--space-2, 8px);
}
.welcome-modal__link {
text-align: center;
font-size: 0.8rem;
font-weight: 600;
padding: 6px;
border-radius: var(--radius-sm, 4px);
text-decoration: none;
transition: opacity 150ms;
}
.welcome-modal__link:hover { opacity: 0.85; }
.welcome-modal__link--primary {
border: 1px solid var(--app-primary, #2B6CB0);
color: var(--app-primary-light, #68A8D8);
}
.welcome-modal__link--secondary {
border: 1px solid var(--color-border, #2a3a56);
color: var(--color-text-muted, #8898aa);
}
</style>


@ -0,0 +1,22 @@
import { describe, it, expect } from 'vitest'
import { mount } from '@vue/test-utils'
import DemoBanner from '../DemoBanner.vue'
describe('DemoBanner', () => {
it('renders the demo label', () => {
const w = mount(DemoBanner)
expect(w.text()).toContain('Demo mode')
})
it('renders a free key link', () => {
const w = mount(DemoBanner)
expect(w.find('a.demo-banner__cta--primary').exists()).toBe(true)
expect(w.find('a.demo-banner__cta--primary').text()).toContain('free key')
})
it('renders a self-host link', () => {
const w = mount(DemoBanner)
expect(w.find('a.demo-banner__cta--secondary').exists()).toBe(true)
expect(w.find('a.demo-banner__cta--secondary').text()).toContain('Self-host')
})
})


@ -0,0 +1,28 @@
import { describe, it, expect, beforeEach } from 'vitest'
import { mount } from '@vue/test-utils'
import HintChip from '../HintChip.vue'
beforeEach(() => { localStorage.clear() })
const factory = (viewKey = 'home', message = 'Test hint') =>
mount(HintChip, { props: { viewKey, message } })
describe('HintChip', () => {
it('renders the message', () => {
const w = factory()
expect(w.text()).toContain('Test hint')
})
it('is hidden when localStorage key is already set', () => {
localStorage.setItem('peregrine_hint_home', '1')
const w = factory()
expect(w.find('.hint-chip').exists()).toBe(false)
})
it('hides and sets localStorage when dismiss button is clicked', async () => {
const w = factory()
await w.find('.hint-chip__dismiss').trigger('click')
expect(w.find('.hint-chip').exists()).toBe(false)
expect(localStorage.getItem('peregrine_hint_home')).toBe('1')
})
})


@ -0,0 +1,35 @@
import { describe, it, expect, beforeEach } from 'vitest'
import { mount } from '@vue/test-utils'
import WelcomeModal from '../WelcomeModal.vue'
const LS_KEY = 'peregrine_demo_visited'
beforeEach(() => {
localStorage.clear()
})
describe('WelcomeModal', () => {
it('is visible when localStorage key is absent', () => {
const w = mount(WelcomeModal, { global: { stubs: { Teleport: true } } })
expect(w.find('.welcome-modal').exists()).toBe(true)
})
it('is hidden when localStorage key is set', () => {
localStorage.setItem(LS_KEY, '1')
const w = mount(WelcomeModal, { global: { stubs: { Teleport: true } } })
expect(w.find('.welcome-modal').exists()).toBe(false)
})
it('dismisses and sets localStorage on primary CTA click', async () => {
const w = mount(WelcomeModal, { global: { stubs: { Teleport: true } } })
await w.find('.welcome-modal__explore').trigger('click')
expect(w.find('.welcome-modal').exists()).toBe(false)
expect(localStorage.getItem(LS_KEY)).toBe('1')
})
it('emits dismissed event on close', async () => {
const w = mount(WelcomeModal, { global: { stubs: { Teleport: true } } })
await w.find('.welcome-modal__explore').trigger('click')
expect(w.emitted('dismissed')).toBeTruthy()
})
})


@ -1,6 +1,9 @@
import { showToast } from './useToast'
export type ApiError =
| { kind: 'network'; message: string }
| { kind: 'http'; status: number; detail: string }
| { kind: 'demo-blocked' }
// Strip trailing slash so '/peregrine/' + '/api/...' → '/peregrine/api/...'
const _apiBase = import.meta.env.BASE_URL.replace(/\/$/, '')
@ -12,8 +15,20 @@ export async function useApiFetch<T>(
try {
const res = await fetch(_apiBase + url, opts)
if (!res.ok) {
const detail = await res.text().catch(() => '')
return { data: null, error: { kind: 'http', status: res.status, detail } }
const rawText = await res.text().catch(() => '')
// Demo mode: show the toast here and return a dedicated 'demo-blocked' error
// so callers bail out without needing their own handling
if (res.status === 403) {
try {
const body = JSON.parse(rawText) as { detail?: string }
if (body.detail === 'demo-write-blocked') {
showToast('Demo mode — sign in to save changes')
// Return a truthy error so callers bail early (no optimistic UI update),
// but the toast is already shown so no additional error handling needed.
return { data: null, error: { kind: 'demo-blocked' as const } }
}
} catch { /* not JSON — fall through to normal error */ }
}
return { data: null, error: { kind: 'http', status: res.status, detail: rawText } }
}
const data = await res.json() as T
return { data, error: null }


@ -12,6 +12,9 @@ export const router = createRouter({
{ path: '/apply/:id', component: () => import('../views/ApplyWorkspaceView.vue') },
{ path: '/resumes', component: () => import('../views/ResumesView.vue') },
{ path: '/interviews', component: () => import('../views/InterviewsView.vue') },
{ path: '/messages', component: () => import('../views/MessagingView.vue') },
{ path: '/contacts', redirect: '/messages' },
{ path: '/references', component: () => import('../views/ReferencesView.vue') },
{ path: '/digest', component: () => import('../views/DigestView.vue') },
{ path: '/prep', component: () => import('../views/InterviewPrepView.vue') },
{ path: '/prep/:id', component: () => import('../views/InterviewPrepView.vue') },
@ -26,6 +29,7 @@ export const router = createRouter({
{ path: 'resume', component: () => import('../views/settings/ResumeProfileView.vue') },
{ path: 'search', component: () => import('../views/settings/SearchPrefsView.vue') },
{ path: 'system', component: () => import('../views/settings/SystemSettingsView.vue') },
{ path: 'connections', component: () => import('../views/settings/ConnectionsSettingsView.vue') },
{ path: 'fine-tune', component: () => import('../views/settings/FineTuneView.vue') },
{ path: 'license', component: () => import('../views/settings/LicenseView.vue') },
{ path: 'data', component: () => import('../views/settings/DataView.vue') },


@ -30,6 +30,7 @@ export interface PipelineJob {
offer_at: string | null
hired_at: string | null
survey_at: string | null
hired_feedback: string | null // JSON: { what_helped, factors }
stage_signals: StageSignal[] // undismissed signals, newest first
}

web/src/stores/messaging.ts (new file, 174 lines)

@ -0,0 +1,174 @@
// web/src/stores/messaging.ts
import { ref } from 'vue'
import { defineStore } from 'pinia'
import { useApiFetch } from '../composables/useApi'
export interface Message {
id: number
job_id: number | null
job_contact_id: number | null
type: 'call_note' | 'in_person' | 'email' | 'draft'
direction: 'inbound' | 'outbound' | null
subject: string | null
body: string | null
from_addr: string | null
to_addr: string | null
logged_at: string
approved_at: string | null
template_id: number | null
osprey_call_id: string | null
}
export interface MessageTemplate {
id: number
key: string | null
title: string
category: string
subject_template: string | null
body_template: string
is_builtin: number
is_community: number
community_source: string | null
created_at: string
updated_at: string
}
export const useMessagingStore = defineStore('messaging', () => {
const messages = ref<Message[]>([])
const templates = ref<MessageTemplate[]>([])
const loading = ref(false)
const saving = ref(false)
const error = ref<string | null>(null)
const draftPending = ref<number | null>(null) // message_id of pending draft
async function fetchMessages(jobId: number) {
loading.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<Message[]>(
`/api/messages?job_id=${jobId}`
)
loading.value = false
if (fetchErr) { error.value = 'Could not load messages.'; return }
messages.value = data ?? []
}
async function fetchTemplates() {
const { data, error: fetchErr } = await useApiFetch<MessageTemplate[]>(
'/api/message-templates'
)
if (fetchErr) { error.value = 'Could not load templates.'; return }
templates.value = data ?? []
}
async function createMessage(payload: Omit<Message, 'id' | 'approved_at' | 'osprey_call_id'> & { logged_at?: string }) {
saving.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<Message>(
'/api/messages',
{ method: 'POST', body: JSON.stringify(payload), headers: { 'Content-Type': 'application/json' } }
)
saving.value = false
if (fetchErr || !data) { error.value = 'Failed to save message.'; return null }
messages.value = [data, ...messages.value]
return data
}
async function deleteMessage(id: number) {
const { error: fetchErr } = await useApiFetch(
`/api/messages/${id}`,
{ method: 'DELETE' }
)
if (fetchErr) { error.value = 'Failed to delete message.'; return }
messages.value = messages.value.filter(m => m.id !== id)
}
async function createTemplate(payload: Pick<MessageTemplate, 'title' | 'category' | 'body_template'> & { subject_template?: string }) {
saving.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<MessageTemplate>(
'/api/message-templates',
{ method: 'POST', body: JSON.stringify(payload), headers: { 'Content-Type': 'application/json' } }
)
saving.value = false
if (fetchErr || !data) { error.value = 'Failed to create template.'; return null }
templates.value = [...templates.value, data]
return data
}
async function updateTemplate(id: number, payload: Partial<Pick<MessageTemplate, 'title' | 'category' | 'subject_template' | 'body_template'>>) {
saving.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<MessageTemplate>(
`/api/message-templates/${id}`,
{ method: 'PUT', body: JSON.stringify(payload), headers: { 'Content-Type': 'application/json' } }
)
saving.value = false
if (fetchErr || !data) { error.value = 'Failed to update template.'; return null }
templates.value = templates.value.map(t => t.id === id ? data : t)
return data
}
async function deleteTemplate(id: number) {
const { error: fetchErr } = await useApiFetch(
`/api/message-templates/${id}`,
{ method: 'DELETE' }
)
if (fetchErr) { error.value = 'Failed to delete template.'; return }
templates.value = templates.value.filter(t => t.id !== id)
}
async function requestDraft(contactId: number) {
loading.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<{ message_id: number }>(
`/api/contacts/${contactId}/draft-reply`,
{ method: 'POST', headers: { 'Content-Type': 'application/json' } }
)
loading.value = false
if (fetchErr || !data) {
error.value = 'Could not generate draft. Check LLM settings.'
return null
}
draftPending.value = data.message_id
return data.message_id
}
async function updateMessageBody(id: number, body: string) {
const { data, error: fetchErr } = await useApiFetch<Message>(
`/api/messages/${id}`,
{ method: 'PUT', body: JSON.stringify({ body }), headers: { 'Content-Type': 'application/json' } }
)
if (fetchErr || !data) { error.value = 'Failed to save edits.'; return null }
messages.value = messages.value.map(m => m.id === id ? { ...m, body: data.body } : m)
return data
}
async function approveDraft(messageId: number): Promise<string | null> {
const { data, error: fetchErr } = await useApiFetch<{ body: string; approved_at: string }>(
`/api/messages/${messageId}/approve`,
{ method: 'POST' }
)
if (fetchErr || !data) { error.value = 'Approve failed.'; return null }
messages.value = messages.value.map(m =>
m.id === messageId ? { ...m, approved_at: data.approved_at } : m
)
draftPending.value = null
return data.body
}
function clear() {
messages.value = []
templates.value = []
loading.value = false
saving.value = false
error.value = null
draftPending.value = null
}
return {
messages, templates, loading, saving, error, draftPending,
fetchMessages, fetchTemplates, createMessage, deleteMessage,
createTemplate, updateTemplate, deleteTemplate,
requestDraft, approveDraft, updateMessageBody, clear,
}
})


@ -22,6 +22,11 @@ export interface Contact {
received_at: string | null
}
export interface QAItem {
question: string
answer: string
}
export interface TaskStatus {
status: 'queued' | 'running' | 'completed' | 'failed' | 'none' | null
stage: string | null
@ -43,6 +48,8 @@ export const usePrepStore = defineStore('prep', () => {
const research = ref<ResearchBrief | null>(null)
const contacts = ref<Contact[]>([])
const contactsError = ref<string | null>(null)
const qaItems = ref<QAItem[]>([])
const qaError = ref<string | null>(null)
const taskStatus = ref<TaskStatus>({ status: null, stage: null, message: null })
const fullJob = ref<FullJobDetail | null>(null)
const loading = ref(false)
@ -64,6 +71,8 @@ export const usePrepStore = defineStore('prep', () => {
research.value = null
contacts.value = []
contactsError.value = null
qaItems.value = []
qaError.value = null
taskStatus.value = { status: null, stage: null, message: null }
fullJob.value = null
error.value = null
@ -72,9 +81,10 @@ export const usePrepStore = defineStore('prep', () => {
loading.value = true
try {
const [researchResult, contactsResult, taskResult, jobResult] = await Promise.all([
const [researchResult, contactsResult, qaResult, taskResult, jobResult] = await Promise.all([
useApiFetch<ResearchBrief>(`/api/jobs/${jobId}/research`),
useApiFetch<Contact[]>(`/api/jobs/${jobId}/contacts`),
useApiFetch<QAItem[]>(`/api/jobs/${jobId}/qa`),
useApiFetch<TaskStatus>(`/api/jobs/${jobId}/research/task`),
useApiFetch<FullJobDetail>(`/api/jobs/${jobId}`),
])
@ -100,6 +110,15 @@ export const usePrepStore = defineStore('prep', () => {
contactsError.value = null
}
// Q&A failure is non-fatal — degrade the Practice Q&A tab only
if (qaResult.error && !(qaResult.error.kind === 'http' && qaResult.error.status === 404)) {
qaError.value = 'Could not load Q&A history.'
qaItems.value = []
} else {
qaItems.value = qaResult.data ?? []
qaError.value = null
}
taskStatus.value = taskResult.data ?? { status: null, stage: null, message: null }
fullJob.value = jobResult.data ?? null
@ -144,11 +163,23 @@ export const usePrepStore = defineStore('prep', () => {
}, 3000)
}
async function fetchContacts(jobId: number) {
const { data, error: fetchError } = await useApiFetch<Contact[]>(`/api/jobs/${jobId}/contacts`)
if (fetchError) {
contactsError.value = 'Could not load email history.'
} else {
contacts.value = data ?? []
contactsError.value = null
}
}
function clear() {
_clearInterval()
research.value = null
contacts.value = []
contactsError.value = null
qaItems.value = []
qaError.value = null
taskStatus.value = { status: null, stage: null, message: null }
fullJob.value = null
loading.value = false
@ -160,12 +191,15 @@ export const usePrepStore = defineStore('prep', () => {
research,
contacts,
contactsError,
qaItems,
qaError,
taskStatus,
fullJob,
loading,
error,
currentJobId,
fetchFor,
fetchContacts,
generateResearch,
pollTask,
clear,


@ -3,19 +3,21 @@ import { ref, computed } from 'vue'
import { useApiFetch } from '../composables/useApi'
export interface Job {
id: number
title: string
company: string
url: string
source: string | null
location: string | null
is_remote: boolean
salary: string | null
description: string | null
match_score: number | null
keyword_gaps: string | null // JSON-encoded string[]
date_found: string
status: string
id: number
title: string
company: string
url: string
source: string | null
location: string | null
is_remote: boolean
salary: string | null
description: string | null
match_score: number | null
keyword_gaps: string | null // JSON-encoded string[]
date_found: string
date_posted: string | null
shadow_score: 'shadow' | 'stale' | null
status: string
}
interface UndoEntry {


@ -8,6 +8,12 @@ export interface WorkEntry {
industry: string; responsibilities: string; skills: string[]
}
export interface EducationEntry {
id: string
institution: string; degree: string; field: string
start_date: string; end_date: string
}
export const useResumeStore = defineStore('settings/resume', () => {
const hasResume = ref(false)
const loading = ref(false)
@ -31,6 +37,16 @@ export const useResumeStore = defineStore('settings/resume', () => {
const veteran_status = ref(''); const disability = ref('')
// Keywords
const skills = ref<string[]>([]); const domains = ref<string[]>([]); const keywords = ref<string[]>([])
// Extended profile fields
const career_summary = ref('')
const education = ref<EducationEntry[]>([])
const achievements = ref<string[]>([])
const lastSynced = ref<string | null>(null)
// LLM suggestions (pending, not yet accepted)
const skillSuggestions = ref<string[]>([])
const domainSuggestions = ref<string[]>([])
const keywordSuggestions = ref<string[]>([])
const suggestingField = ref<'skills' | 'domains' | 'keywords' | null>(null)
function syncFromProfile(p: { name: string; email: string; phone: string; linkedin_url: string }) {
name.value = p.name; email.value = p.email
@ -64,6 +80,9 @@ export const useResumeStore = defineStore('settings/resume', () => {
skills.value = (data.skills as string[]) ?? []
domains.value = (data.domains as string[]) ?? []
keywords.value = (data.keywords as string[]) ?? []
career_summary.value = String(data.career_summary ?? '')
education.value = ((data.education as Omit<EducationEntry, 'id'>[]) ?? []).map(e => ({ ...e, id: crypto.randomUUID() }))
achievements.value = (data.achievements as string[]) ?? []
}
async function save() {
@ -79,12 +98,19 @@ export const useResumeStore = defineStore('settings/resume', () => {
gender: gender.value, pronouns: pronouns.value, ethnicity: ethnicity.value,
veteran_status: veteran_status.value, disability: disability.value,
skills: skills.value, domains: domains.value, keywords: keywords.value,
career_summary: career_summary.value,
education: education.value.map(({ id: _id, ...e }) => e),
achievements: achievements.value,
}
const { error } = await useApiFetch('/api/settings/resume', {
method: 'PUT', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(body),
})
saving.value = false
if (error) saveError.value = 'Save failed — please try again.'
if (error) {
saveError.value = 'Save failed — please try again.'
} else {
lastSynced.value = new Date().toISOString()
}
}
async function createBlank() {
@ -100,6 +126,40 @@ export const useResumeStore = defineStore('settings/resume', () => {
experience.value.splice(idx, 1)
}
function addEducation() {
education.value.push({
id: crypto.randomUUID(), institution: '', degree: '', field: '', start_date: '', end_date: ''
})
}
function removeEducation(idx: number) {
education.value.splice(idx, 1)
}
async function suggestTags(field: 'skills' | 'domains' | 'keywords') {
suggestingField.value = field
const current = field === 'skills' ? skills.value : field === 'domains' ? domains.value : keywords.value
const { data } = await useApiFetch<{ suggestions: string[] }>('/api/settings/resume/suggest-tags', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ type: field, current }),
})
suggestingField.value = null
if (!data?.suggestions) return
const existing = field === 'skills' ? skills.value : field === 'domains' ? domains.value : keywords.value
const fresh = data.suggestions.filter(s => !existing.includes(s))
if (field === 'skills') skillSuggestions.value = fresh
else if (field === 'domains') domainSuggestions.value = fresh
else keywordSuggestions.value = fresh
}
function acceptTagSuggestion(field: 'skills' | 'domains' | 'keywords', value: string) {
addTag(field, value)
if (field === 'skills') skillSuggestions.value = skillSuggestions.value.filter(s => s !== value)
else if (field === 'domains') domainSuggestions.value = domainSuggestions.value.filter(s => s !== value)
else keywordSuggestions.value = keywordSuggestions.value.filter(s => s !== value)
}
function addTag(field: 'skills' | 'domains' | 'keywords', value: string) {
const arr = field === 'skills' ? skills.value : field === 'domains' ? domains.value : keywords.value
const trimmed = value.trim()
@@ -119,7 +179,9 @@ export const useResumeStore = defineStore('settings/resume', () => {
experience, salary_min, salary_max, notice_period, remote, relocation, assessment, background_check,
gender, pronouns, ethnicity, veteran_status, disability,
skills, domains, keywords,
skillSuggestions, domainSuggestions, keywordSuggestions, suggestingField,
career_summary, education, achievements, lastSynced,
syncFromProfile, load, save, createBlank,
addExperience, removeExperience, addEducation, removeEducation, addTag, removeTag, suggestTags, acceptTagSuggestion,
}
})
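The suggest-tags flow above filters the LLM's suggestions against tags the user already has, and accepting a suggestion moves it from the suggestion list into the tag list. A minimal standalone sketch of that lifecycle (`freshSuggestions` and `acceptSuggestion` are illustrative names; the store does this inline in `suggestTags` and `acceptTagSuggestion`):

```typescript
// Keep only suggestions the user doesn't already have as tags.
export function freshSuggestions(existing: string[], incoming: string[]): string[] {
  return incoming.filter(s => !existing.includes(s))
}

// Accepting a suggestion adds it to the tag list (at most once) and
// removes it from the remaining suggestions.
export function acceptSuggestion(tags: string[], suggestions: string[], value: string) {
  return {
    tags: tags.includes(value) ? tags : [...tags, value],
    suggestions: suggestions.filter(s => s !== value),
  }
}
```

Keeping these pure (returning new arrays rather than mutating) matches how the store reassigns `skillSuggestions.value` wholesale after each filter.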


@@ -28,14 +28,33 @@ export interface SurveyResponse {
created_at: string | null
}
interface TaskStatus {
status: 'queued' | 'running' | 'completed' | 'failed' | 'none' | null
stage: string | null
result: { output: string; source: string } | null
message: string | null
}
export const useSurveyStore = defineStore('survey', () => {
const analysis = ref<SurveyAnalysis | null>(null)
const history = ref<SurveyResponse[]>([])
const loading = ref(false)
const saving = ref(false)
const error = ref<string | null>(null)
const taskStatus = ref<TaskStatus>({ status: null, stage: null, result: null, message: null })
const visionAvailable = ref(false)
const currentJobId = ref<number | null>(null)
// Pending analyze payload held across the poll lifecycle so rawInput/mode survive
const _pendingPayload = ref<{ text?: string; image_b64?: string; mode: 'quick' | 'detailed' } | null>(null)
let pollInterval: ReturnType<typeof setInterval> | null = null
function _clearInterval() {
if (pollInterval !== null) {
clearInterval(pollInterval)
pollInterval = null
}
}
async function fetchFor(jobId: number) {
if (jobId !== currentJobId.value) {
@@ -43,6 +62,7 @@ export const useSurveyStore = defineStore('survey', () => {
history.value = []
error.value = null
visionAvailable.value = false
taskStatus.value = { status: null, stage: null, result: null, message: null }
currentJobId.value = jobId
}
@@ -69,23 +89,55 @@ export const useSurveyStore = defineStore('survey', () => {
jobId: number,
payload: { text?: string; image_b64?: string; mode: 'quick' | 'detailed' }
) {
_clearInterval()
loading.value = true
error.value = null
_pendingPayload.value = payload
const { data, error: fetchError } = await useApiFetch<{ task_id: number; is_new: boolean }>(
`/api/jobs/${jobId}/survey/analyze`,
{ method: 'POST', body: JSON.stringify(payload) }
)
if (fetchError || !data) {
loading.value = false
error.value = 'Failed to start analysis. Please try again.'
return
}
// Silently attach to the existing task if is_new=false — same task_id, same poll
taskStatus.value = { status: 'queued', stage: null, result: null, message: null }
pollTask(jobId, data.task_id)
}
function pollTask(jobId: number, taskId: number) {
_clearInterval()
pollInterval = setInterval(async () => {
const { data } = await useApiFetch<TaskStatus>(
`/api/jobs/${jobId}/survey/analyze/task?task_id=${taskId}`
)
if (!data) return
taskStatus.value = data
if (data.status === 'completed' || data.status === 'failed') {
_clearInterval()
loading.value = false
if (data.status === 'completed' && data.result) {
const payload = _pendingPayload.value
analysis.value = {
output: data.result.output,
source: isValidSource(data.result.source) ? data.result.source : 'text_paste',
mode: payload?.mode ?? 'quick',
rawInput: payload?.text ?? null,
}
} else if (data.status === 'failed') {
error.value = data.message ?? 'Analysis failed. Please try again.'
}
_pendingPayload.value = null
}
}, 3000)
}
async function saveResponse(
@@ -96,12 +148,12 @@ export const useSurveyStore = defineStore('survey', () => {
saving.value = true
error.value = null
const body = {
survey_name: args.surveyName || undefined,
mode: analysis.value.mode,
source: analysis.value.source,
raw_input: analysis.value.rawInput,
image_b64: args.image_b64,
llm_output: analysis.value.output,
reported_score: args.reportedScore || undefined,
}
const { data, error: fetchError } = await useApiFetch<{ id: number }>(
@@ -113,32 +165,34 @@ export const useSurveyStore = defineStore('survey', () => {
error.value = 'Save failed. Your analysis is preserved — try again.'
return
}
// Prepend the saved response to history
const now = new Date().toISOString()
const saved: SurveyResponse = {
id: data.id,
survey_name: args.surveyName || null,
mode: analysis.value.mode,
source: analysis.value.source,
raw_input: analysis.value.rawInput,
image_path: null,
llm_output: analysis.value.output,
reported_score: args.reportedScore || null,
received_at: now,
created_at: now,
}
history.value = [saved, ...history.value]
analysis.value = null
}
function clear() {
_clearInterval()
analysis.value = null
history.value = []
loading.value = false
saving.value = false
error.value = null
taskStatus.value = { status: null, stage: null, result: null, message: null }
visionAvailable.value = false
currentJobId.value = null
_pendingPayload.value = null
}
return {
@@ -147,6 +201,7 @@ export const useSurveyStore = defineStore('survey', () => {
loading,
saving,
error,
taskStatus,
visionAvailable,
currentJobId,
fetchFor,

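The analyze flow above now starts a background task and polls its status every 3 seconds until it reaches `completed` or `failed`. The same loop can be sketched as a promise-based helper; `fetchStatus` here is an illustrative callback standing in for the `useApiFetch` call, and the real store uses `setInterval` so the poll can be cancelled from `clear()`:

```typescript
// Poll a task's status until it reaches a terminal state, then resolve
// with that final status. An illustrative sketch, not the store's code.
export async function pollUntilDone<T extends { status: string | null }>(
  fetchStatus: () => Promise<T>,
  intervalMs = 3000,
): Promise<T> {
  for (;;) {
    const status = await fetchStatus()
    if (status.status === 'completed' || status.status === 'failed') return status
    // Not terminal yet — wait one interval before the next fetch.
    await new Promise(resolve => setTimeout(resolve, intervalMs))
  }
}
```

The interval form used in the store trades this linear control flow for an externally cancellable handle, which is why `clear()` and each new `analyze()` call `_clearInterval()` first.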

@@ -1,6 +1,11 @@
<template>
<!-- Mobile: full-width list -->
<div v-if="isMobile" class="apply-list">
<HintChip
v-if="config.isDemo"
view-key="apply"
message="The Spotify cover letter is ready — open it to see how AI drafts from your resume"
/>
<header class="apply-list__header">
<h1 class="apply-list__title">Apply</h1>
<p class="apply-list__subtitle">Approved jobs ready for applications</p>
@@ -50,6 +55,11 @@
<div v-else class="apply-split" :class="{ 'has-selection': selectedJobId !== null }" ref="splitEl">
<!-- Left: narrow job list -->
<div class="apply-split__list">
<HintChip
v-if="config.isDemo"
view-key="apply"
message="The Spotify cover letter is ready — open it to see how AI drafts from your resume"
/>
<div class="split-list__header">
<h1 class="split-list__title">Apply</h1>
<span v-if="coverLetterCount >= 5" class="marathon-badge" title="You're on a roll!">
@@ -124,6 +134,10 @@ import { ref, onMounted, onUnmounted } from 'vue'
import { RouterLink } from 'vue-router'
import { useApiFetch } from '../composables/useApi'
import ApplyWorkspace from '../components/ApplyWorkspace.vue'
import HintChip from '../components/HintChip.vue'
import { useAppConfigStore } from '../stores/appConfig'
const config = useAppConfigStore()
// Responsive


@@ -0,0 +1,342 @@
<script setup lang="ts">
import { ref, computed, onMounted } from 'vue'
import { useApiFetch } from '../composables/useApi'
import HintChip from '../components/HintChip.vue'
import { useAppConfigStore } from '../stores/appConfig'
const config = useAppConfigStore()
interface Contact {
id: number
job_id: number
direction: 'inbound' | 'outbound'
subject: string | null
from_addr: string | null
to_addr: string | null
received_at: string | null
stage_signal: string | null
job_title: string | null
job_company: string | null
}
const contacts = ref<Contact[]>([])
const total = ref(0)
const loading = ref(false)
const error = ref<string | null>(null)
const search = ref('')
const direction = ref<'all' | 'inbound' | 'outbound'>('all')
const searchInput = ref('')
let debounceTimer: ReturnType<typeof setTimeout> | null = null
async function fetchContacts() {
loading.value = true
error.value = null
const params = new URLSearchParams({ limit: '100' })
if (direction.value !== 'all') params.set('direction', direction.value)
if (search.value) params.set('search', search.value)
const { data, error: fetchErr } = await useApiFetch<{ total: number; contacts: Contact[] }>(
`/api/contacts?${params}`
)
loading.value = false
if (fetchErr || !data) {
error.value = 'Failed to load contacts.'
return
}
contacts.value = data.contacts
total.value = data.total
}
function onSearchInput() {
if (debounceTimer) clearTimeout(debounceTimer)
debounceTimer = setTimeout(() => {
search.value = searchInput.value
fetchContacts()
}, 300)
}
function onDirectionChange() {
fetchContacts()
}
function formatDate(iso: string | null): string {
if (!iso) return '—'
return new Date(iso).toLocaleDateString([], { month: 'short', day: 'numeric', year: 'numeric' })
}
function displayAddr(contact: Contact): string {
return contact.direction === 'inbound'
? contact.from_addr ?? '—'
: contact.to_addr ?? '—'
}
const signalLabel: Record<string, string> = {
interview_scheduled: '📅 Interview',
offer_received: '🟢 Offer',
rejected: '✖ Rejected',
positive_response: '✅ Positive',
survey_received: '📋 Survey',
}
onMounted(fetchContacts)
</script>
<template>
<div class="contacts-view">
<HintChip
v-if="config.isDemo"
view-key="contacts"
message="Peregrine logs every recruiter email automatically — no manual entry needed"
/>
<header class="contacts-header">
<h1 class="contacts-title">Contacts</h1>
<span class="contacts-count" v-if="total > 0">{{ total }} total</span>
</header>
<div class="contacts-toolbar">
<input
v-model="searchInput"
class="contacts-search"
type="search"
placeholder="Search name, email, or subject…"
aria-label="Search contacts"
@input="onSearchInput"
/>
<div class="contacts-filter" role="group" aria-label="Filter by direction">
<button
v-for="opt in (['all', 'inbound', 'outbound'] as const)"
:key="opt"
class="filter-btn"
:class="{ 'filter-btn--active': direction === opt }"
@click="direction = opt; onDirectionChange()"
>{{ opt === 'all' ? 'All' : opt === 'inbound' ? 'Inbound' : 'Outbound' }}</button>
</div>
</div>
<div v-if="loading" class="contacts-empty">Loading…</div>
<div v-else-if="error" class="contacts-empty contacts-empty--error">{{ error }}</div>
<div v-else-if="contacts.length === 0" class="contacts-empty">
No contacts found{{ search ? ' for that search' : '' }}.
</div>
<div v-else class="contacts-table-wrap">
<table class="contacts-table" aria-label="Contacts">
<thead>
<tr>
<th>Contact</th>
<th>Subject</th>
<th>Job</th>
<th>Signal</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr
v-for="c in contacts"
:key="c.id"
class="contacts-row"
:class="{ 'contacts-row--inbound': c.direction === 'inbound' }"
>
<td class="contacts-cell contacts-cell--addr">
<span class="dir-chip" :class="`dir-chip--${c.direction}`">
{{ c.direction === 'inbound' ? '↓' : '↑' }}
</span>
{{ displayAddr(c) }}
</td>
<td class="contacts-cell contacts-cell--subject">
{{ c.subject ? c.subject.slice(0, 60) + (c.subject.length > 60 ? '…' : '') : '—' }}
</td>
<td class="contacts-cell contacts-cell--job">
<span v-if="c.job_title">
{{ c.job_title }}<span v-if="c.job_company" class="job-company"> · {{ c.job_company }}</span>
</span>
<span v-else class="text-muted">—</span>
</td>
<td class="contacts-cell contacts-cell--signal">
<span v-if="c.stage_signal && signalLabel[c.stage_signal]" class="signal-chip">
{{ signalLabel[c.stage_signal] }}
</span>
</td>
<td class="contacts-cell contacts-cell--date">{{ formatDate(c.received_at) }}</td>
</tr>
</tbody>
</table>
</div>
</div>
</template>
<style scoped>
.contacts-view {
padding: var(--space-6);
max-width: 1000px;
}
.contacts-header {
display: flex;
align-items: baseline;
gap: var(--space-3);
margin-bottom: var(--space-5);
}
.contacts-title {
font-size: var(--text-2xl);
font-weight: 700;
color: var(--color-text);
margin: 0;
}
.contacts-count {
font-size: var(--text-sm);
color: var(--color-text-muted);
}
.contacts-toolbar {
display: flex;
gap: var(--space-3);
align-items: center;
margin-bottom: var(--space-4);
flex-wrap: wrap;
}
.contacts-search {
flex: 1;
min-width: 200px;
padding: var(--space-2) var(--space-3);
border: 1px solid var(--color-border);
border-radius: 8px;
background: var(--color-surface);
color: var(--color-text);
font-size: var(--text-sm);
}
.contacts-search:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.contacts-filter {
display: flex;
gap: 4px;
}
.filter-btn {
padding: var(--space-1) var(--space-3);
border: 1px solid var(--color-border);
border-radius: 6px;
background: var(--color-surface);
color: var(--color-text-muted);
font-size: var(--text-sm);
cursor: pointer;
}
.filter-btn--active {
background: var(--app-primary-light);
color: var(--app-primary);
border-color: var(--app-primary);
font-weight: 600;
}
.contacts-empty {
color: var(--color-text-muted);
font-size: var(--text-sm);
padding: var(--space-8) 0;
text-align: center;
}
.contacts-empty--error {
color: var(--color-error, #c0392b);
}
.contacts-table-wrap {
overflow-x: auto;
}
.contacts-table {
width: 100%;
border-collapse: collapse;
font-size: var(--text-sm);
}
.contacts-table th {
text-align: left;
padding: var(--space-2) var(--space-3);
font-size: var(--text-xs);
font-weight: 600;
color: var(--color-text-muted);
text-transform: uppercase;
letter-spacing: 0.04em;
border-bottom: 1px solid var(--color-border);
}
.contacts-row {
border-bottom: 1px solid var(--color-border);
}
.contacts-row:hover {
background: var(--color-hover);
}
.contacts-cell {
padding: var(--space-3);
vertical-align: top;
color: var(--color-text);
}
.contacts-cell--addr {
white-space: nowrap;
font-size: var(--text-xs);
font-family: var(--font-mono);
display: flex;
align-items: center;
gap: var(--space-2);
}
.contacts-cell--subject {
color: var(--color-text-muted);
}
.contacts-cell--job {
font-size: var(--text-xs);
}
.job-company {
color: var(--color-text-muted);
}
.contacts-cell--date {
white-space: nowrap;
color: var(--color-text-muted);
font-size: var(--text-xs);
}
.dir-chip {
display: inline-flex;
align-items: center;
justify-content: center;
width: 18px;
height: 18px;
border-radius: 4px;
font-size: 10px;
font-weight: 700;
flex-shrink: 0;
}
.dir-chip--inbound {
background: rgba(39, 174, 96, 0.15);
color: var(--color-success);
}
.dir-chip--outbound {
background: var(--app-primary-light);
color: var(--app-primary);
}
.signal-chip {
font-size: var(--text-xs);
white-space: nowrap;
}
.text-muted {
color: var(--color-text-muted);
}
</style>
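The subject cell in the table above clamps long subjects to 60 characters and appends an ellipsis, while missing values render as an em dash. A standalone sketch of that truncation (`truncate` is an illustrative name; the component does this inline in the template):

```typescript
// Clamp a nullable string to `max` characters, appending an ellipsis
// when it was cut, and showing an em dash for missing values.
export function truncate(text: string | null, max = 60): string {
  if (!text) return '—'
  return text.slice(0, max) + (text.length > max ? '…' : '')
}
```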


@@ -1,5 +1,10 @@
<template>
<div class="home">
<HintChip
v-if="config.isDemo"
view-key="home"
message="Start in Job Review — 12 jobs are waiting for your verdict"
/>
<!-- Header -->
<header class="home__header">
<div>
@@ -371,6 +376,10 @@ import { RouterLink } from 'vue-router'
import { useJobsStore } from '../stores/jobs'
import { useApiFetch } from '../composables/useApi'
import WorkflowButton from '../components/WorkflowButton.vue'
import HintChip from '../components/HintChip.vue'
import { useAppConfigStore } from '../stores/appConfig'
const config = useAppConfigStore()
const store = useJobsStore()
