Compare commits

...

39 commits

Author SHA1 Message Date
63334f5278 feat: messaging tab — messages, templates, draft reply (#74)
Some checks failed
CI / Backend (Python) (push) Failing after 1m48s
CI / Frontend (Vue) (push) Failing after 21s
Mirror / mirror (push) Failing after 7s
Merges feature/messaging-tab into main.

Features:
- Migration 008: messages + message_templates tables with 4 built-in templates
- API endpoints: CRUD for messages and templates, draft-reply (BYOK tier gate), approve
- PUT /api/messages/{id} for draft body persistence
- Pinia store (messaging.ts) with full action set
- MessageLogModal: log calls and in-person meetings with backdated timestamps
- MessageTemplateModal: apply (with token substitution + highlight), create, edit
- MessagingView: two-panel job list + UNION timeline (contacts + messages), Osprey easter egg
- Router: /messages route, /contacts redirect, nav renamed Messages
- Integration test suite (8 tests, 766 total passing)
- CRITICAL fix: _get_effective_tier() no longer trusts X-CF-Tier client header
2026-04-20 20:26:41 -07:00
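The template "apply (with token substitution + highlight)" step above can be sketched as a small substitution pass; the `{{token}}` syntax and the helper name are assumptions for illustration, not taken from the commit:

```python
import re

def substitute_tokens(template: str, values: dict[str, str]) -> str:
    """Replace {{token}} placeholders with known values; unknown tokens are
    left intact so the UI can highlight them for manual fill-in."""
    def repl(m: re.Match) -> str:
        return values.get(m.group(1), m.group(0))
    return re.sub(r"\{\{(\w+)\}\}", repl, template)
```

Leaving unknown tokens in place (rather than blanking them) is what makes the highlight-for-review behavior possible.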
b1e92b0e52 feat(docker): add /peregrine/ base-path routing in nginx
Adds location blocks for /peregrine/assets/ and /peregrine/ so the SPA
works correctly when accessed via a Caddy prefix that does not strip the
path (e.g. direct host access without reverse proxy stripping).
2026-04-20 20:26:31 -07:00
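One plausible shape for the added location blocks is below; the document root, filenames, and fallback are illustrative assumptions, not copied from the commit:

```nginx
# Illustrative sketch — paths are assumptions, not the commit's actual config.
location /peregrine/assets/ {
    alias /usr/share/nginx/html/assets/;
}
location /peregrine/ {
    alias /usr/share/nginx/html/;
    # Unknown paths fall back to the SPA entry point so client-side routing
    # still works when the /peregrine/ prefix is not stripped upstream.
    try_files $uri $uri/ /peregrine/index.html;
}
```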
91e2faf5d0 fix: tier bypass, draft body persistence, canDraftLlm cleanup, limit cap
- CRITICAL: Remove X-CF-Tier header trust from _get_effective_tier; use
  Heimdall in cloud mode and APP_TIER env var in single-tenant only
- HIGH: Add update_message_body helper + PUT /api/messages/{id} endpoint;
  updateMessageBody store action; approveDraft now persists edits to DB
  before calling approve so history always shows the final approved text
- Cleanup: Remove dead canDraftLlm ref, checkLlmAvailable function, and
  v-else-if Enable LLM drafts link; show Draft reply button unconditionally
- MEDIUM: Cap GET /api/messages limit param with Query(ge=1, le=1000)
- Test: Update test_draft_without_llm_returns_402 to patch effective_tier
  instead of sending X-CF-Tier header
2026-04-20 17:19:17 -07:00
6812e3f9ef feat: /messages route + /contacts redirect + nav rename (#74) 2026-04-20 13:04:27 -07:00
899cd3604b feat: MessagingView two-panel layout + draft approval + Osprey easter egg (#74) 2026-04-20 13:02:24 -07:00
aa09b20e7e feat: MessageTemplateModal component (apply/create/edit modes) (#74) 2026-04-20 12:58:00 -07:00
b77ec81cc6 fix: thread logged_at through message stack; Esc handler and localNow fixes
- scripts/messaging.py: add logged_at param to create_message; use provided value or fall back to _now_utc()
- dev-api.py: add logged_at: Optional[str] = None to MessageCreateBody
- web/src/stores/messaging.ts: remove logged_at from Omit, add as optional intersection so callers can pass it through
- web/src/components/MessageLogModal.vue: pass logged_at in handleSubmit payload; move @keydown.esc from backdrop to modal-dialog (which holds focus); compute localNow fresh inside watch so it reflects actual open time
2026-04-20 12:55:41 -07:00
8df3297ab6 feat: MessageLogModal component (#74) 2026-04-20 12:52:19 -07:00
222eb4a088 fix: messaging store error handling and Content-Type headers 2026-04-20 12:50:51 -07:00
47a40c9e36 feat: messaging Pinia store (#74) 2026-04-20 12:48:15 -07:00
dfcc264aba test: use db.add_contact helper in integration test fixture
Replace raw sqlite3 INSERT in test_draft_without_llm_returns_402 with
add_contact() so the fixture stays in sync with schema changes
automatically.
2026-04-20 12:45:47 -07:00
d3dfd015bf feat(cloud): add CF_APP_NAME=peregrine for coordinator pipeline attribution
Allocations from peregrine cloud containers were showing pipeline=null
in cf-orch analytics. Adding CF_APP_NAME to both app and api service
blocks so LLMRouter passes it as the pipeline tag on each allocation.
2026-04-20 12:43:05 -07:00
e11750e0e6 test: messaging HTTP integration tests (#74) 2026-04-20 12:41:45 -07:00
715a8aa33e feat: LLM reply draft, tiers BYOK gate, and messaging API endpoints (#74) 2026-04-20 12:36:16 -07:00
091834f1ae test: add missing update_template KeyError test (#74) 2026-04-20 12:32:35 -07:00
ea961d6da9 feat: messaging DB helpers + unit tests (#74) 2026-04-20 11:55:43 -07:00
9eca0c21ab feat: migration 008 — messages + message_templates tables (#74) 2026-04-20 11:51:59 -07:00
5020144f8d fix: update interview + survey tests for hired_feedback column and async analyze endpoint 2026-04-20 11:48:22 -07:00
9101e716ba fix: async survey/analyze via task queue (#107)
Move POST /api/jobs/:id/survey/analyze off the FastAPI worker thread
by routing it through the LLM task queue (same pattern as cover_letter,
company_research, resume_optimize).

- Extract prompt builders + run_survey_analyze() to scripts/survey_assistant.py
- Add survey_analyze to LLM_TASK_TYPES (task_scheduler.py) with 2.5 GB VRAM budget
  (text mode: phi3:mini; visual mode uses vision service's own VRAM pool)
- Add elif branch in task_runner._run_task; result stored as JSON in error col
- Replace sync endpoint body with submit_task(); add GET /survey/analyze/task poll
- Update survey.ts store: analyze() now fires task + polls at 3s interval;
  silently attaches to existing in-flight task when is_new=false
- SurveyView button label shows task stage while polling

Fixes load-test spike: ~22 greenlets blocking on LLM inference at 100 concurrent
users, causing 90s poll timeouts on cover_letter and research tasks.
2026-04-20 11:06:14 -07:00
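The fire-task-then-poll contract described above (store fires `analyze()`, then polls the task endpoint at a 3 s interval) can be sketched language-agnostically; `fetch` here stands in for a GET to the poll endpoint and the status names mirror the response shape shown in the diff below, but the helper itself is illustrative:

```python
import time

def poll_task(fetch, interval_s: float = 3.0, max_polls: int = 100):
    """Poll a task-status endpoint until it completes or fails.

    `fetch()` returns a dict like {"status": ..., "result": ..., "message": ...}.
    """
    for _ in range(max_polls):
        state = fetch()
        if state["status"] == "completed":
            return state["result"]
        if state["status"] == "failed":
            raise RuntimeError(state.get("message") or "task failed")
        time.sleep(interval_s)
    raise TimeoutError("task did not finish within the polling budget")
```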
acc04b04eb docs(config): add cf_text and cf_voice trunk service backends to llm.yaml.example
Documents the cf-orch allocation pattern for cf-text and cf-voice as
openai_compat backends with a cf_orch block. Products enable these when
CF_ORCH_URL is set; the router allocates via the broker and calls the
managed service directly. No catalog or leaf details here — those live
in cf-orch node profiles (The Orchard trunk/leaf split).
2026-04-20 10:56:22 -07:00
280f4271a5 feat: add Plausible analytics to Vue SPA and docs
2026-04-16 21:15:55 -07:00
1c9bfc9fb6 test: integration tests for resume library<->profile sync endpoints 2026-04-16 14:29:00 -07:00
22bc57242e feat: ResumeProfileView — career_summary, education, achievements sections and sync status label 2026-04-16 14:22:36 -07:00
9f984c22cb feat: resume store — add career_summary, education, achievements, lastSynced state
Extends the resume Pinia store with EducationEntry interface, four new
refs (career_summary, education, achievements, lastSynced), education
CRUD helpers, and load/save wiring for all new fields. lastSynced is
set to current ISO timestamp on successful save.
2026-04-16 14:15:07 -07:00
fe3e4ff539 feat: ResumesView — Apply to profile button, Active profile badge, sync notice, unsaved-changes guard 2026-04-16 14:13:44 -07:00
43599834d5 feat: ResumeSyncConfirmModal — before/after confirmation for profile sync 2026-04-16 14:11:37 -07:00
fe5371613e feat: extend PUT /api/settings/resume to sync content back to default library entry
When a default_resume_id is set in user.yaml, saving the resume profile
now calls profile_to_library and update_resume_content to keep the
library entry in sync. Returns {"ok": true, "synced_library_entry_id": <int|null>}.
2026-04-16 14:09:56 -07:00
369bf68399 feat: POST /api/resumes/{id}/apply-to-profile — library→profile sync with auto-backup 2026-04-16 14:06:52 -07:00
eef6c33d94 feat: add EducationEntry model, extend ResumePayload with education/achievements/career_summary
- Add EducationEntry Pydantic model (institution, degree, field, start_date, end_date)
- Extend ResumePayload with career_summary str, education List[EducationEntry], achievements List[str]
- Rewrite _normalize_experience to pass through Vue-native format (period/responsibilities keys) unchanged; AIHawk format (key_responsibilities/employment_period) still converted
- Extend GET /api/settings/resume to fall back to user.yaml for legacy career_summary when resume YAML is missing or the field is empty
2026-04-16 14:02:59 -07:00
53bfe6b326 feat: add update_resume_synced_at and update_resume_content db helpers
Expose synced_at in _resume_as_dict (with safe fallback for pre-migration
DBs), and add two new helpers: update_resume_synced_at (library→profile
direction) and update_resume_content (profile→library direction, updates
text/struct_json/word_count/synced_at/updated_at).
2026-04-16 13:14:10 -07:00
cd787a2509 fix: period split in profile_to_library handles ISO dates with hyphens
Fixes a bug where ISO-formatted dates (e.g. '2023-01 – 2025-03') in the
period field were split incorrectly. The old code replaced the en-dash with
a hyphen first, then split on the first hyphen, causing dates like '2023-01'
to be split into '2023' and '01' instead of the expected start/end pair.

The fix splits on the en-dash/em-dash separator *before* normalizing to plain
hyphens, so dates with embedded hyphens round-trip correctly.

Adds two regression tests:
- test_profile_to_library_period_split_iso_dates: verifies en-dash separation
- test_profile_to_library_period_split_em_dash: verifies em-dash separation
2026-04-16 13:11:22 -07:00
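The split-before-normalize fix can be sketched as below; the function name and the spaced-hyphen alternative are assumptions, but the ordering (dash split first, hyphen normalization never touching the date parts) is the point of the commit:

```python
import re

def split_period(period: str) -> tuple[str, str]:
    """Split '2023-01 – 2025-03' into ('2023-01', '2025-03').

    Split on the en/em-dash (or a hyphen surrounded by spaces) *before* any
    hyphen normalization, so hyphens inside ISO dates survive intact.
    """
    parts = re.split(r"\s*[–—]\s*|\s+-\s+", period, maxsplit=1)
    if len(parts) == 2:
        return parts[0], parts[1]
    return period, ""
```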
048a5f4cc3 feat: resume_sync.py — library↔profile transform functions with tests
Pure transform functions (no LLM, no DB) bridging the two resume
representations: library struct_json ↔ ResumePayload content fields.
Exports library_to_profile_content, profile_to_library,
make_auto_backup_name, blank_fields_on_import. 22 tests, all passing.
2026-04-16 13:04:56 -07:00
fe4947a72f feat: add synced_at column to resumes table (migration 007) 2026-04-16 12:58:00 -07:00
4e11cf3cfa fix: sanitize invalid JSON escape sequences from LLM output in resume optimizer
LLMs occasionally emit backslash sequences that are valid regex but not valid
JSON (e.g. \s, \d, \p). This caused extract_jd_signals() to fall through to
the exception handler, leaving llm_signals empty. With no LLM signals, the
optimizer fell back to TF-IDF only — which is more conservative and can
legitimately return zero gaps, making the UI appear to say the resume is fine.

Fix: strip bare backslashes not followed by a recognised JSON escape character
(", \, /, b, f, n, r, t, u) before parsing. Preserves \n, \", etc.

Repro: cover letter generation running concurrently with gap analysis raises the
probability of a slightly malformed LLM response due to model load.
2026-04-16 11:11:50 -07:00
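The described fix is a single regex pass; this sketch follows the commit message (the real helper's name is not shown there):

```python
import re

# Valid JSON escape characters per the commit message: " \ / b f n r t u
_BAD_ESCAPE = re.compile(r'\\(?!["\\/bfnrtu])')

def sanitize_json_escapes(raw: str) -> str:
    """Drop backslashes that start an invalid JSON escape (e.g. \\s, \\d, \\p)
    so json.loads() no longer rejects regex-flavoured LLM output."""
    return _BAD_ESCAPE.sub("", raw)
```

The negative lookahead is what preserves legitimate escapes like `\n` and `\"` while removing only the bare backslash from sequences like `\s`.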
a4a2216c2f ci: add GitHub Actions CI for public credibility badge
Lean self-contained workflow — no Forgejo-specific secrets.
circuitforge-core installs from Forgejo git (public repo).
Forgejo (.forgejo/workflows/ci.yml) remains the canonical CI.

Backend: ruff + pytest | Frontend: vue-tsc + vitest
2026-04-15 20:20:13 -07:00
797032bd97 ci: remove stale .github/workflows/ci.yml
The .forgejo/workflows/ci.yml is the canonical CI definition.
The old .github/workflows/ci.yml was being mirrored to GitHub via
--mirror push, triggering GitHub Actions runs that fail because
FORGEJO_TOKEN and other Forgejo-specific secrets are not set there.

GitHub Actions does not process .forgejo/workflows/ so removing
this file stops the spurious GitHub runs. ISSUE_TEMPLATE and
pull_request_template.md are preserved in .github/.
2026-04-15 20:11:07 -07:00
fb8b464dd0 fix: use resume_parser extractors in import endpoint to clean CID glyphs
The import endpoint was doing its own inline PDF/DOCX/ODT extraction
without calling _clean_cid(). Bullet CIDs (127, 149, 183) and other
ATS-reembedded font artifacts were stored raw, surfacing as (cid:127)
in the resume library. Switch to extract_text_from_pdf/docx/odt from
resume_parser.py which already handle two-column layouts and CID cleaning.
2026-04-15 12:23:12 -07:00
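A minimal sketch of the CID cleaning this commit routes through; replacing CIDs with a plain bullet and collapsing leftover spacing is an assumption about `_clean_cid`'s behavior, which may map specific CIDs differently:

```python
import re

_CID = re.compile(r"\(cid:\d+\)")

def clean_cid(text: str) -> str:
    """Replace PDF font CID artifacts like '(cid:127)' — bullet glyphs that
    lost their character mapping — with a plain bullet, then tidy spacing."""
    cleaned = _CID.sub("•", text)
    return re.sub(r"[ \t]{2,}", " ", cleaned)
```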
ec521e14c5 fix: sweep user DBs on cloud startup for pending migrations 2026-04-15 12:18:23 -07:00
a302049f72 fix: add date_posted migration + cloud startup sweep
date_posted column was added to db.py CREATE TABLE but had no migration
file, so existing user DBs were missing it. The list_jobs endpoint queries
this column, causing 500 errors and empty Apply/Review queues for all
existing cloud users while job_counts (which doesn't touch date_posted)
continued to work — making the home page show correct counts but tabs show
empty data.

Fixes:
- migrations/006_date_posted.sql: ALTER TABLE to add date_posted to existing DBs
- dev_api.py lifespan: on startup in cloud mode, sweep all user DBs in
  CLOUD_DATA_ROOT and apply pending migrations — ensures schema changes land
  for every user on each deploy, not only on their first post-deploy request
2026-04-15 12:17:55 -07:00
38 changed files with 4129 additions and 260 deletions


@@ -1,3 +1,7 @@
# Peregrine CI — runs on GitHub mirror for public credibility badge.
# Forgejo (.forgejo/workflows/ci.yml) is the canonical CI — keep these in sync.
# No Forgejo-specific secrets used here; circuitforge-core is public on Forgejo.
name: CI
on:
@@ -7,29 +11,46 @@ on:
branches: [main]
jobs:
test:
backend:
name: Backend (Python)
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install system dependencies
run: sudo apt-get update -q && sudo apt-get install -y libsqlcipher-dev
- name: Set up Python
uses: actions/setup-python@v5
- uses: actions/setup-python@v5
with:
python-version: "3.11"
python-version: '3.12'
cache: pip
- name: Configure git credentials for Forgejo
env:
FORGEJO_TOKEN: ${{ secrets.FORGEJO_TOKEN }}
run: |
git config --global url."https://oauth2:${FORGEJO_TOKEN}@git.opensourcesolarpunk.com/".insteadOf "https://git.opensourcesolarpunk.com/"
- name: Install dependencies
run: pip install -r requirements.txt
- name: Run tests
- name: Lint
run: ruff check .
- name: Test
run: pytest tests/ -v --tb=short
frontend:
name: Frontend (Vue)
runs-on: ubuntu-latest
defaults:
run:
working-directory: web
steps:
- uses: actions/checkout@v4
- uses: actions/setup-node@v4
with:
node-version: '20'
cache: npm
cache-dependency-path: web/package-lock.json
- name: Install dependencies
run: npm ci
- name: Type check
run: npx vue-tsc --noEmit
- name: Test
run: npm run test


@@ -49,6 +49,7 @@ FEATURES: dict[str, str] = {
"company_research": "paid",
"interview_prep": "paid",
"survey_assistant": "paid",
"llm_reply_draft": "paid",
# Orchestration / infrastructure — stays gated
"email_classifier": "paid",
@@ -81,6 +82,7 @@ BYOK_UNLOCKABLE: frozenset[str] = frozenset({
"company_research",
"interview_prep",
"survey_assistant",
"llm_reply_draft",
})
# Demo mode flag — read from environment at module load time.


@@ -36,6 +36,7 @@ services:
- PYTHONUNBUFFERED=1
- PEREGRINE_CADDY_PROXY=1
- CF_ORCH_URL=http://host.docker.internal:7700
- CF_APP_NAME=peregrine
- DEMO_MODE=false
- FORGEJO_API_TOKEN=${FORGEJO_API_TOKEN:-}
depends_on:
@@ -52,7 +53,7 @@ services:
command: >
bash -c "uvicorn dev_api:app --host 0.0.0.0 --port 8601"
ports:
- "127.0.0.1:8601:8601" # localhost-only — Caddy + avocet imitate tab
- "8601:8601" # LAN-accessible — Caddy gates the public route; Kuma monitors this port directly
volumes:
- /devl/menagerie-data:/devl/menagerie-data
- ./config/llm.cloud.yaml:/app/config/llm.yaml:ro
@@ -68,6 +69,7 @@ services:
- PYTHONUNBUFFERED=1
- FORGEJO_API_TOKEN=${FORGEJO_API_TOKEN:-}
- CF_ORCH_URL=http://host.docker.internal:7700
- CF_APP_NAME=peregrine
extra_hosts:
- "host.docker.internal:host-gateway"
restart: unless-stopped


@@ -45,6 +45,39 @@ backends:
enabled: false
type: vision_service
supports_images: true
# ── cf-orch trunk services ─────────────────────────────────────────────────
# These backends allocate via cf-orch rather than connecting to a static URL.
# cf-orch starts the service on-demand and returns its URL; the router then
# calls it directly using the openai_compat path.
# Set CF_ORCH_URL (env) or url below; leave enabled: false if cf-orch is
# not deployed in your environment.
cf_text:
type: openai_compat
enabled: false
base_url: http://localhost:8008/v1 # fallback when cf-orch is not available
model: __auto__
api_key: any
supports_images: false
cf_orch:
service: cf-text
# model_candidates: leave empty to use the service's default_model,
# or specify an alias from the node's catalog (e.g. "qwen2.5-3b").
model_candidates: []
ttl_s: 3600
cf_voice:
type: openai_compat
enabled: false
base_url: http://localhost:8009/v1 # fallback when cf-orch is not available
model: __auto__
api_key: any
supports_images: false
cf_orch:
service: cf-voice
model_candidates: []
ttl_s: 3600
fallback_order:
- ollama
- claude_code


@@ -26,7 +26,7 @@ import yaml
from bs4 import BeautifulSoup
from contextlib import asynccontextmanager
from fastapi import FastAPI, HTTPException, Request, Response, UploadFile
from fastapi import FastAPI, HTTPException, Query, Request, Response, UploadFile
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
@@ -58,6 +58,20 @@ async def lifespan(app: FastAPI):
_load_env(PEREGRINE_ROOT / ".env")
from scripts.db_migrate import migrate_db
migrate_db(Path(DB_PATH))
# Cloud mode: sweep all known user DBs at startup so schema changes land
# for every user on deploy, not only on their next request.
if _CLOUD_MODE and _CLOUD_DATA_ROOT.is_dir():
import logging as _log
_sweep_log = _log.getLogger("peregrine.startup")
for user_db in _CLOUD_DATA_ROOT.glob("*/peregrine/staging.db"):
try:
migrate_db(user_db)
_migrated_db_paths.add(str(user_db))
_sweep_log.info("Migrated user DB: %s", user_db)
except Exception as exc:
_sweep_log.warning("Migration failed for %s: %s", user_db, exc)
yield
@@ -759,32 +773,17 @@ async def import_resume_endpoint(file: UploadFile, name: str = ""):
text = content.decode("utf-8", errors="replace")
elif ext in (".pdf", ".docx", ".odt"):
with tempfile.NamedTemporaryFile(suffix=ext, delete=False) as tmp:
tmp.write(content)
tmp_path = tmp.name
try:
if ext == ".pdf":
import pdfplumber
with pdfplumber.open(tmp_path) as pdf:
text = "\n".join(p.extract_text() or "" for p in pdf.pages)
elif ext == ".docx":
from docx import Document
doc = Document(tmp_path)
text = "\n".join(p.text for p in doc.paragraphs)
else:
import zipfile
from xml.etree import ElementTree as ET
with zipfile.ZipFile(tmp_path) as z:
xml = z.read("content.xml")
ET_root = ET.fromstring(xml)
text = "\n".join(
el.text or ""
for el in ET_root.iter(
"{urn:oasis:names:tc:opendocument:xmlns:text:1.0}p"
)
)
finally:
os.unlink(tmp_path)
from scripts.resume_parser import (
extract_text_from_pdf as _extract_pdf,
extract_text_from_docx as _extract_docx,
extract_text_from_odt as _extract_odt,
)
if ext == ".pdf":
text = _extract_pdf(content)
elif ext == ".docx":
text = _extract_docx(content)
else:
text = _extract_odt(content)
elif ext in (".yaml", ".yml"):
import yaml as _yaml
@@ -859,6 +858,80 @@ def set_default_resume_endpoint(resume_id: int):
return {"ok": True}
@app.post("/api/resumes/{resume_id}/apply-to-profile")
def apply_resume_to_profile(resume_id: int):
"""Sync a library resume entry to the active profile (library→profile direction).
Workflow:
1. Load the library entry (must have struct_json).
2. Load current profile to preserve metadata fields.
3. Backup current profile content as a new auto-named library entry.
4. Merge content fields from the library entry into the profile.
5. Write updated plain_text_resume.yaml.
6. Mark the library entry synced_at.
7. Return backup details for the frontend notification.
"""
import json as _json
from scripts.resume_sync import (
library_to_profile_content,
profile_to_library,
make_auto_backup_name,
)
from scripts.db import get_resume as _get, create_resume as _create
db_path = Path(_request_db.get() or DB_PATH)
entry = _get(db_path, resume_id)
if not entry:
raise HTTPException(404, "Resume not found")
struct_json: dict = {}
if entry.get("struct_json"):
try:
struct_json = _json.loads(entry["struct_json"])
except Exception:
raise HTTPException(422, "Library entry has malformed struct_json — re-import the resume to repair it.")
resume_path = _resume_path()
current_profile: dict = {}
if resume_path.exists():
with open(resume_path, encoding="utf-8") as f:
current_profile = yaml.safe_load(f) or {}
# Backup current content to library before overwriting
backup_text, backup_struct = profile_to_library(current_profile)
backup_name = make_auto_backup_name(entry["name"])
backup = _create(
db_path,
name=backup_name,
text=backup_text,
source="auto_backup",
struct_json=_json.dumps(backup_struct),
)
# Merge: overwrite content fields, preserve metadata
content = library_to_profile_content(struct_json)
CONTENT_FIELDS = {
"name", "surname", "email", "phone", "career_summary",
"experience", "skills", "education", "achievements",
}
for field in CONTENT_FIELDS:
current_profile[field] = content[field]
resume_path.parent.mkdir(parents=True, exist_ok=True)
with open(resume_path, "w", encoding="utf-8") as f:
yaml.dump(current_profile, f, allow_unicode=True, default_flow_style=False)
from scripts.db import update_resume_synced_at as _mark_synced
_mark_synced(db_path, resume_id)
return {
"ok": True,
"backup_id": backup["id"],
"backup_name": backup_name,
"fields_updated": sorted(CONTENT_FIELDS),
}
# ── Per-job resume endpoints ───────────────────────────────────────────────────
@app.get("/api/jobs/{job_id}/resume")
@@ -1224,43 +1297,13 @@ def calendar_push(job_id: int):
from scripts.llm_router import LLMRouter
from scripts.db import insert_survey_response, get_survey_responses
_SURVEY_SYSTEM = (
"You are a job application advisor helping a candidate answer a culture-fit survey. "
"The candidate values collaborative teamwork, clear communication, growth, and impact. "
"Choose answers that present them in the best professional light."
from scripts.survey_assistant import (
SURVEY_SYSTEM as _SURVEY_SYSTEM,
build_text_prompt as _build_text_prompt,
build_image_prompt as _build_image_prompt,
)
def _build_text_prompt(text: str, mode: str) -> str:
if mode == "quick":
return (
"Answer each survey question below. For each, give ONLY the letter of the best "
"option and a single-sentence reason. Format exactly as:\n"
"1. B — reason here\n2. A — reason here\n\n"
f"Survey:\n{text}"
)
return (
"Analyze each survey question below. For each question:\n"
"- Briefly evaluate each option (1 sentence each)\n"
"- State your recommendation with reasoning\n\n"
f"Survey:\n{text}"
)
def _build_image_prompt(mode: str) -> str:
if mode == "quick":
return (
"This is a screenshot of a culture-fit survey. Read all questions and answer each "
"with the letter of the best option for a collaborative, growth-oriented candidate. "
"Format: '1. B — brief reason' on separate lines."
)
return (
"This is a screenshot of a culture-fit survey. For each question, evaluate each option "
"and recommend the best choice for a collaborative, growth-oriented candidate. "
"Include a brief breakdown per option and a clear recommendation."
)
@app.get("/api/vision/health")
def vision_health():
try:
@@ -1280,29 +1323,62 @@ class SurveyAnalyzeBody(BaseModel):
def survey_analyze(job_id: int, body: SurveyAnalyzeBody):
if body.mode not in ("quick", "detailed"):
raise HTTPException(400, f"Invalid mode: {body.mode!r}")
import json as _json
from scripts.task_runner import submit_task
params = _json.dumps({
"text": body.text,
"image_b64": body.image_b64,
"mode": body.mode,
})
try:
router = LLMRouter()
if body.image_b64:
prompt = _build_image_prompt(body.mode)
output = router.complete(
prompt,
images=[body.image_b64],
fallback_order=router.config.get("vision_fallback_order"),
)
source = "screenshot"
else:
prompt = _build_text_prompt(body.text or "", body.mode)
output = router.complete(
prompt,
system=_SURVEY_SYSTEM,
fallback_order=router.config.get("research_fallback_order"),
)
source = "text_paste"
return {"output": output, "source": source}
task_id, is_new = submit_task(
db_path=Path(_request_db.get() or DB_PATH),
task_type="survey_analyze",
job_id=job_id,
params=params,
)
return {"task_id": task_id, "is_new": is_new}
except Exception as e:
raise HTTPException(500, str(e))
# ── GET /api/jobs/:id/survey/analyze/task ────────────────────────────────────
@app.get("/api/jobs/{job_id}/survey/analyze/task")
def survey_analyze_task(job_id: int, task_id: Optional[int] = None):
import json as _json
db = _get_db()
if task_id is not None:
row = db.execute(
"SELECT status, stage, error FROM background_tasks WHERE id = ? AND job_id = ?",
(task_id, job_id),
).fetchone()
else:
row = db.execute(
"SELECT status, stage, error FROM background_tasks "
"WHERE task_type = 'survey_analyze' AND job_id = ? "
"ORDER BY id DESC LIMIT 1",
(job_id,),
).fetchone()
db.close()
if not row:
return {"status": "none", "stage": None, "result": None, "message": None}
result = None
message = row["error"]
if row["status"] == "completed" and row["error"]:
try:
result = _json.loads(row["error"])
message = None
except (ValueError, TypeError):
pass
return {
"status": row["status"],
"stage": row["stage"],
"result": result,
"message": message,
}
class SurveySaveBody(BaseModel):
survey_name: Optional[str] = None
mode: str
@@ -2692,10 +2768,17 @@ class WorkEntry(BaseModel):
title: str = ""; company: str = ""; period: str = ""; location: str = ""
industry: str = ""; responsibilities: str = ""; skills: List[str] = []
class EducationEntry(BaseModel):
institution: str = ""; degree: str = ""; field: str = ""
start_date: str = ""; end_date: str = ""
class ResumePayload(BaseModel):
name: str = ""; email: str = ""; phone: str = ""; linkedin_url: str = ""
surname: str = ""; address: str = ""; city: str = ""; zip_code: str = ""; date_of_birth: str = ""
career_summary: str = ""
experience: List[WorkEntry] = []
education: List[EducationEntry] = []
achievements: List[str] = []
salary_min: int = 0; salary_max: int = 0; notice_period: str = ""
remote: bool = False; relocation: bool = False
assessment: bool = False; background_check: bool = False
@@ -2723,32 +2806,46 @@ def _tokens_path() -> Path:
def _normalize_experience(raw: list) -> list:
"""Normalize AIHawk-style experience entries to the Vue WorkEntry schema.
Parser / AIHawk stores: bullets (list[str]), start_date, end_date
Vue WorkEntry expects: responsibilities (str), period (str)
AIHawk stores: key_responsibilities (numbered dicts), employment_period, skills_acquired
Vue WorkEntry: responsibilities (str), period (str), skills (list)
If already in Vue format (has 'period' key or 'responsibilities' key), pass through unchanged.
"""
out = []
for e in raw:
if not isinstance(e, dict):
continue
entry = dict(e)
# bullets → responsibilities
if "responsibilities" not in entry or not entry["responsibilities"]:
bullets = entry.pop("bullets", None) or []
if isinstance(bullets, list):
entry["responsibilities"] = "\n".join(b for b in bullets if b)
elif isinstance(bullets, str):
entry["responsibilities"] = bullets
# Already in Vue WorkEntry format — pass through
if "period" in e or "responsibilities" in e:
out.append({
"title": e.get("title", ""),
"company": e.get("company", ""),
"period": e.get("period", ""),
"location": e.get("location", ""),
"industry": e.get("industry", ""),
"responsibilities": e.get("responsibilities", ""),
"skills": e.get("skills") or [],
})
continue
# AIHawk format
resps = e.get("key_responsibilities", {})
if isinstance(resps, dict):
resp_text = "\n".join(v for v in resps.values() if isinstance(v, str))
elif isinstance(resps, list):
resp_text = "\n".join(str(r) for r in resps)
else:
entry.pop("bullets", None)
# start_date + end_date → period
if "period" not in entry or not entry["period"]:
start = entry.pop("start_date", "") or ""
end = entry.pop("end_date", "") or ""
entry["period"] = f"{start} {end}".strip(" ") if (start or end) else ""
else:
entry.pop("start_date", None)
entry.pop("end_date", None)
out.append(entry)
resp_text = str(resps)
period = e.get("employment_period", "")
skills_raw = e.get("skills_acquired", [])
skills = skills_raw if isinstance(skills_raw, list) else []
out.append({
"title": e.get("position", ""),
"company": e.get("company", ""),
"period": period,
"location": e.get("location", ""),
"industry": e.get("industry", ""),
"responsibilities": resp_text,
"skills": skills,
})
return out
@@ -2757,24 +2854,58 @@ def get_resume():
try:
resume_path = _resume_path()
if not resume_path.exists():
# Backward compat: check user.yaml for career_summary
_uy = Path(_user_yaml_path())
if _uy.exists():
uy = yaml.safe_load(_uy.read_text(encoding="utf-8")) or {}
if uy.get("career_summary"):
return {"exists": False, "legacy_career_summary": uy["career_summary"]}
return {"exists": False}
with open(resume_path) as f:
with open(resume_path, encoding="utf-8") as f:
data = yaml.safe_load(f) or {}
data["exists"] = True
if "experience" in data and isinstance(data["experience"], list):
data["experience"] = _normalize_experience(data["experience"])
# Backward compat: if career_summary missing from YAML, try user.yaml
if not data.get("career_summary"):
_uy = Path(_user_yaml_path())
if _uy.exists():
uy = yaml.safe_load(_uy.read_text(encoding="utf-8")) or {}
data["career_summary"] = uy.get("career_summary", "")
return data
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.put("/api/settings/resume")
def save_resume(payload: ResumePayload):
"""Save resume profile. If a default library entry exists, sync content back to it."""
import json as _json
from scripts.db import (
get_resume as _get_resume,
update_resume_content as _update_content,
)
from scripts.resume_sync import profile_to_library
try:
resume_path = _resume_path()
resume_path.parent.mkdir(parents=True, exist_ok=True)
with open(resume_path, "w") as f:
with open(resume_path, "w", encoding="utf-8") as f:
yaml.dump(payload.model_dump(), f, allow_unicode=True, default_flow_style=False)
return {"ok": True}
# Profile→library sync: if a default resume exists, update it
synced_id: int | None = None
db_path = Path(_request_db.get() or DB_PATH)
_uy = Path(_user_yaml_path())
if _uy.exists():
profile_meta = yaml.safe_load(_uy.read_text(encoding="utf-8")) or {}
default_id = profile_meta.get("default_resume_id")
if default_id:
entry = _get_resume(db_path, int(default_id))
if entry:
text, struct = profile_to_library(payload.model_dump())
_update_content(db_path, int(default_id), text=text, struct_json=_json.dumps(struct))
synced_id = int(default_id)
return {"ok": True, "synced_library_entry_id": synced_id}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@@ -4047,3 +4178,183 @@ def wizard_complete():
return {"ok": True}
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
# ── Messaging models ──────────────────────────────────────────────────────────
class MessageCreateBody(BaseModel):
job_id: Optional[int] = None
job_contact_id: Optional[int] = None
type: str = "email"
direction: Optional[str] = None
subject: Optional[str] = None
body: Optional[str] = None
from_addr: Optional[str] = None
to_addr: Optional[str] = None
template_id: Optional[int] = None
logged_at: Optional[str] = None
class MessageUpdateBody(BaseModel):
body: str
class TemplateCreateBody(BaseModel):
title: str
category: str = "custom"
subject_template: Optional[str] = None
body_template: str
class TemplateUpdateBody(BaseModel):
title: Optional[str] = None
category: Optional[str] = None
subject_template: Optional[str] = None
body_template: Optional[str] = None
# ── Messaging (MIT) ───────────────────────────────────────────────────────────
@app.get("/api/messages")
def get_messages(
job_id: Optional[int] = None,
type: Optional[str] = None,
direction: Optional[str] = None,
limit: int = Query(default=100, ge=1, le=1000),
):
from scripts.messaging import list_messages
return list_messages(
Path(_request_db.get() or DB_PATH),
job_id=job_id, type=type, direction=direction, limit=limit,
)
@app.post("/api/messages")
def post_message(body: MessageCreateBody):
from scripts.messaging import create_message
return create_message(Path(_request_db.get() or DB_PATH), **body.model_dump())
@app.delete("/api/messages/{message_id}")
def del_message(message_id: int):
from scripts.messaging import delete_message
try:
delete_message(Path(_request_db.get() or DB_PATH), message_id)
return {"ok": True}
except KeyError:
raise HTTPException(404, "message not found")
@app.put("/api/messages/{message_id}")
def put_message(message_id: int, body: MessageUpdateBody):
from scripts.messaging import update_message_body
try:
return update_message_body(Path(_request_db.get() or DB_PATH), message_id, body.body)
except KeyError:
raise HTTPException(404, "message not found")
@app.get("/api/message-templates")
def get_templates():
from scripts.messaging import list_templates
return list_templates(Path(_request_db.get() or DB_PATH))
@app.post("/api/message-templates")
def post_template(body: TemplateCreateBody):
from scripts.messaging import create_template
return create_template(Path(_request_db.get() or DB_PATH), **body.model_dump())
@app.put("/api/message-templates/{template_id}")
def put_template(template_id: int, body: TemplateUpdateBody):
from scripts.messaging import update_template
try:
return update_template(
Path(_request_db.get() or DB_PATH),
template_id,
**body.model_dump(exclude_none=True),
)
except PermissionError:
raise HTTPException(403, "cannot modify built-in templates")
except KeyError:
raise HTTPException(404, "template not found")
@app.delete("/api/message-templates/{template_id}")
def del_template(template_id: int):
from scripts.messaging import delete_template
try:
delete_template(Path(_request_db.get() or DB_PATH), template_id)
return {"ok": True}
except PermissionError:
raise HTTPException(403, "cannot delete built-in templates")
except KeyError:
raise HTTPException(404, "template not found")
# ── LLM Reply Draft (BSL 1.1) ─────────────────────────────────────────────────
def _get_effective_tier() -> str:
"""Resolve effective tier: Heimdall in cloud mode, APP_TIER env var in single-tenant."""
if _CLOUD_MODE:
return _resolve_cloud_tier()
from app.wizard.tiers import effective_tier
return effective_tier()
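In the single-tenant path the tier now comes only from server-side state. A minimal sketch of the principle (the real `effective_tier` lives in `app/wizard/tiers.py` and may differ; `effective_tier_sketch` is a hypothetical stand-in):

```python
import os

def effective_tier_sketch() -> str:
    # Single-tenant sketch: read the tier from the server-side APP_TIER env
    # var only. The X-CF-Tier request header is deliberately ignored, since
    # trusting it would let any client self-upgrade its tier.
    return os.environ.get("APP_TIER", "free")
```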
@app.post("/api/contacts/{contact_id}/draft-reply")
def draft_reply(contact_id: int):
"""Generate an LLM draft reply for an inbound job_contacts row. Tier-gated."""
from app.wizard.tiers import can_use, has_configured_llm
from scripts.messaging import create_message
from scripts.llm_reply_draft import generate_draft_reply
db_path = Path(_request_db.get() or DB_PATH)
tier = _get_effective_tier()
if not can_use(tier, "llm_reply_draft", has_byok=has_configured_llm()):
raise HTTPException(402, detail={"error": "tier_required", "min_tier": "free+byok"})
con = _get_db()
row = con.execute("SELECT * FROM job_contacts WHERE id=?", (contact_id,)).fetchone()
con.close()
if not row:
raise HTTPException(404, "contact not found")
profile = _imitate_load_profile()
user_name = getattr(profile, "name", "") or ""
target_role = getattr(profile, "target_role", "") or ""
cfg_path = db_path.parent / "config" / "llm.yaml"
draft_body = generate_draft_reply(
subject=row["subject"] or "",
from_addr=row["from_addr"] or "",
body=row["body"] or "",
user_name=user_name,
target_role=target_role,
config_path=cfg_path if cfg_path.exists() else None,
)
msg = create_message(
db_path,
job_id=row["job_id"],
job_contact_id=contact_id,
type="draft",
direction="outbound",
subject=f"Re: {row['subject'] or ''}".strip(),
body=draft_body,
to_addr=row["from_addr"],
template_id=None,
from_addr=None,
)
return {"message_id": msg["id"]}
@app.post("/api/messages/{message_id}/approve")
def approve_message_endpoint(message_id: int):
"""Set approved_at=now(). Returns approved body for copy-to-clipboard."""
from scripts.messaging import approve_message
try:
msg = approve_message(Path(_request_db.get() or DB_PATH), message_id)
return {"body": msg["body"], "approved_at": msg["approved_at"]}
except KeyError:
raise HTTPException(404, "message not found")
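`approveDraft` persists edits before approving so history always shows the final text. A minimal sqlite sketch of that ordering (schema trimmed to the relevant columns):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
con.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT, approved_at TEXT)")
con.execute("INSERT INTO messages (body) VALUES ('LLM draft v1')")

# Step 1: persist the user's edits (PUT /api/messages/{id}).
con.execute("UPDATE messages SET body=? WHERE id=1", ("edited final text",))
# Step 2: approve (POST /api/messages/{id}/approve); history shows the edited body.
con.execute("UPDATE messages SET approved_at=datetime('now') WHERE id=1")
row = con.execute("SELECT * FROM messages WHERE id=1").fetchone()
```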


@@ -22,6 +22,19 @@ server {
add_header Cache-Control "public, immutable";
}
# Handle /peregrine/ base path used when accessed directly (no Caddy prefix stripping).
# ^~ blocks regex location matches so assets at /peregrine/assets/... are served correctly.
location ^~ /peregrine/assets/ {
alias /usr/share/nginx/html/assets/;
expires 1y;
add_header Cache-Control "public, immutable";
}
location /peregrine/ {
alias /usr/share/nginx/html/;
try_files $uri $uri/ /index.html;
}
# SPA fallback must come after API and assets
location / {
try_files $uri $uri/ /index.html;

docs/plausible.js Normal file

@@ -0,0 +1 @@
(function(){var s=document.createElement("script");s.defer=true;s.dataset.domain="docs.circuitforge.tech,circuitforge.tech";s.dataset.api="https://analytics.circuitforge.tech/api/event";s.src="https://analytics.circuitforge.tech/js/script.js";document.head.appendChild(s);})();


@@ -0,0 +1,6 @@
-- 006_date_posted.sql
-- Add date_posted column for shadow listing detection (stale/shadow score feature).
-- New DBs already have this column from the CREATE TABLE statement in db.py;
-- this migration adds it to existing user DBs.
ALTER TABLE jobs ADD COLUMN date_posted TEXT;


@@ -0,0 +1,3 @@
-- 007_resume_sync.sql
-- Add synced_at to resumes: ISO datetime of last library↔profile sync, null = never synced.
ALTER TABLE resumes ADD COLUMN synced_at TEXT;


@@ -0,0 +1,97 @@
-- messages: manual log entries and LLM drafts
CREATE TABLE IF NOT EXISTS messages (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER REFERENCES jobs(id) ON DELETE SET NULL,
job_contact_id INTEGER REFERENCES job_contacts(id) ON DELETE SET NULL,
type TEXT NOT NULL DEFAULT 'email',
direction TEXT,
subject TEXT,
body TEXT,
from_addr TEXT,
to_addr TEXT,
logged_at TEXT NOT NULL DEFAULT (datetime('now')),
approved_at TEXT,
template_id INTEGER REFERENCES message_templates(id) ON DELETE SET NULL,
osprey_call_id TEXT
);
-- message_templates: built-in seeds and user-created templates
CREATE TABLE IF NOT EXISTS message_templates (
id INTEGER PRIMARY KEY AUTOINCREMENT,
key TEXT UNIQUE,
title TEXT NOT NULL,
category TEXT NOT NULL DEFAULT 'custom',
subject_template TEXT,
body_template TEXT NOT NULL,
is_builtin INTEGER NOT NULL DEFAULT 0,
is_community INTEGER NOT NULL DEFAULT 0,
community_source TEXT,
created_at TEXT NOT NULL DEFAULT (datetime('now')),
updated_at TEXT NOT NULL DEFAULT (datetime('now'))
);
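The seed insert below relies on `INSERT OR IGNORE` against the UNIQUE `key` column so re-running the migration cannot duplicate built-ins. The behaviour can be checked in isolation (hypothetical two-column table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE templates (key TEXT UNIQUE, title TEXT)")
# Simulate running the migration twice: the second insert is a no-op
# because the UNIQUE constraint on key makes it an ignored conflict.
for _ in range(2):
    con.execute(
        "INSERT OR IGNORE INTO templates (key, title) VALUES ('follow_up', 'Following up')"
    )
count = con.execute("SELECT COUNT(*) FROM templates").fetchone()[0]
```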
INSERT OR IGNORE INTO message_templates
(key, title, category, subject_template, body_template, is_builtin)
VALUES
(
'follow_up',
'Following up on my application',
'follow_up',
'Following up — {{role}} application',
'Hi {{recruiter_name}},
I wanted to follow up on my application for the {{role}} position at {{company}}. I remain very interested in the opportunity and would welcome the chance to discuss my background further.
Please let me know if there is anything else you need from me.
Best regards,
{{name}}',
1
),
(
'thank_you',
'Thank you for the interview',
'thank_you',
'Thank you — {{role}} interview',
'Hi {{recruiter_name}},
Thank you for taking the time to speak with me about the {{role}} role at {{company}}. I enjoyed learning more about the team and the work you are doing.
I am very excited about this opportunity and look forward to hearing about the next steps.
Best regards,
{{name}}',
1
),
(
'accommodation_request',
'Accommodation request',
'accommodation',
'Accommodation request — {{role}} interview',
'Hi {{recruiter_name}},
I am writing to request a reasonable accommodation for my upcoming interview for the {{role}} position. Specifically, I would appreciate:
{{accommodation_details}}
Please let me know if you need any additional information. I am happy to discuss this further.
Thank you,
{{name}}',
1
),
(
'withdrawal',
'Withdrawing my application',
'withdrawal',
'Application withdrawal — {{role}}',
'Hi {{recruiter_name}},
I am writing to let you know that I would like to withdraw my application for the {{role}} position at {{company}}.
Thank you for your time and consideration. I wish you and the team all the best.
Best regards,
{{name}}',
1
)
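The `{{token}}` placeholders are substituted client-side in MessageTemplateModal. A hedged Python sketch of the substitution rule, assuming unknown tokens are left intact so the UI can highlight them (`render_template` is illustrative, not the shipped helper):

```python
import re

def render_template(template: str, tokens: dict[str, str]) -> str:
    # Replace {{name}}-style placeholders; leave unknown tokens untouched
    # so the UI can highlight what still needs filling in.
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: tokens.get(m.group(1), m.group(0)),
        template,
    )

subject = render_template(
    "Following up — {{role}} application", {"role": "Staff Engineer"}
)
```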


@@ -70,3 +70,6 @@ nav:
- Tier System: reference/tier-system.md
- LLM Router: reference/llm-router.md
- Config Files: reference/config-files.md
extra_javascript:
- plausible.js


@@ -973,6 +973,7 @@ def _resume_as_dict(row) -> dict:
"is_default": row["is_default"],
"created_at": row["created_at"],
"updated_at": row["updated_at"],
"synced_at": row["synced_at"] if "synced_at" in row.keys() else None,
}
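The `synced_at` guard is needed because `sqlite3.Row` supports indexing by column name but has no dict-style `.get()`; a quick sketch of the pattern:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
row = con.execute("SELECT 1 AS id").fetchone()

# sqlite3.Row raises on a missing column and offers no .get() fallback,
# so membership is checked via .keys() before indexing.
synced_at = row["synced_at"] if "synced_at" in row.keys() else None
```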
@@ -1074,6 +1075,44 @@ def set_default_resume(db_path: Path = DEFAULT_DB, resume_id: int = 0) -> None:
conn.close()
def update_resume_synced_at(db_path: Path = DEFAULT_DB, resume_id: int = 0) -> None:
"""Mark a library entry as synced to the profile (library→profile direction)."""
conn = sqlite3.connect(db_path)
try:
conn.execute(
"UPDATE resumes SET synced_at=datetime('now') WHERE id=?",
(resume_id,),
)
conn.commit()
finally:
conn.close()
def update_resume_content(
db_path: Path = DEFAULT_DB,
resume_id: int = 0,
text: str = "",
struct_json: str | None = None,
) -> None:
"""Update text, struct_json, and synced_at for a library entry.
Called by the profile→library sync path (PUT /api/settings/resume).
"""
word_count = len(text.split()) if text else 0
conn = sqlite3.connect(db_path)
try:
conn.execute(
"""UPDATE resumes
SET text=?, struct_json=?, word_count=?,
synced_at=datetime('now'), updated_at=datetime('now')
WHERE id=?""",
(text, struct_json, word_count, resume_id),
)
conn.commit()
finally:
conn.close()
def get_job_resume(db_path: Path = DEFAULT_DB, job_id: int = 0) -> dict | None:
"""Return the resume for a job: job-specific first, then default, then None."""
conn = sqlite3.connect(db_path)


@@ -0,0 +1,42 @@
# BSL 1.1 — see LICENSE-BSL
"""LLM-assisted reply draft generation for inbound job contacts (BSL 1.1)."""
from __future__ import annotations
from pathlib import Path
from typing import Optional
_SYSTEM = (
"You are drafting a professional email reply on behalf of a job seeker. "
"Be concise and professional. Do not fabricate facts. If you are uncertain "
"about a detail, leave a [TODO: fill in] placeholder. "
"Output the reply body only — no subject line, no salutation preamble."
)
def _build_prompt(subject: str, from_addr: str, body: str, user_name: str, target_role: str) -> str:
return (
f"ORIGINAL EMAIL:\n"
f"Subject: {subject}\n"
f"From: {from_addr}\n"
f"Body:\n{body}\n\n"
f"USER PROFILE CONTEXT:\n"
f"Name: {user_name}\n"
f"Target role: {target_role}\n\n"
"Write a concise, professional reply to this email."
)
def generate_draft_reply(
subject: str,
from_addr: str,
body: str,
user_name: str,
target_role: str,
config_path: Optional[Path] = None,
) -> str:
"""Return a draft reply body string."""
from scripts.llm_router import LLMRouter
router = LLMRouter(config_path=config_path)
prompt = _build_prompt(subject, from_addr, body, user_name, target_role)
return router.complete(system=_SYSTEM, user=prompt).strip()

scripts/messaging.py Normal file

@@ -0,0 +1,285 @@
"""
DB helpers for the messaging feature.
Messages table: manual log entries and LLM drafts (one row per message).
Message templates table: built-in seeds and user-created templates.
Conventions (match scripts/db.py):
- All functions take db_path: Path as first argument.
- sqlite3.connect(db_path), row_factory = sqlite3.Row
- Return plain dicts (dict(row))
- Always close connection in finally
"""
import sqlite3
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional
# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------
def _connect(db_path: Path) -> sqlite3.Connection:
con = sqlite3.connect(db_path)
con.row_factory = sqlite3.Row
return con
def _now_utc() -> str:
"""Return current UTC time as ISO 8601 string."""
return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
# ---------------------------------------------------------------------------
# Messages
# ---------------------------------------------------------------------------
def create_message(
db_path: Path,
*,
job_id: Optional[int],
job_contact_id: Optional[int],
type: str,
direction: str,
subject: Optional[str],
body: Optional[str],
from_addr: Optional[str],
to_addr: Optional[str],
template_id: Optional[int],
logged_at: Optional[str] = None,
) -> dict:
"""Insert a new message row and return it as a dict."""
con = _connect(db_path)
try:
cur = con.execute(
"""
INSERT INTO messages
(job_id, job_contact_id, type, direction, subject, body,
from_addr, to_addr, logged_at, template_id)
VALUES
(?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""",
(job_id, job_contact_id, type, direction, subject, body,
from_addr, to_addr, logged_at or _now_utc(), template_id),
)
con.commit()
row = con.execute(
"SELECT * FROM messages WHERE id = ?", (cur.lastrowid,)
).fetchone()
return dict(row)
finally:
con.close()
def list_messages(
db_path: Path,
*,
job_id: Optional[int] = None,
type: Optional[str] = None,
direction: Optional[str] = None,
limit: int = 100,
) -> list[dict]:
"""Return messages, optionally filtered. Ordered by logged_at DESC."""
conditions: list[str] = []
params: list = []
if job_id is not None:
conditions.append("job_id = ?")
params.append(job_id)
if type is not None:
conditions.append("type = ?")
params.append(type)
if direction is not None:
conditions.append("direction = ?")
params.append(direction)
where = ("WHERE " + " AND ".join(conditions)) if conditions else ""
params.append(limit)
con = _connect(db_path)
try:
rows = con.execute(
f"SELECT * FROM messages {where} ORDER BY logged_at DESC LIMIT ?",
params,
).fetchall()
return [dict(r) for r in rows]
finally:
con.close()
def delete_message(db_path: Path, message_id: int) -> None:
"""Delete a message by id. Raises KeyError if not found."""
con = _connect(db_path)
try:
row = con.execute(
"SELECT id FROM messages WHERE id = ?", (message_id,)
).fetchone()
if row is None:
raise KeyError(f"Message {message_id} not found")
con.execute("DELETE FROM messages WHERE id = ?", (message_id,))
con.commit()
finally:
con.close()
def approve_message(db_path: Path, message_id: int) -> dict:
"""Set approved_at to now for the given message. Raises KeyError if not found."""
con = _connect(db_path)
try:
row = con.execute(
"SELECT id FROM messages WHERE id = ?", (message_id,)
).fetchone()
if row is None:
raise KeyError(f"Message {message_id} not found")
con.execute(
"UPDATE messages SET approved_at = ? WHERE id = ?",
(_now_utc(), message_id),
)
con.commit()
updated = con.execute(
"SELECT * FROM messages WHERE id = ?", (message_id,)
).fetchone()
return dict(updated)
finally:
con.close()
# ---------------------------------------------------------------------------
# Templates
# ---------------------------------------------------------------------------
def list_templates(db_path: Path) -> list[dict]:
"""Return all templates ordered by is_builtin DESC, then title ASC."""
con = _connect(db_path)
try:
rows = con.execute(
"SELECT * FROM message_templates ORDER BY is_builtin DESC, title ASC"
).fetchall()
return [dict(r) for r in rows]
finally:
con.close()
def create_template(
db_path: Path,
*,
title: str,
category: str = "custom",
subject_template: Optional[str] = None,
body_template: str,
) -> dict:
"""Insert a new user-defined template and return it as a dict."""
con = _connect(db_path)
try:
cur = con.execute(
"""
INSERT INTO message_templates
(title, category, subject_template, body_template, is_builtin)
VALUES
(?, ?, ?, ?, 0)
""",
(title, category, subject_template, body_template),
)
con.commit()
row = con.execute(
"SELECT * FROM message_templates WHERE id = ?", (cur.lastrowid,)
).fetchone()
return dict(row)
finally:
con.close()
def update_template(db_path: Path, template_id: int, **fields) -> dict:
"""
Update allowed fields on a user-defined template.
Raises PermissionError if the template is a built-in (is_builtin=1).
Raises KeyError if the template is not found.
"""
if not fields:
# Nothing to update — just return current state
con = _connect(db_path)
try:
row = con.execute(
"SELECT * FROM message_templates WHERE id = ?", (template_id,)
).fetchone()
if row is None:
raise KeyError(f"Template {template_id} not found")
return dict(row)
finally:
con.close()
_ALLOWED_FIELDS = {
"title", "category", "subject_template", "body_template",
}
invalid = set(fields) - _ALLOWED_FIELDS
if invalid:
raise ValueError(f"Cannot update field(s): {invalid}")
con = _connect(db_path)
try:
row = con.execute(
"SELECT id, is_builtin FROM message_templates WHERE id = ?",
(template_id,),
).fetchone()
if row is None:
raise KeyError(f"Template {template_id} not found")
if row["is_builtin"]:
raise PermissionError(
f"Template {template_id} is a built-in and cannot be modified"
)
set_clause = ", ".join(f"{col} = ?" for col in fields)
values = list(fields.values()) + [_now_utc(), template_id]
con.execute(
f"UPDATE message_templates SET {set_clause}, updated_at = ? WHERE id = ?",
values,
)
con.commit()
updated = con.execute(
"SELECT * FROM message_templates WHERE id = ?", (template_id,)
).fetchone()
return dict(updated)
finally:
con.close()
def delete_template(db_path: Path, template_id: int) -> None:
"""
Delete a user-defined template.
Raises PermissionError if the template is a built-in (is_builtin=1).
Raises KeyError if the template is not found.
"""
con = _connect(db_path)
try:
row = con.execute(
"SELECT id, is_builtin FROM message_templates WHERE id = ?",
(template_id,),
).fetchone()
if row is None:
raise KeyError(f"Template {template_id} not found")
if row["is_builtin"]:
raise PermissionError(
f"Template {template_id} is a built-in and cannot be deleted"
)
con.execute("DELETE FROM message_templates WHERE id = ?", (template_id,))
con.commit()
finally:
con.close()
def update_message_body(db_path: Path, message_id: int, body: str) -> dict:
"""Update the body text of a draft message before approval. Returns updated row."""
con = _connect(db_path)
try:
row = con.execute("SELECT id FROM messages WHERE id=?", (message_id,)).fetchone()
if not row:
raise KeyError(f"message {message_id} not found")
con.execute("UPDATE messages SET body=? WHERE id=?", (body, message_id))
con.commit()
updated = con.execute("SELECT * FROM messages WHERE id=?", (message_id,)).fetchone()
return dict(updated)
finally:
con.close()


@@ -70,7 +70,12 @@ def extract_jd_signals(description: str, resume_text: str = "") -> list[str]:
# Extract JSON array from response (LLM may wrap it in markdown)
match = re.search(r"\[.*\]", raw, re.DOTALL)
if match:
llm_signals = json.loads(match.group(0))
json_str = match.group(0)
# LLMs occasionally emit invalid JSON escape sequences (e.g. \s, \d, \p)
# that are valid regex but not valid JSON. Replace bare backslashes that
# aren't followed by a recognised JSON escape character.
json_str = re.sub(r'\\([^"\\/bfnrtu])', r'\1', json_str)
llm_signals = json.loads(json_str)
llm_signals = [s.strip() for s in llm_signals if isinstance(s, str) and s.strip()]
except Exception:
log.warning("[resume_optimizer] LLM signal extraction failed", exc_info=True)
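The effect of the escape repair can be seen on a typical bad LLM payload (the sample string is illustrative):

```python
import json
import re

raw = r'["regex: \d+ matches", "SQL"]'  # \d is valid regex but invalid JSON
# Same substitution as in extract_jd_signals: drop bare backslashes that
# do not start a recognised JSON escape sequence.
fixed = re.sub(r'\\([^"\\/bfnrtu])', r'\1', raw)
signals = json.loads(fixed)
```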

scripts/resume_sync.py Normal file

@@ -0,0 +1,217 @@
"""
Resume format transform: library ↔ profile.
Converts between:
- Library format: struct_json produced by resume_parser.parse_resume()
{name, email, phone, career_summary, experience[{title,company,start_date,end_date,location,bullets[]}],
education[{institution,degree,field,start_date,end_date}], skills[], achievements[]}
- Profile content format: ResumePayload content fields (plain_text_resume.yaml)
{name, surname, email, phone, career_summary,
experience[{title,company,period,location,industry,responsibilities,skills[]}],
education[{institution,degree,field,start_date,end_date}],
skills[], achievements[]}
Profile metadata fields (salary, work prefs, self-ID, PII) are never touched here.
License: MIT
"""
from __future__ import annotations
from datetime import date
from typing import Any
_CONTENT_FIELDS = frozenset({
"name", "surname", "email", "phone", "career_summary",
"experience", "skills", "education", "achievements",
})
def library_to_profile_content(struct_json: dict[str, Any]) -> dict[str, Any]:
"""Transform a library struct_json to ResumePayload content fields.
Returns only content fields. Caller is responsible for merging with existing
metadata fields (salary, preferences, self-ID) so they are not overwritten.
Lossy for experience[].industry (always blank; the parser does not capture it).
name is split on first space into name/surname.
"""
full_name: str = struct_json.get("name") or ""
parts = full_name.split(" ", 1)
name = parts[0]
surname = parts[1] if len(parts) > 1 else ""
experience = []
for exp in struct_json.get("experience") or []:
start = (exp.get("start_date") or "").strip()
end = (exp.get("end_date") or "").strip()
if start and end:
period = f"{start} \u2013 {end}"
elif start:
period = start
elif end:
period = end
else:
period = ""
bullets: list[str] = exp.get("bullets") or []
responsibilities = "\n".join(b for b in bullets if b)
experience.append({
"title": exp.get("title") or "",
"company": exp.get("company") or "",
"period": period,
"location": exp.get("location") or "",
"industry": "", # not captured by parser
"responsibilities": responsibilities,
"skills": [],
})
education = []
for edu in struct_json.get("education") or []:
education.append({
"institution": edu.get("institution") or "",
"degree": edu.get("degree") or "",
"field": edu.get("field") or "",
"start_date": edu.get("start_date") or "",
"end_date": edu.get("end_date") or "",
})
return {
"name": name,
"surname": surname,
"email": struct_json.get("email") or "",
"phone": struct_json.get("phone") or "",
"career_summary": struct_json.get("career_summary") or "",
"experience": experience,
"skills": list(struct_json.get("skills") or []),
"education": education,
"achievements": list(struct_json.get("achievements") or []),
}
def profile_to_library(payload: dict[str, Any]) -> tuple[str, dict[str, Any]]:
"""Transform ResumePayload content fields to (plain_text, struct_json).
Inverse of library_to_profile_content. The plain_text is a best-effort
reconstruction for display and re-parsing. struct_json is the canonical
structured representation stored in the resumes table.
"""
name_parts = [payload.get("name") or "", payload.get("surname") or ""]
full_name = " ".join(p for p in name_parts if p).strip()
career_summary = (payload.get("career_summary") or "").strip()
lines: list[str] = []
if full_name:
lines.append(full_name)
email = payload.get("email") or ""
phone = payload.get("phone") or ""
if email:
lines.append(email)
if phone:
lines.append(phone)
if career_summary:
lines += ["", "SUMMARY", career_summary]
experience_structs = []
for exp in payload.get("experience") or []:
title = (exp.get("title") or "").strip()
company = (exp.get("company") or "").strip()
period = (exp.get("period") or "").strip()
location = (exp.get("location") or "").strip()
# Split period back to start_date / end_date.
# Split on the en-dash/em-dash separator BEFORE normalising to plain hyphens
# so that ISO dates like "2023-01 – 2025-03" round-trip correctly.
if "\u2013" in period: # en-dash
date_parts = [p.strip() for p in period.split("\u2013", 1)]
elif "\u2014" in period: # em-dash
date_parts = [p.strip() for p in period.split("\u2014", 1)]
else:
date_parts = [period.strip()] if period.strip() else []
start_date = date_parts[0] if date_parts else ""
end_date = date_parts[1] if len(date_parts) > 1 else ""
resp = (exp.get("responsibilities") or "").strip()
bullets = [b.strip() for b in resp.split("\n") if b.strip()]
if title or company:
header = " | ".join(p for p in [title, company, period] if p)
lines += ["", header]
if location:
lines.append(location)
for b in bullets:
lines.append(f"\u2022 {b}")
experience_structs.append({
"title": title,
"company": company,
"start_date": start_date,
"end_date": end_date,
"location": location,
"bullets": bullets,
})
skills: list[str] = list(payload.get("skills") or [])
if skills:
lines += ["", "SKILLS", ", ".join(skills)]
education_structs = []
for edu in payload.get("education") or []:
institution = (edu.get("institution") or "").strip()
degree = (edu.get("degree") or "").strip()
field = (edu.get("field") or "").strip()
start_date = (edu.get("start_date") or "").strip()
end_date = (edu.get("end_date") or "").strip()
if institution or degree:
label = " ".join(p for p in [degree, field] if p)
lines.append(f"{label} \u2014 {institution}" if institution else label)
education_structs.append({
"institution": institution,
"degree": degree,
"field": field,
"start_date": start_date,
"end_date": end_date,
})
achievements: list[str] = list(payload.get("achievements") or [])
struct_json: dict[str, Any] = {
"name": full_name,
"email": email,
"phone": phone,
"career_summary": career_summary,
"experience": experience_structs,
"skills": skills,
"education": education_structs,
"achievements": achievements,
}
plain_text = "\n".join(lines).strip()
return plain_text, struct_json
def make_auto_backup_name(source_name: str) -> str:
"""Generate a timestamped auto-backup name.
Example: "Auto-backup before Senior Engineer Resume — 2026-04-16"
"""
today = date.today().isoformat()
return f"Auto-backup before {source_name} \u2014 {today}"
def blank_fields_on_import(struct_json: dict[str, Any]) -> list[str]:
"""Return content field names that will be blank after a library→profile import.
Used to warn the user in the confirmation modal so they know what to fill in.
"""
blank: list[str] = []
if struct_json.get("experience"):
# industry is always blank — parser never captures it
blank.append("experience[].industry")
# location may be blank for some entries
if any(not (e.get("location") or "").strip() for e in struct_json["experience"]):
blank.append("experience[].location")
return blank
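The period round-trip rule in `profile_to_library` (split on the typographic dash before any hyphen handling) distilled into a standalone sketch:

```python
def split_period(period: str) -> tuple[str, str]:
    # Split on en-/em-dash FIRST so the hyphens inside ISO dates like
    # "2023-01" are never mistaken for the range separator.
    for dash in ("\u2013", "\u2014"):
        if dash in period:
            start, end = (p.strip() for p in period.split(dash, 1))
            return start, end
    # No separator: treat the whole string as the start date.
    return period.strip(), ""
```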


@@ -0,0 +1,86 @@
# MIT License — see LICENSE
"""Survey assistant: prompt builders and LLM inference for culture-fit survey analysis.
Extracted from dev-api.py so task_runner can import this without importing the
FastAPI application. Callable directly or via the survey_analyze background task.
"""
from __future__ import annotations
import json
import logging
from pathlib import Path
from typing import Optional
log = logging.getLogger(__name__)
SURVEY_SYSTEM = (
"You are a job application advisor helping a candidate answer a culture-fit survey. "
"The candidate values collaborative teamwork, clear communication, growth, and impact. "
"Choose answers that present them in the best professional light."
)
def build_text_prompt(text: str, mode: str) -> str:
if mode == "quick":
return (
"Answer each survey question below. For each, give ONLY the letter of the best "
"option and a single-sentence reason. Format exactly as:\n"
"1. B — reason here\n2. A — reason here\n\n"
f"Survey:\n{text}"
)
return (
"Analyze each survey question below. For each question:\n"
"- Briefly evaluate each option (1 sentence each)\n"
"- State your recommendation with reasoning\n\n"
f"Survey:\n{text}"
)
def build_image_prompt(mode: str) -> str:
if mode == "quick":
return (
"This is a screenshot of a culture-fit survey. Read all questions and answer each "
"with the letter of the best option for a collaborative, growth-oriented candidate. "
"Format: '1. B — brief reason' on separate lines."
)
return (
"This is a screenshot of a culture-fit survey. For each question, evaluate each option "
"and recommend the best choice for a collaborative, growth-oriented candidate. "
"Include a brief breakdown per option and a clear recommendation."
)
def run_survey_analyze(
text: Optional[str],
image_b64: Optional[str],
mode: str,
config_path: Optional[Path] = None,
) -> dict:
"""Run LLM inference for survey analysis.
Returns {"output": str, "source": "text_paste" | "screenshot"}.
Raises on LLM failure; the caller is responsible for error handling.
"""
from scripts.llm_router import LLMRouter
router = LLMRouter(config_path=config_path) if config_path else LLMRouter()
if image_b64:
prompt = build_image_prompt(mode)
output = router.complete(
prompt,
images=[image_b64],
fallback_order=router.config.get("vision_fallback_order"),
)
source = "screenshot"
else:
prompt = build_text_prompt(text or "", mode)
output = router.complete(
prompt,
system=SURVEY_SYSTEM,
fallback_order=router.config.get("research_fallback_order"),
)
source = "text_paste"
return {"output": output, "source": source}


@@ -404,6 +404,24 @@ def _run_task(db_path: Path, task_id: int, task_type: str, job_id: int,
save_optimized_resume(db_path, job_id=job_id,
text="", gap_report=gap_report)
elif task_type == "survey_analyze":
import json as _json
from scripts.survey_assistant import run_survey_analyze
p = _json.loads(params or "{}")
_cfg_path = Path(db_path).parent / "config" / "llm.yaml"
update_task_stage(db_path, task_id, "analyzing survey")
result = run_survey_analyze(
text=p.get("text"),
image_b64=p.get("image_b64"),
mode=p.get("mode", "quick"),
config_path=_cfg_path if _cfg_path.exists() else None,
)
update_task_status(
db_path, task_id, "completed",
error=_json.dumps(result),
)
return
elif task_type == "prepare_training":
from scripts.prepare_training_data import build_records, write_jsonl, DEFAULT_OUTPUT
records = build_records()


@@ -34,6 +34,7 @@ LLM_TASK_TYPES: frozenset[str] = frozenset({
"company_research",
"wizard_generate",
"resume_optimize",
"survey_analyze",
})
# Conservative peak VRAM estimates (GB) per task type.
@@ -43,6 +44,7 @@ DEFAULT_VRAM_BUDGETS: dict[str, float] = {
"company_research": 5.0, # llama3.1:8b or vllm model
"wizard_generate": 2.5, # same model family as cover_letter
"resume_optimize": 5.0, # section-by-section rewrite; same budget as research
"survey_analyze": 2.5, # text: phi3:mini; visual: vision service (own VRAM pool)
}
_DEFAULT_MAX_QUEUE_DEPTH = 500


@@ -19,7 +19,8 @@ def tmp_db(tmp_path):
match_score REAL, keyword_gaps TEXT, status TEXT,
interview_date TEXT, rejection_stage TEXT,
applied_at TEXT, phone_screen_at TEXT, interviewing_at TEXT,
offer_at TEXT, hired_at TEXT, survey_at TEXT
offer_at TEXT, hired_at TEXT, survey_at TEXT,
hired_feedback TEXT
);
CREATE TABLE job_contacts (
id INTEGER PRIMARY KEY,


@@ -1,18 +1,36 @@
"""Tests for survey endpoints: vision health, analyze, save response, get history."""
"""Tests for survey endpoints: vision health, async analyze task queue, save response, history."""
import json
import sqlite3
import pytest
from unittest.mock import patch, MagicMock
from fastapi.testclient import TestClient
from scripts.db_migrate import migrate_db
@pytest.fixture
def client():
import sys
sys.path.insert(0, "/Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa")
from dev_api import app
return TestClient(app)
def fresh_db(tmp_path, monkeypatch):
"""Isolated DB + dev_api wired to it via _request_db and DB_PATH."""
db = tmp_path / "test.db"
migrate_db(db)
monkeypatch.setenv("STAGING_DB", str(db))
import dev_api
monkeypatch.setattr(dev_api, "DB_PATH", str(db))
monkeypatch.setattr(
dev_api,
"_request_db",
type("CV", (), {"get": lambda self: str(db), "set": lambda *a: None})(),
)
return db
# ── GET /api/vision/health ───────────────────────────────────────────────────
@pytest.fixture
def client(fresh_db):
import dev_api
return TestClient(dev_api.app)
# ── GET /api/vision/health ────────────────────────────────────────────────────
def test_vision_health_available(client):
"""Returns available=true when vision service responds 200."""
@@ -32,133 +50,182 @@ def test_vision_health_unavailable(client):
assert resp.json() == {"available": False}
# ── POST /api/jobs/{id}/survey/analyze ──────────────────────────────────────
def test_analyze_queues_task_and_returns_task_id(client):
"""POST analyze queues a background task and returns task_id + is_new."""
with patch("scripts.task_runner.submit_task", return_value=(42, True)) as mock_submit:
resp = client.post("/api/jobs/1/survey/analyze", json={
"text": "Q1: Do you prefer teamwork?\nA. Solo B. Together",
"mode": "quick",
})
assert resp.status_code == 200
data = resp.json()
assert data["task_id"] == 42
assert data["is_new"] is True
# submit_task called with survey_analyze type
call_kwargs = mock_submit.call_args
assert call_kwargs.kwargs["task_type"] == "survey_analyze"
assert call_kwargs.kwargs["job_id"] == 1
params = json.loads(call_kwargs.kwargs["params"])
assert params["mode"] == "quick"
assert params["text"] == "Q1: Do you prefer teamwork?\nA. Solo B. Together"
def test_analyze_silently_attaches_to_existing_task(client):
"""is_new=False when task already running for same input."""
with patch("scripts.task_runner.submit_task", return_value=(7, False)):
resp = client.post("/api/jobs/1/survey/analyze", json={
"text": "Q1: test", "mode": "quick",
})
assert resp.status_code == 200
assert resp.json()["is_new"] is False
def test_analyze_invalid_mode_returns_400(client):
resp = client.post("/api/jobs/1/survey/analyze", json={"text": "Q1: test", "mode": "wrong"})
assert resp.status_code == 400
def test_analyze_image_mode_passes_image_in_params(client):
"""Image payload is forwarded in task params."""
with patch("scripts.task_runner.submit_task", return_value=(1, True)) as mock_submit:
resp = client.post("/api/jobs/1/survey/analyze", json={
"image_b64": "aGVsbG8=",
"mode": "quick",
})
assert resp.status_code == 200
params = json.loads(mock_submit.call_args.kwargs["params"])
assert params["image_b64"] == "aGVsbG8="
assert params["text"] is None
# ── GET /api/jobs/{id}/survey/analyze/task ────────────────────────────────────
def test_task_poll_completed_text(client, fresh_db):
"""Completed task with text result returns parsed source + output."""
result_json = json.dumps({"output": "1. B — best option", "source": "text_paste"})
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, error) VALUES (?,?,?,?)",
("survey_analyze", 1, "completed", result_json),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
data = resp.json()
assert data["status"] == "completed"
assert data["result"]["source"] == "text_paste"
assert "B" in data["result"]["output"]
assert data["message"] is None
def test_task_poll_completed_screenshot(client, fresh_db):
"""Completed task with image result returns source=screenshot."""
result_json = json.dumps({"output": "1. C — collaborative", "source": "screenshot"})
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, error) VALUES (?,?,?,?)",
("survey_analyze", 1, "completed", result_json),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
assert resp.json()["result"]["source"] == "screenshot"
def test_task_poll_failed_returns_message(client, fresh_db):
"""Failed task returns status=failed with error message."""
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, error) VALUES (?,?,?,?)",
("survey_analyze", 1, "failed", "LLM unavailable"),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
data = resp.json()
assert data["status"] == "failed"
assert data["message"] == "LLM unavailable"
assert data["result"] is None
def test_task_poll_running_returns_stage(client, fresh_db):
"""Running task returns status=running with current stage."""
con = sqlite3.connect(fresh_db)
con.execute(
"INSERT INTO background_tasks (task_type, job_id, status, stage) VALUES (?,?,?,?)",
("survey_analyze", 1, "running", "analyzing survey"),
)
task_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
con.commit(); con.close()
resp = client.get(f"/api/jobs/1/survey/analyze/task?task_id={task_id}")
assert resp.status_code == 200
data = resp.json()
assert data["status"] == "running"
assert data["stage"] == "analyzing survey"
def test_task_poll_none_when_no_task(client):
"""Returns status=none when no task exists for the job."""
resp = client.get("/api/jobs/999/survey/analyze/task")
assert resp.status_code == 200
assert resp.json()["status"] == "none"
# ── POST /api/jobs/{id}/survey/responses ─────────────────────────────────────
def test_save_response_text(client):
"""Save a text-mode survey response returns an id."""
resp = client.post("/api/jobs/1/survey/responses", json={
"survey_name": "Culture Fit",
"mode": "quick",
"source": "text_paste",
"raw_input": "Q1: Teamwork?",
"llm_output": "1. B is best",
"reported_score": "85",
})
assert resp.status_code == 200
assert "id" in resp.json()
def test_save_response_with_image(client):
"""Save a screenshot-mode survey response returns an id."""
resp = client.post("/api/jobs/1/survey/responses", json={
"survey_name": None,
"mode": "quick",
"source": "screenshot",
"image_b64": "aGVsbG8=",
"llm_output": "1. C collaborative",
"reported_score": None,
})
assert resp.status_code == 200
assert "id" in resp.json()
# ── GET /api/jobs/{id}/survey/responses ─────────────────────────────────────
def test_get_history_empty(client):
"""History is empty for a fresh job."""
resp = client.get("/api/jobs/1/survey/responses")
assert resp.status_code == 200
assert resp.json() == []
def test_get_history_populated(client):
"""History returns all saved responses for a job in reverse order."""
for i in range(2):
client.post("/api/jobs/1/survey/responses", json={
"survey_name": f"Survey {i}",
"mode": "quick",
"source": "text_paste",
"llm_output": f"Output {i}",
})
resp = client.get("/api/jobs/1/survey/responses")
assert resp.status_code == 200
assert len(resp.json()) == 2
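The polling contract these tests pin down (status `none` / `running` / `failed` / `completed`, with `stage`, `message`, and `result` fields) can be exercised from a client with a small loop. The following is a sketch only; the `fetch_json` callable, the function name, and the timing parameters are assumptions, not part of the diff.

```python
import time

def poll_survey_analysis(fetch_json, job_id: int, task_id: int,
                         interval: float = 1.0, timeout: float = 120.0) -> dict:
    """Poll the analyze-task endpoint until the task completes or fails.

    `fetch_json(url)` is assumed to GET the URL and return the parsed
    JSON body, matching the shapes asserted in the tests above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = fetch_json(f"/api/jobs/{job_id}/survey/analyze/task?task_id={task_id}")
        if data["status"] == "completed":
            return data["result"]  # {"output": ..., "source": ...}
        if data["status"] == "failed":
            raise RuntimeError(data["message"] or "analysis failed")
        # "running" carries a human-readable stage; "none" means the task
        # is not yet visible. Either way, wait and retry.
        time.sleep(interval)
    raise TimeoutError("survey analysis did not finish in time")
```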

tests/test_messaging.py (new file, 399 lines)

@@ -0,0 +1,399 @@
"""
Unit tests for scripts/messaging.py DB helpers for messages and message_templates.
TDD approach: tests written before implementation.
"""
import sqlite3
from pathlib import Path
import pytest
# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
def _apply_migration_008(db_path: Path) -> None:
"""Apply migration 008 directly so tests run without the full migrate_db stack."""
migration = (
Path(__file__).parent.parent / "migrations" / "008_messaging.sql"
)
sql = migration.read_text(encoding="utf-8")
con = sqlite3.connect(db_path)
try:
# Create jobs table stub so FK references don't break
con.execute("""
CREATE TABLE IF NOT EXISTS jobs (
id INTEGER PRIMARY KEY AUTOINCREMENT,
title TEXT
)
""")
con.execute("""
CREATE TABLE IF NOT EXISTS job_contacts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
job_id INTEGER
)
""")
# Execute migration statements
statements = [s.strip() for s in sql.split(";") if s.strip()]
for stmt in statements:
stripped = "\n".join(
ln for ln in stmt.splitlines() if not ln.strip().startswith("--")
).strip()
if stripped:
con.execute(stripped)
con.commit()
finally:
con.close()
@pytest.fixture()
def db_path(tmp_path: Path) -> Path:
"""Temporary SQLite DB with migration 008 applied."""
path = tmp_path / "test.db"
_apply_migration_008(path)
return path
@pytest.fixture()
def job_id(db_path: Path) -> int:
"""Insert a dummy job and return its id."""
con = sqlite3.connect(db_path)
try:
cur = con.execute("INSERT INTO jobs (title) VALUES ('Test Job')")
con.commit()
return cur.lastrowid
finally:
con.close()
# ---------------------------------------------------------------------------
# Message tests
# ---------------------------------------------------------------------------
class TestCreateMessage:
def test_create_returns_dict(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import create_message
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Hello",
body="Body text",
from_addr="me@example.com",
to_addr="them@example.com",
template_id=None,
)
assert isinstance(msg, dict)
assert msg["subject"] == "Hello"
assert msg["body"] == "Body text"
assert msg["direction"] == "outbound"
assert msg["type"] == "email"
assert "id" in msg
assert msg["id"] > 0
def test_create_persists_to_db(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import create_message
create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Persisted",
body="Stored body",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
con = sqlite3.connect(db_path)
try:
row = con.execute(
"SELECT subject FROM messages WHERE subject='Persisted'"
).fetchone()
assert row is not None
finally:
con.close()
class TestListMessages:
def _make_message(
self,
db_path: Path,
job_id: int,
*,
type: str = "email",
direction: str = "outbound",
subject: str = "Subject",
) -> dict:
from scripts.messaging import create_message
return create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type=type,
direction=direction,
subject=subject,
body="body",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
def test_list_returns_all_messages(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
self._make_message(db_path, job_id, subject="First")
self._make_message(db_path, job_id, subject="Second")
result = list_messages(db_path)
assert len(result) == 2
def test_list_filtered_by_job_id(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
# Create a second job
con = sqlite3.connect(db_path)
try:
cur = con.execute("INSERT INTO jobs (title) VALUES ('Other Job')")
con.commit()
other_job_id = cur.lastrowid
finally:
con.close()
self._make_message(db_path, job_id, subject="For job 1")
self._make_message(db_path, other_job_id, subject="For job 2")
result = list_messages(db_path, job_id=job_id)
assert len(result) == 1
assert result[0]["subject"] == "For job 1"
def test_list_filtered_by_type(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
self._make_message(db_path, job_id, type="email", subject="Email msg")
self._make_message(db_path, job_id, type="sms", subject="SMS msg")
emails = list_messages(db_path, type="email")
assert len(emails) == 1
assert emails[0]["type"] == "email"
def test_list_filtered_by_direction(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
self._make_message(db_path, job_id, direction="outbound")
self._make_message(db_path, job_id, direction="inbound")
outbound = list_messages(db_path, direction="outbound")
assert len(outbound) == 1
assert outbound[0]["direction"] == "outbound"
def test_list_respects_limit(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import list_messages
for i in range(5):
self._make_message(db_path, job_id, subject=f"Msg {i}")
result = list_messages(db_path, limit=3)
assert len(result) == 3
class TestDeleteMessage:
def test_delete_removes_message(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import create_message, delete_message, list_messages
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="To delete",
body="bye",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
delete_message(db_path, msg["id"])
assert list_messages(db_path) == []
def test_delete_raises_key_error_when_not_found(self, db_path: Path) -> None:
from scripts.messaging import delete_message
with pytest.raises(KeyError):
delete_message(db_path, 99999)
class TestApproveMessage:
def test_approve_sets_approved_at(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import approve_message, create_message
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Draft",
body="Draft body",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
assert msg.get("approved_at") is None
updated = approve_message(db_path, msg["id"])
assert updated["approved_at"] is not None
assert updated["id"] == msg["id"]
def test_approve_returns_full_dict(self, db_path: Path, job_id: int) -> None:
from scripts.messaging import approve_message, create_message
msg = create_message(
db_path,
job_id=job_id,
job_contact_id=None,
type="email",
direction="outbound",
subject="Draft",
body="Body here",
from_addr="a@b.com",
to_addr="c@d.com",
template_id=None,
)
updated = approve_message(db_path, msg["id"])
assert updated["body"] == "Body here"
assert updated["subject"] == "Draft"
def test_approve_raises_key_error_when_not_found(self, db_path: Path) -> None:
from scripts.messaging import approve_message
with pytest.raises(KeyError):
approve_message(db_path, 99999)
# ---------------------------------------------------------------------------
# Template tests
# ---------------------------------------------------------------------------
class TestListTemplates:
def test_includes_four_builtins(self, db_path: Path) -> None:
from scripts.messaging import list_templates
templates = list_templates(db_path)
builtin_keys = {t["key"] for t in templates if t["is_builtin"]}
assert builtin_keys == {
"follow_up",
"thank_you",
"accommodation_request",
"withdrawal",
}
def test_returns_list_of_dicts(self, db_path: Path) -> None:
from scripts.messaging import list_templates
templates = list_templates(db_path)
assert isinstance(templates, list)
assert all(isinstance(t, dict) for t in templates)
class TestCreateTemplate:
def test_create_returns_dict(self, db_path: Path) -> None:
from scripts.messaging import create_template
tmpl = create_template(
db_path,
title="My Template",
category="custom",
subject_template="Hello {{name}}",
body_template="Dear {{name}}, ...",
)
assert isinstance(tmpl, dict)
assert tmpl["title"] == "My Template"
assert tmpl["category"] == "custom"
assert tmpl["is_builtin"] == 0
assert "id" in tmpl
def test_create_default_category(self, db_path: Path) -> None:
from scripts.messaging import create_template
tmpl = create_template(
db_path,
title="No Category",
body_template="Body",
)
assert tmpl["category"] == "custom"
def test_create_appears_in_list(self, db_path: Path) -> None:
from scripts.messaging import create_template, list_templates
create_template(db_path, title="Listed", body_template="Body")
titles = [t["title"] for t in list_templates(db_path)]
assert "Listed" in titles
class TestUpdateTemplate:
def test_update_user_template(self, db_path: Path) -> None:
from scripts.messaging import create_template, update_template
tmpl = create_template(db_path, title="Original", body_template="Old body")
updated = update_template(db_path, tmpl["id"], title="Updated", body_template="New body")
assert updated["title"] == "Updated"
assert updated["body_template"] == "New body"
def test_update_returns_persisted_values(self, db_path: Path) -> None:
from scripts.messaging import create_template, list_templates, update_template
tmpl = create_template(db_path, title="Before", body_template="x")
update_template(db_path, tmpl["id"], title="After")
templates = list_templates(db_path)
titles = [t["title"] for t in templates]
assert "After" in titles
assert "Before" not in titles
def test_update_builtin_raises_permission_error(self, db_path: Path) -> None:
from scripts.messaging import list_templates, update_template
builtin = next(t for t in list_templates(db_path) if t["is_builtin"])
with pytest.raises(PermissionError):
update_template(db_path, builtin["id"], title="Hacked")
def test_update_missing_raises_key_error(self, db_path):
from scripts.messaging import update_template
with pytest.raises(KeyError):
update_template(db_path, 9999, title="Ghost")
class TestDeleteTemplate:
def test_delete_user_template(self, db_path: Path) -> None:
from scripts.messaging import create_template, delete_template, list_templates
tmpl = create_template(db_path, title="To Delete", body_template="bye")
initial_count = len(list_templates(db_path))
delete_template(db_path, tmpl["id"])
assert len(list_templates(db_path)) == initial_count - 1
def test_delete_builtin_raises_permission_error(self, db_path: Path) -> None:
from scripts.messaging import delete_template, list_templates
builtin = next(t for t in list_templates(db_path) if t["is_builtin"])
with pytest.raises(PermissionError):
delete_template(db_path, builtin["id"])
def test_delete_missing_raises_key_error(self, db_path: Path) -> None:
from scripts.messaging import delete_template
with pytest.raises(KeyError):
delete_template(db_path, 99999)


@@ -0,0 +1,195 @@
"""Integration tests for messaging endpoints."""
import os
from pathlib import Path
import pytest
from fastapi.testclient import TestClient
from scripts.db_migrate import migrate_db
@pytest.fixture
def fresh_db(tmp_path, monkeypatch):
"""Set up a fresh isolated DB wired to dev_api._request_db."""
db = tmp_path / "test.db"
monkeypatch.setenv("STAGING_DB", str(db))
migrate_db(db)
import dev_api
monkeypatch.setattr(
dev_api,
"_request_db",
type("CV", (), {"get": lambda self: str(db), "set": lambda *a: None})(),
)
monkeypatch.setattr(dev_api, "DB_PATH", str(db))
return db
@pytest.fixture
def client(fresh_db):
import dev_api
return TestClient(dev_api.app)
# ---------------------------------------------------------------------------
# Messages
# ---------------------------------------------------------------------------
def test_create_and_list_message(client):
"""POST /api/messages creates a row; GET /api/messages?job_id= returns it."""
payload = {
"job_id": 1,
"type": "email",
"direction": "outbound",
"subject": "Hello recruiter",
"body": "I am very interested in this role.",
"to_addr": "recruiter@example.com",
}
resp = client.post("/api/messages", json=payload)
assert resp.status_code == 200, resp.text
created = resp.json()
assert created["subject"] == "Hello recruiter"
assert created["job_id"] == 1
resp = client.get("/api/messages", params={"job_id": 1})
assert resp.status_code == 200
messages = resp.json()
assert any(m["id"] == created["id"] for m in messages)
def test_delete_message(client):
"""DELETE removes the message; subsequent GET no longer returns it."""
resp = client.post("/api/messages", json={"type": "email", "direction": "outbound", "body": "bye"})
assert resp.status_code == 200
msg_id = resp.json()["id"]
resp = client.delete(f"/api/messages/{msg_id}")
assert resp.status_code == 200
assert resp.json()["ok"] is True
resp = client.get("/api/messages")
assert resp.status_code == 200
ids = [m["id"] for m in resp.json()]
assert msg_id not in ids
def test_delete_message_not_found(client):
"""DELETE /api/messages/9999 returns 404."""
resp = client.delete("/api/messages/9999")
assert resp.status_code == 404
# ---------------------------------------------------------------------------
# Templates
# ---------------------------------------------------------------------------
def test_list_templates_has_builtins(client):
"""GET /api/message-templates includes the seeded built-in keys."""
resp = client.get("/api/message-templates")
assert resp.status_code == 200
templates = resp.json()
keys = {t["key"] for t in templates}
assert "follow_up" in keys
assert "thank_you" in keys
def test_template_create_update_delete(client):
"""Full lifecycle: create → update title → delete a user-defined template."""
# Create
resp = client.post("/api/message-templates", json={
"title": "My Template",
"category": "custom",
"body_template": "Hello {{name}}",
})
assert resp.status_code == 200
tmpl = resp.json()
assert tmpl["title"] == "My Template"
assert tmpl["is_builtin"] == 0
tmpl_id = tmpl["id"]
# Update title
resp = client.put(f"/api/message-templates/{tmpl_id}", json={"title": "Updated Title"})
assert resp.status_code == 200
assert resp.json()["title"] == "Updated Title"
# Delete
resp = client.delete(f"/api/message-templates/{tmpl_id}")
assert resp.status_code == 200
assert resp.json()["ok"] is True
# Confirm gone
resp = client.get("/api/message-templates")
ids = [t["id"] for t in resp.json()]
assert tmpl_id not in ids
def test_builtin_template_put_returns_403(client):
"""PUT on a built-in template returns 403."""
resp = client.get("/api/message-templates")
builtin = next(t for t in resp.json() if t["is_builtin"] == 1)
resp = client.put(f"/api/message-templates/{builtin['id']}", json={"title": "Hacked"})
assert resp.status_code == 403
def test_builtin_template_delete_returns_403(client):
"""DELETE on a built-in template returns 403."""
resp = client.get("/api/message-templates")
builtin = next(t for t in resp.json() if t["is_builtin"] == 1)
resp = client.delete(f"/api/message-templates/{builtin['id']}")
assert resp.status_code == 403
# ---------------------------------------------------------------------------
# Draft reply (tier gate)
# ---------------------------------------------------------------------------
def test_draft_without_llm_returns_402(fresh_db, monkeypatch):
"""POST /api/contacts/{id}/draft-reply with free tier + no LLM configured returns 402."""
import dev_api
from scripts.db import add_contact
# Insert a job_contacts row via the db helper so schema changes stay in sync
contact_id = add_contact(
fresh_db,
job_id=None,
direction="inbound",
subject="Test subject",
from_addr="hr@example.com",
body="We would like to schedule...",
)
# Ensure has_configured_llm returns False at both import locations
monkeypatch.setattr("app.wizard.tiers.has_configured_llm", lambda *a, **kw: False)
# Force free tier via the tiers module (not via header — header is no longer trusted)
monkeypatch.setattr("app.wizard.tiers.effective_tier", lambda: "free")
client = TestClient(dev_api.app)
resp = client.post(f"/api/contacts/{contact_id}/draft-reply")
assert resp.status_code == 402
# ---------------------------------------------------------------------------
# Approve
# ---------------------------------------------------------------------------
def test_approve_message(client):
"""POST /api/messages then POST /api/messages/{id}/approve returns body + approved_at."""
resp = client.post("/api/messages", json={
"type": "draft",
"direction": "outbound",
"body": "This is my draft reply.",
})
assert resp.status_code == 200
msg_id = resp.json()["id"]
assert resp.json()["approved_at"] is None
resp = client.post(f"/api/messages/{msg_id}/approve")
assert resp.status_code == 200
data = resp.json()
assert data["body"] == "This is my draft reply."
assert data["approved_at"] is not None
def test_approve_message_not_found(client):
"""POST /api/messages/9999/approve returns 404."""
resp = client.post("/api/messages/9999/approve")
assert resp.status_code == 404

tests/test_resume_sync.py (new file, 207 lines)

@@ -0,0 +1,207 @@
"""Unit tests for scripts.resume_sync — format transform between library and profile."""
import json
import pytest
from scripts.resume_sync import (
library_to_profile_content,
profile_to_library,
make_auto_backup_name,
blank_fields_on_import,
)
# ── Fixtures ──────────────────────────────────────────────────────────────────
STRUCT_JSON = {
"name": "Alex Rivera",
"email": "alex@example.com",
"phone": "555-0100",
"career_summary": "Senior UX Designer with 6 years experience.",
"experience": [
{
"title": "Senior UX Designer",
"company": "StreamNote",
"start_date": "2023",
"end_date": "present",
"location": "Remote",
"bullets": ["Led queue redesign", "Built component library"],
}
],
"education": [
{
"institution": "State University",
"degree": "B.F.A.",
"field": "Graphic Design",
"start_date": "2015",
"end_date": "2019",
}
],
"skills": ["Figma", "User Research"],
"achievements": ["Design award 2024"],
}
PROFILE_PAYLOAD = {
"name": "Alex",
"surname": "Rivera",
"email": "alex@example.com",
"phone": "555-0100",
"career_summary": "Senior UX Designer with 6 years experience.",
"experience": [
{
"title": "Senior UX Designer",
"company": "StreamNote",
"period": "2023 present",
"location": "Remote",
"industry": "",
"responsibilities": "Led queue redesign\nBuilt component library",
"skills": [],
}
],
"education": [
{
"institution": "State University",
"degree": "B.F.A.",
"field": "Graphic Design",
"start_date": "2015",
"end_date": "2019",
}
],
"skills": ["Figma", "User Research"],
"achievements": ["Design award 2024"],
}
# ── library_to_profile_content ────────────────────────────────────────────────
def test_library_to_profile_splits_name():
result = library_to_profile_content(STRUCT_JSON)
assert result["name"] == "Alex"
assert result["surname"] == "Rivera"
def test_library_to_profile_single_word_name():
result = library_to_profile_content({**STRUCT_JSON, "name": "Cher"})
assert result["name"] == "Cher"
assert result["surname"] == ""
def test_library_to_profile_email_phone():
result = library_to_profile_content(STRUCT_JSON)
assert result["email"] == "alex@example.com"
assert result["phone"] == "555-0100"
def test_library_to_profile_career_summary():
result = library_to_profile_content(STRUCT_JSON)
assert result["career_summary"] == "Senior UX Designer with 6 years experience."
def test_library_to_profile_experience_period():
result = library_to_profile_content(STRUCT_JSON)
assert result["experience"][0]["period"] == "2023 present"
def test_library_to_profile_experience_bullets_joined():
result = library_to_profile_content(STRUCT_JSON)
assert result["experience"][0]["responsibilities"] == "Led queue redesign\nBuilt component library"
def test_library_to_profile_experience_industry_blank():
result = library_to_profile_content(STRUCT_JSON)
assert result["experience"][0]["industry"] == ""
def test_library_to_profile_education():
result = library_to_profile_content(STRUCT_JSON)
assert result["education"][0]["institution"] == "State University"
assert result["education"][0]["degree"] == "B.F.A."
def test_library_to_profile_skills():
result = library_to_profile_content(STRUCT_JSON)
assert result["skills"] == ["Figma", "User Research"]
def test_library_to_profile_achievements():
result = library_to_profile_content(STRUCT_JSON)
assert result["achievements"] == ["Design award 2024"]
def test_library_to_profile_missing_fields_no_keyerror():
result = library_to_profile_content({})
assert result["name"] == ""
assert result["experience"] == []
assert result["education"] == []
assert result["skills"] == []
assert result["achievements"] == []
# ── profile_to_library ────────────────────────────────────────────────────────
def test_profile_to_library_full_name():
text, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["name"] == "Alex Rivera"
def test_profile_to_library_experience_bullets_reconstructed():
_, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["experience"][0]["bullets"] == ["Led queue redesign", "Built component library"]
def test_profile_to_library_period_split():
_, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["experience"][0]["start_date"] == "2023"
assert struct["experience"][0]["end_date"] == "present"
def test_profile_to_library_period_split_iso_dates():
"""ISO dates (with hyphens) must round-trip through the period field correctly."""
payload = {
**PROFILE_PAYLOAD,
"experience": [{
**PROFILE_PAYLOAD["experience"][0],
"period": "2023-01 \u2013 2025-03",
}],
}
_, struct = profile_to_library(payload)
assert struct["experience"][0]["start_date"] == "2023-01"
assert struct["experience"][0]["end_date"] == "2025-03"
def test_profile_to_library_period_split_em_dash():
"""Em-dash separator is also handled."""
payload = {
**PROFILE_PAYLOAD,
"experience": [{
**PROFILE_PAYLOAD["experience"][0],
"period": "2022-06 \u2014 2023-12",
}],
}
_, struct = profile_to_library(payload)
assert struct["experience"][0]["start_date"] == "2022-06"
assert struct["experience"][0]["end_date"] == "2023-12"
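Taken together, the two dash tests pin down the splitter's contract: split on an en dash or em dash separator (with optional surrounding whitespace), never on a bare hyphen, or ISO dates like `2023-01` would be cut in half. A minimal sketch of such a splitter (hypothetical; the real helper inside `profile_to_library` may differ):

```python
import re

# Split only on en dash (U+2013) or em dash (U+2014), with optional
# surrounding whitespace -- never on "-", which appears inside ISO dates.
_PERIOD_SEP = re.compile(r"\s*[\u2013\u2014]\s*")

def split_period(period: str) -> tuple[str, str]:
    """Return (start_date, end_date); end is empty if no separator is found."""
    parts = _PERIOD_SEP.split(period.strip(), maxsplit=1)
    return (parts[0], parts[1]) if len(parts) == 2 else (parts[0], "")
```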
def test_profile_to_library_education_round_trip():
_, struct = profile_to_library(PROFILE_PAYLOAD)
assert struct["education"][0]["institution"] == "State University"
def test_profile_to_library_plain_text_contains_name():
text, _ = profile_to_library(PROFILE_PAYLOAD)
assert "Alex Rivera" in text
def test_profile_to_library_plain_text_contains_summary():
text, _ = profile_to_library(PROFILE_PAYLOAD)
assert "Senior UX Designer" in text
def test_profile_to_library_empty_payload_no_crash():
text, struct = profile_to_library({})
assert isinstance(text, str)
assert isinstance(struct, dict)
# ── make_auto_backup_name ─────────────────────────────────────────────────────
def test_backup_name_format():
name = make_auto_backup_name("Senior Engineer Resume")
import re
assert re.match(r"Auto-backup before Senior Engineer Resume — \d{4}-\d{2}-\d{2}", name)
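The format the regex pins down could be produced by a helper along these lines (hypothetical sketch; the real `make_auto_backup_name` may differ in details):

```python
from datetime import date

def make_auto_backup_name(resume_name: str) -> str:
    # "Auto-backup before <name> <em dash> YYYY-MM-DD", matching the regex above.
    return f"Auto-backup before {resume_name} \u2014 {date.today():%Y-%m-%d}"
```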
# ── blank_fields_on_import ────────────────────────────────────────────────────
def test_blank_fields_industry_always_listed():
result = blank_fields_on_import(STRUCT_JSON)
assert "experience[].industry" in result
def test_blank_fields_location_listed_when_missing():
no_loc = {**STRUCT_JSON, "experience": [{**STRUCT_JSON["experience"][0], "location": ""}]}
result = blank_fields_on_import(no_loc)
assert "experience[].location" in result
def test_blank_fields_location_not_listed_when_present():
result = blank_fields_on_import(STRUCT_JSON)
assert "experience[].location" not in result



@@ -0,0 +1,134 @@
"""Integration tests for resume library<->profile sync endpoints."""
import json
import os
from pathlib import Path
import pytest
import yaml
from fastapi.testclient import TestClient
from scripts.db import create_resume, get_resume, list_resumes
from scripts.db_migrate import migrate_db
STRUCT_JSON = {
"name": "Alex Rivera",
"email": "alex@example.com",
"phone": "555-0100",
"career_summary": "Senior UX Designer.",
"experience": [{"title": "Designer", "company": "Acme", "start_date": "2022",
"end_date": "present", "location": "Remote", "bullets": ["Led redesign"]}],
"education": [{"institution": "State U", "degree": "B.A.", "field": "Design",
"start_date": "2016", "end_date": "2020"}],
"skills": ["Figma"],
"achievements": ["Design award"],
}
@pytest.fixture
def fresh_db(tmp_path, monkeypatch):
"""Set up a fresh isolated DB + config dir, wired to dev_api._request_db."""
db = tmp_path / "test.db"
cfg = tmp_path / "config"
cfg.mkdir()
# STAGING_DB drives _user_yaml_path() -> dirname(db)/config/user.yaml
monkeypatch.setenv("STAGING_DB", str(db))
migrate_db(db)
import dev_api
monkeypatch.setattr(
dev_api,
"_request_db",
type("CV", (), {"get": lambda self: str(db), "set": lambda *a: None})(),
)
return db, cfg
def test_apply_to_profile_updates_yaml(fresh_db, monkeypatch):
db, cfg = fresh_db
import dev_api
client = TestClient(dev_api.app)
entry = create_resume(db, name="Test Resume",
text="Alex Rivera\n", source="uploaded",
struct_json=json.dumps(STRUCT_JSON))
resp = client.post(f"/api/resumes/{entry['id']}/apply-to-profile")
assert resp.status_code == 200
data = resp.json()
assert data["ok"] is True
assert "backup_id" in data
assert "Auto-backup before Test Resume" in data["backup_name"]
profile_yaml = cfg / "plain_text_resume.yaml"
assert profile_yaml.exists()
profile = yaml.safe_load(profile_yaml.read_text())
assert profile["career_summary"] == "Senior UX Designer."
# Name split: "Alex Rivera" -> name="Alex", surname="Rivera"
assert profile["name"] == "Alex"
assert profile["surname"] == "Rivera"
assert profile["education"][0]["institution"] == "State U"
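The name-split comment above implies a first-token/remainder split; a sketch of that rule (hypothetical helper name):

```python
def split_full_name(full: str) -> tuple[str, str]:
    # First whitespace-delimited token -> name, remainder -> surname.
    parts = full.strip().split(maxsplit=1)
    if not parts:
        return "", ""
    return (parts[0], parts[1]) if len(parts) == 2 else (parts[0], "")
```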
def test_apply_to_profile_creates_backup(fresh_db, monkeypatch):
db, cfg = fresh_db
profile_path = cfg / "plain_text_resume.yaml"
profile_path.write_text(yaml.dump({"name": "Old Name", "career_summary": "Old summary"}))
entry = create_resume(db, name="New Resume",
text="Alex Rivera\n", source="uploaded",
struct_json=json.dumps(STRUCT_JSON))
import dev_api
client = TestClient(dev_api.app)
client.post(f"/api/resumes/{entry['id']}/apply-to-profile")
resumes = list_resumes(db_path=db)
backup = next((r for r in resumes if r["source"] == "auto_backup"), None)
assert backup is not None
def test_apply_to_profile_preserves_metadata(fresh_db, monkeypatch):
db, cfg = fresh_db
profile_path = cfg / "plain_text_resume.yaml"
profile_path.write_text(yaml.dump({
"name": "Old", "salary_min": 80000, "salary_max": 120000,
"remote": True, "gender": "non-binary",
}))
entry = create_resume(db, name="New",
text="Alex\n", source="uploaded",
struct_json=json.dumps(STRUCT_JSON))
import dev_api
client = TestClient(dev_api.app)
client.post(f"/api/resumes/{entry['id']}/apply-to-profile")
profile = yaml.safe_load(profile_path.read_text())
assert profile["salary_min"] == 80000
assert profile["remote"] is True
assert profile["gender"] == "non-binary"
def test_save_resume_syncs_to_default_library_entry(fresh_db, monkeypatch):
db, cfg = fresh_db
entry = create_resume(db, name="My Resume",
text="Original", source="manual")
user_yaml = cfg / "user.yaml"
user_yaml.write_text(yaml.dump({"default_resume_id": entry["id"], "wizard_complete": True}))
import dev_api
client = TestClient(dev_api.app)
resp = client.put("/api/settings/resume", json={
"name": "Alex", "career_summary": "Updated summary",
"experience": [], "education": [], "achievements": [], "skills": [],
})
assert resp.status_code == 200
data = resp.json()
assert data["synced_library_entry_id"] == entry["id"]
updated = get_resume(db_path=db, resume_id=entry["id"])
assert updated["synced_at"] is not None
struct = json.loads(updated["struct_json"])
assert struct["career_summary"] == "Updated summary"
def test_save_resume_no_default_no_crash(fresh_db, monkeypatch):
db, cfg = fresh_db
user_yaml = cfg / "user.yaml"
user_yaml.write_text(yaml.dump({"wizard_complete": True}))
import dev_api
client = TestClient(dev_api.app)
resp = client.put("/api/settings/resume", json={
"name": "Alex", "career_summary": "", "experience": [],
"education": [], "achievements": [], "skills": [],
})
assert resp.status_code == 200
assert resp.json()["synced_library_entry_id"] is None


@@ -11,6 +11,9 @@
html, body { margin: 0; background: #eaeff8; min-height: 100vh; }
@media (prefers-color-scheme: dark) { html, body { background: #16202e; } }
</style>
<!-- Plausible analytics: cookie-free, GDPR-compliant, self-hosted.
Skips localhost/127.0.0.1. Reports to hostname + circuitforge.tech rollup. -->
<script>(function(){if(/localhost|127\.0\.0\.1/.test(location.hostname))return;var s=document.createElement('script');s.defer=true;s.dataset.domain=location.hostname+',circuitforge.tech';s.dataset.api='https://analytics.circuitforge.tech/api/event';s.src='https://analytics.circuitforge.tech/js/script.js';document.head.appendChild(s);})();</script>
</head>
<body>
<!-- Mount target only — App.vue root must NOT use id="app". Gotcha #1. -->


@@ -157,7 +157,7 @@ const navLinks = computed(() => [
{ to: '/apply', icon: PencilSquareIcon, label: 'Apply' },
{ to: '/resumes', icon: DocumentTextIcon, label: 'Resumes' },
{ to: '/interviews', icon: CalendarDaysIcon, label: 'Interviews' },
{ to: '/contacts', icon: UsersIcon, label: 'Contacts' },
{ to: '/messages', icon: UsersIcon, label: 'Messages' },
{ to: '/references', icon: IdentificationIcon, label: 'References' },
{ to: '/digest', icon: NewspaperIcon, label: 'Digest',
badge: digestStore.entries.length || undefined },


@@ -0,0 +1,200 @@
<!-- web/src/components/MessageLogModal.vue -->
<template>
<Teleport to="body">
<div
v-if="show"
class="modal-backdrop"
@click.self="emit('close')"
>
<div
ref="dialogEl"
class="modal-dialog"
role="dialog"
aria-modal="true"
:aria-label="title"
tabindex="-1"
@keydown.esc="emit('close')"
>
<header class="modal-header">
<h2 class="modal-title">{{ title }}</h2>
<button class="modal-close" @click="emit('close')" aria-label="Close">×</button>
</header>
<form class="modal-body" @submit.prevent="handleSubmit">
<!-- Direction (not shown for pure notes) -->
<div v-if="type !== 'in_person'" class="field">
<label class="field-label" for="log-direction">Direction</label>
<select id="log-direction" v-model="form.direction" class="field-select">
<option value="">-- not specified --</option>
<option value="inbound">Inbound (they called me)</option>
<option value="outbound">Outbound (I called them)</option>
</select>
</div>
<div class="field">
<label class="field-label" for="log-subject">Subject (optional)</label>
<input id="log-subject" v-model="form.subject" type="text" class="field-input" />
</div>
<div class="field">
<label class="field-label" for="log-body">
Notes <span class="field-required" aria-hidden="true">*</span>
</label>
<textarea
id="log-body"
v-model="form.body"
class="field-textarea"
rows="5"
required
aria-required="true"
/>
</div>
<div class="field">
<label class="field-label" for="log-date">Date/time</label>
<input id="log-date" v-model="form.logged_at" type="datetime-local" class="field-input" />
</div>
<p v-if="error" class="modal-error" role="alert">{{ error }}</p>
<footer class="modal-footer">
<button type="button" class="btn btn--ghost" @click="emit('close')">Cancel</button>
<button type="submit" class="btn btn--primary" :disabled="saving">
{{ saving ? 'Saving…' : 'Save' }}
</button>
</footer>
</form>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
import { ref, computed, watch, nextTick } from 'vue'
import { useMessagingStore } from '../stores/messaging'
const props = defineProps<{
show: boolean
jobId: number
type: 'call_note' | 'in_person'
}>()
const emit = defineEmits<{
(e: 'close'): void
(e: 'saved'): void
}>()
const store = useMessagingStore()
const dialogEl = ref<HTMLElement | null>(null)
const saving = ref(false)
const error = ref<string | null>(null)
const title = computed(() =>
props.type === 'call_note' ? 'Log a call' : 'Log an in-person note'
)
const form = ref({
direction: '',
subject: '',
body: '',
logged_at: '',
})
// Focus the dialog when it opens; compute localNow fresh each time
watch(() => props.show, async (val) => {
if (val) {
const now = new Date()
const localNow = new Date(now.getTime() - now.getTimezoneOffset() * 60000)
.toISOString()
.slice(0, 16)
error.value = null
form.value = { direction: '', subject: '', body: '', logged_at: localNow }
await nextTick()
dialogEl.value?.focus()
}
})
async function handleSubmit() {
if (!form.value.body.trim()) { error.value = 'Notes are required.'; return }
saving.value = true
error.value = null
const result = await store.createMessage({
job_id: props.jobId,
job_contact_id: null,
type: props.type,
direction: form.value.direction || null,
subject: form.value.subject || null,
body: form.value.body,
from_addr: null,
to_addr: null,
template_id: null,
logged_at: form.value.logged_at || undefined,
})
saving.value = false
if (result) emit('saved')
else error.value = store.error ?? 'Save failed.'
}
</script>
<style scoped>
.modal-backdrop {
position: fixed;
inset: 0;
background: rgba(0,0,0,0.5);
display: flex;
align-items: center;
justify-content: center;
z-index: 200;
}
.modal-dialog {
background: var(--color-surface-raised);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
width: min(480px, 95vw);
max-height: 90vh;
overflow-y: auto;
outline: none;
}
.modal-header {
display: flex;
align-items: center;
justify-content: space-between;
padding: var(--space-4) var(--space-5);
border-bottom: 1px solid var(--color-border-light);
}
.modal-title { font-size: var(--text-lg); font-weight: 600; margin: 0; }
.modal-close {
background: none; border: none; cursor: pointer;
color: var(--color-text-muted); font-size: var(--text-lg);
padding: var(--space-1); border-radius: var(--radius-sm);
min-width: 32px; min-height: 32px;
}
.modal-close:hover { background: var(--color-surface-alt); }
.modal-body { padding: var(--space-4) var(--space-5); display: flex; flex-direction: column; gap: var(--space-4); }
.field { display: flex; flex-direction: column; gap: var(--space-1); }
.field-label { font-size: var(--text-sm); font-weight: 500; color: var(--color-text-muted); }
.field-required { color: var(--app-accent); }
.field-input, .field-select, .field-textarea {
padding: var(--space-2) var(--space-3);
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text);
font-size: var(--text-sm);
font-family: var(--font-body);
width: 100%;
}
.field-input:focus-visible, .field-select:focus-visible, .field-textarea:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.field-textarea { resize: vertical; }
.modal-error { color: var(--app-accent); font-size: var(--text-sm); margin: 0; }
.modal-footer { display: flex; justify-content: flex-end; gap: var(--space-3); padding-top: var(--space-2); }
.btn { padding: var(--space-2) var(--space-4); border-radius: var(--radius-md); font-size: var(--text-sm); font-weight: 500; cursor: pointer; min-height: 40px; }
.btn--primary { background: var(--app-primary); color: var(--color-surface); border: none; }
.btn--primary:hover:not(:disabled) { opacity: 0.9; }
.btn--primary:disabled { opacity: 0.5; cursor: not-allowed; }
.btn--ghost { background: none; border: 1px solid var(--color-border); color: var(--color-text); }
.btn--ghost:hover { background: var(--color-surface-alt); }
</style>


@@ -0,0 +1,289 @@
<!-- web/src/components/MessageTemplateModal.vue -->
<template>
<Teleport to="body">
<div
v-if="show"
class="modal-backdrop"
@click.self="emit('close')"
>
<div
ref="dialogEl"
class="modal-dialog modal-dialog--wide"
role="dialog"
aria-modal="true"
:aria-label="title"
tabindex="-1"
@keydown.esc="emit('close')"
>
<header class="modal-header">
<h2 class="modal-title">{{ title }}</h2>
<button class="modal-close" @click="emit('close')" aria-label="Close">×</button>
</header>
<!-- APPLY MODE -->
<div v-if="mode === 'apply'" class="modal-body">
<div class="tpl-list" role="list" aria-label="Available templates">
<button
v-for="tpl in store.templates"
:key="tpl.id"
class="tpl-item"
:class="{ 'tpl-item--selected': selectedId === tpl.id }"
role="listitem"
@click="selectTemplate(tpl)"
>
<span class="tpl-item__icon" aria-hidden="true">
{{ tpl.is_builtin ? '🔒' : '📝' }}
</span>
<span class="tpl-item__title">{{ tpl.title }}</span>
<span class="tpl-item__cat">{{ tpl.category }}</span>
</button>
</div>
<div v-if="preview" class="tpl-preview">
<p class="tpl-preview__subject" v-if="preview.subject">
<strong>Subject:</strong> <span v-html="highlightTokens(preview.subject)" />
</p>
<pre class="tpl-preview__body" v-html="highlightTokens(preview.body)" />
<div class="tpl-preview__actions">
<button class="btn btn--primary" @click="copyPreview">Copy body</button>
<button class="btn btn--ghost" @click="emit('close')">Cancel</button>
</div>
</div>
<p v-else class="tpl-hint">Select a template to preview it with your job details.</p>
</div>
<!-- CREATE / EDIT MODE -->
<form v-else class="modal-body" @submit.prevent="handleSubmit">
<div class="field">
<label class="field-label" for="tpl-title">Title *</label>
<input id="tpl-title" v-model="form.title" type="text" class="field-input" required aria-required="true" />
</div>
<div class="field">
<label class="field-label" for="tpl-category">Category</label>
<select id="tpl-category" v-model="form.category" class="field-select">
<option value="follow_up">Follow-up</option>
<option value="thank_you">Thank you</option>
<option value="accommodation">Accommodation request</option>
<option value="withdrawal">Withdrawal</option>
<option value="custom">Custom</option>
</select>
</div>
<div class="field">
<label class="field-label" for="tpl-subject">Subject template (optional)</label>
<input id="tpl-subject" v-model="form.subject_template" type="text" class="field-input"
placeholder="e.g. Following up — {{role}} application" />
</div>
<div class="field">
<label class="field-label" for="tpl-body">Body template *</label>
<p class="field-hint">Use <code v-pre>{{name}}</code>, <code v-pre>{{company}}</code>, <code v-pre>{{role}}</code>, <code v-pre>{{recruiter_name}}</code>, <code v-pre>{{date}}</code>, <code v-pre>{{accommodation_details}}</code></p>
<textarea id="tpl-body" v-model="form.body_template" class="field-textarea" rows="8"
required aria-required="true" />
</div>
<p v-if="error" class="modal-error" role="alert">{{ error }}</p>
<footer class="modal-footer">
<button type="button" class="btn btn--ghost" @click="emit('close')">Cancel</button>
<button type="submit" class="btn btn--primary" :disabled="store.saving">
{{ store.saving ? 'Saving…' : (mode === 'create' ? 'Create template' : 'Save changes') }}
</button>
</footer>
</form>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
import { ref, computed, watch, nextTick } from 'vue'
import { useMessagingStore, type MessageTemplate } from '../stores/messaging'
const props = defineProps<{
show: boolean
mode: 'apply' | 'create' | 'edit'
jobTokens?: Record<string, string> // { name, company, role, recruiter_name, date }
editTemplate?: MessageTemplate // required when mode='edit'
}>()
const emit = defineEmits<{
(e: 'close'): void
(e: 'saved'): void
(e: 'applied', body: string): void
}>()
const store = useMessagingStore()
const dialogEl = ref<HTMLElement | null>(null)
const selectedId = ref<number | null>(null)
const error = ref<string | null>(null)
const form = ref({
title: '',
category: 'custom',
subject_template: '',
body_template: '',
})
const title = computed(() => ({
apply: 'Use a template',
create: 'Create template',
edit: 'Edit template',
}[props.mode]))
watch(() => props.show, async (val) => {
if (!val) return
error.value = null
selectedId.value = null
if (props.mode === 'edit' && props.editTemplate) {
form.value = {
title: props.editTemplate.title,
category: props.editTemplate.category,
subject_template: props.editTemplate.subject_template ?? '',
body_template: props.editTemplate.body_template,
}
} else {
form.value = { title: '', category: 'custom', subject_template: '', body_template: '' }
}
await nextTick()
dialogEl.value?.focus()
})
function substituteTokens(text: string): string {
const tokens = props.jobTokens ?? {}
return text.replace(/\{\{(\w+)\}\}/g, (_, key) => tokens[key] ?? `{{${key}}}`)
}
function highlightTokens(text: string): string {
// Remaining unresolved tokens are highlighted
const escaped = text.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
return escaped.replace(
/\{\{(\w+)\}\}/g,
'<mark class="token-unresolved">{{$1}}</mark>'
)
}
interface PreviewData { subject: string; body: string }
const preview = computed<PreviewData | null>(() => {
if (props.mode !== 'apply' || selectedId.value === null) return null
const tpl = store.templates.find(t => t.id === selectedId.value)
if (!tpl) return null
return {
subject: substituteTokens(tpl.subject_template ?? ''),
body: substituteTokens(tpl.body_template),
}
})
function selectTemplate(tpl: MessageTemplate) {
selectedId.value = tpl.id
}
function copyPreview() {
if (!preview.value) return
navigator.clipboard.writeText(preview.value.body)
emit('applied', preview.value.body)
emit('close')
}
async function handleSubmit() {
error.value = null
if (props.mode === 'create') {
const result = await store.createTemplate({
title: form.value.title,
category: form.value.category,
subject_template: form.value.subject_template || undefined,
body_template: form.value.body_template,
})
if (result) emit('saved')
else error.value = store.error ?? 'Save failed.'
} else if (props.mode === 'edit' && props.editTemplate) {
const result = await store.updateTemplate(props.editTemplate.id, {
title: form.value.title,
category: form.value.category,
subject_template: form.value.subject_template || undefined,
body_template: form.value.body_template,
})
if (result) emit('saved')
else error.value = store.error ?? 'Save failed.'
}
}
</script>
<style scoped>
.modal-backdrop {
position: fixed; inset: 0;
background: rgba(0,0,0,0.5);
display: flex; align-items: center; justify-content: center;
z-index: 200;
}
.modal-dialog {
background: var(--color-surface-raised);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
width: min(560px, 95vw);
max-height: 90vh;
overflow-y: auto;
outline: none;
}
.modal-dialog--wide { width: min(700px, 95vw); }
.modal-header {
display: flex; align-items: center; justify-content: space-between;
padding: var(--space-4) var(--space-5);
border-bottom: 1px solid var(--color-border-light);
}
.modal-title { font-size: var(--text-lg); font-weight: 600; margin: 0; }
.modal-close {
background: none; border: none; cursor: pointer;
color: var(--color-text-muted); font-size: var(--text-lg);
padding: var(--space-1); border-radius: var(--radius-sm);
min-width: 32px; min-height: 32px;
}
.modal-close:hover { background: var(--color-surface-alt); }
.modal-body { padding: var(--space-4) var(--space-5); display: flex; flex-direction: column; gap: var(--space-4); }
.tpl-list { display: flex; flex-direction: column; gap: var(--space-1); max-height: 220px; overflow-y: auto; }
.tpl-item {
display: flex; align-items: center; gap: var(--space-2);
padding: var(--space-2) var(--space-3);
border: 1px solid var(--color-border); border-radius: var(--radius-md);
background: var(--color-surface-alt); cursor: pointer;
text-align: left; width: 100%;
transition: border-color 150ms, background 150ms;
}
.tpl-item:hover { border-color: var(--app-primary); background: var(--app-primary-light); }
.tpl-item--selected { border-color: var(--app-primary); background: var(--app-primary-light); font-weight: 600; }
.tpl-item__title { flex: 1; font-size: var(--text-sm); }
.tpl-item__cat { font-size: var(--text-xs); color: var(--color-text-muted); text-transform: capitalize; }
.tpl-preview { border: 1px solid var(--color-border); border-radius: var(--radius-md); padding: var(--space-4); background: var(--color-surface); }
.tpl-preview__subject { margin: 0 0 var(--space-2); font-size: var(--text-sm); }
.tpl-preview__body {
font-size: var(--text-sm); white-space: pre-wrap; font-family: var(--font-body);
margin: 0 0 var(--space-3); max-height: 200px; overflow-y: auto;
}
.tpl-preview__actions { display: flex; gap: var(--space-2); }
.tpl-hint { color: var(--color-text-muted); font-size: var(--text-sm); margin: 0; }
:global(.token-unresolved) {
background: var(--app-accent-light, #fef3c7);
color: var(--app-accent, #d97706);
border-radius: 2px;
padding: 0 2px;
}
.field { display: flex; flex-direction: column; gap: var(--space-1); }
.field-label { font-size: var(--text-sm); font-weight: 500; color: var(--color-text-muted); }
.field-hint { font-size: var(--text-xs); color: var(--color-text-muted); margin: 0; }
.field-input, .field-select, .field-textarea {
padding: var(--space-2) var(--space-3);
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text); font-size: var(--text-sm); font-family: var(--font-body); width: 100%;
}
.field-input:focus-visible, .field-select:focus-visible, .field-textarea:focus-visible {
outline: 2px solid var(--app-primary); outline-offset: 2px;
}
.field-textarea { resize: vertical; }
.modal-error { color: var(--app-accent); font-size: var(--text-sm); margin: 0; }
.modal-footer { display: flex; justify-content: flex-end; gap: var(--space-3); padding-top: var(--space-2); }
.btn { padding: var(--space-2) var(--space-4); border-radius: var(--radius-md); font-size: var(--text-sm); font-weight: 500; cursor: pointer; min-height: 40px; }
.btn--primary { background: var(--app-primary); color: var(--color-surface); border: none; }
.btn--primary:hover:not(:disabled) { opacity: 0.9; }
.btn--primary:disabled { opacity: 0.5; cursor: not-allowed; }
.btn--ghost { background: none; border: 1px solid var(--color-border); color: var(--color-text); }
.btn--ghost:hover { background: var(--color-surface-alt); }
</style>
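The substitute/highlight pair in MessageTemplateModal follows two rules: unknown tokens pass through substitution verbatim, and highlighting escapes HTML before wrapping leftovers in `<mark>`. A Python mirror of those rules, for illustration only (names hypothetical):

```python
import html
import re

TOKEN = re.compile(r"\{\{(\w+)\}\}")

def substitute_tokens(text: str, tokens: dict[str, str]) -> str:
    # Unknown tokens are left verbatim so they can be flagged afterwards.
    return TOKEN.sub(lambda m: tokens.get(m.group(1), m.group(0)), text)

def highlight_unresolved(text: str) -> str:
    # Escape & < > first, then wrap leftover tokens; the other order
    # would escape the <mark> tags themselves.
    escaped = html.escape(text, quote=False)
    return TOKEN.sub(r'<mark class="token-unresolved">{{\1}}</mark>', escaped)
```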


@@ -0,0 +1,146 @@
<template>
<Teleport to="body">
<div v-if="show" class="sync-modal__overlay" role="dialog" aria-modal="true"
aria-labelledby="sync-modal-title" @keydown.esc="$emit('cancel')">
<div class="sync-modal">
<h2 id="sync-modal-title" class="sync-modal__title">Replace profile content?</h2>
<div class="sync-modal__comparison">
<div class="sync-modal__col sync-modal__col--before">
<div class="sync-modal__col-label">Current profile</div>
<div class="sync-modal__col-name">{{ currentSummary.name || '(no name)' }}</div>
<div class="sync-modal__col-summary">{{ currentSummary.careerSummary || '(no summary)' }}</div>
<div class="sync-modal__col-role">{{ currentSummary.latestRole || '(no experience)' }}</div>
</div>
<div class="sync-modal__arrow" aria-hidden="true">→</div>
<div class="sync-modal__col sync-modal__col--after">
<div class="sync-modal__col-label">Replacing with</div>
<div class="sync-modal__col-name">{{ sourceSummary.name || '(no name)' }}</div>
<div class="sync-modal__col-summary">{{ sourceSummary.careerSummary || '(no summary)' }}</div>
<div class="sync-modal__col-role">{{ sourceSummary.latestRole || '(no experience)' }}</div>
</div>
</div>
<div v-if="blankFields.length" class="sync-modal__blank-warning">
<strong>Fields that will be blank after import:</strong>
<ul>
<li v-for="f in blankFields" :key="f">{{ f }}</li>
</ul>
<p class="sync-modal__blank-note">You can fill these in after importing.</p>
</div>
<p class="sync-modal__preserve-note">
Your salary, work preferences, and contact details are not affected.
</p>
<div class="sync-modal__actions">
<button class="btn-secondary" @click="$emit('cancel')">Keep current profile</button>
<button class="btn-danger" @click="$emit('confirm')">Replace profile content</button>
</div>
</div>
</div>
</Teleport>
</template>
<script setup lang="ts">
interface ContentSummary {
name: string
careerSummary: string
latestRole: string
}
defineProps<{
show: boolean
currentSummary: ContentSummary
sourceSummary: ContentSummary
blankFields: string[]
}>()
defineEmits<{
confirm: []
cancel: []
}>()
</script>
<style scoped>
.sync-modal__overlay {
position: fixed; inset: 0; z-index: 1000;
background: rgba(0,0,0,0.5);
display: flex; align-items: center; justify-content: center;
padding: var(--space-4);
}
.sync-modal {
background: var(--color-surface-raised);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg, 0.75rem);
padding: var(--space-6);
max-width: 600px; width: 100%;
max-height: 90vh; overflow-y: auto;
}
.sync-modal__title {
font-size: 1.15rem; font-weight: 700;
margin-bottom: var(--space-5);
color: var(--color-text);
}
.sync-modal__comparison {
display: grid; grid-template-columns: 1fr auto 1fr; gap: var(--space-3);
align-items: start; margin-bottom: var(--space-5);
}
.sync-modal__arrow {
font-size: 1.5rem; color: var(--color-text-muted);
padding-top: var(--space-5);
}
.sync-modal__col {
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
padding: var(--space-3);
}
.sync-modal__col--after { border-color: var(--color-primary); }
.sync-modal__col-label {
font-size: 0.75rem; font-weight: 600; color: var(--color-text-muted);
text-transform: uppercase; letter-spacing: 0.05em;
margin-bottom: var(--space-2);
}
.sync-modal__col-name { font-weight: 600; color: var(--color-text); margin-bottom: var(--space-1); }
.sync-modal__col-summary {
font-size: 0.82rem; color: var(--color-text-muted);
overflow: hidden; display: -webkit-box;
-webkit-line-clamp: 2; -webkit-box-orient: vertical;
margin-bottom: var(--space-1);
}
.sync-modal__col-role { font-size: 0.82rem; color: var(--color-text-muted); font-style: italic; }
.sync-modal__blank-warning {
background: color-mix(in srgb, var(--color-warning, #d97706) 10%, var(--color-surface-alt));
border: 1px solid color-mix(in srgb, var(--color-warning, #d97706) 30%, var(--color-border));
border-radius: var(--radius-md); padding: var(--space-3);
margin-bottom: var(--space-4);
font-size: 0.85rem;
}
.sync-modal__blank-warning ul { margin: var(--space-2) 0 0 var(--space-4); }
.sync-modal__blank-note { margin-top: var(--space-2); color: var(--color-text-muted); }
.sync-modal__preserve-note {
font-size: 0.82rem; color: var(--color-text-muted);
margin-bottom: var(--space-5);
}
.sync-modal__actions {
display: flex; gap: var(--space-3); justify-content: flex-end; flex-wrap: wrap;
}
.btn-danger {
padding: var(--space-2) var(--space-4);
background: var(--color-error, #dc2626);
color: #fff; border: none;
border-radius: var(--radius-md); cursor: pointer;
font-size: var(--font-sm); font-weight: 600;
}
.btn-danger:hover { filter: brightness(1.1); }
.btn-secondary {
padding: var(--space-2) var(--space-4);
background: transparent;
color: var(--color-text);
border: 1px solid var(--color-border);
border-radius: var(--radius-md); cursor: pointer;
font-size: var(--font-sm);
}
.btn-secondary:hover { background: var(--color-surface-alt); }
</style>


@@ -12,7 +12,8 @@ export const router = createRouter({
{ path: '/apply/:id', component: () => import('../views/ApplyWorkspaceView.vue') },
{ path: '/resumes', component: () => import('../views/ResumesView.vue') },
{ path: '/interviews', component: () => import('../views/InterviewsView.vue') },
{ path: '/contacts', component: () => import('../views/ContactsView.vue') },
{ path: '/messages', component: () => import('../views/MessagingView.vue') },
{ path: '/contacts', redirect: '/messages' },
{ path: '/references', component: () => import('../views/ReferencesView.vue') },
{ path: '/digest', component: () => import('../views/DigestView.vue') },
{ path: '/prep', component: () => import('../views/InterviewPrepView.vue') },

web/src/stores/messaging.ts (new file)

@@ -0,0 +1,174 @@
// web/src/stores/messaging.ts
import { ref } from 'vue'
import { defineStore } from 'pinia'
import { useApiFetch } from '../composables/useApi'
export interface Message {
id: number
job_id: number | null
job_contact_id: number | null
type: 'call_note' | 'in_person' | 'email' | 'draft'
direction: 'inbound' | 'outbound' | null
subject: string | null
body: string | null
from_addr: string | null
to_addr: string | null
logged_at: string
approved_at: string | null
template_id: number | null
osprey_call_id: string | null
}
export interface MessageTemplate {
id: number
key: string | null
title: string
category: string
subject_template: string | null
body_template: string
is_builtin: number
is_community: number
community_source: string | null
created_at: string
updated_at: string
}
export const useMessagingStore = defineStore('messaging', () => {
const messages = ref<Message[]>([])
const templates = ref<MessageTemplate[]>([])
const loading = ref(false)
const saving = ref(false)
const error = ref<string | null>(null)
const draftPending = ref<number | null>(null) // message_id of pending draft
async function fetchMessages(jobId: number) {
loading.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<Message[]>(
`/api/messages?job_id=${jobId}`
)
loading.value = false
if (fetchErr) { error.value = 'Could not load messages.'; return }
messages.value = data ?? []
}
async function fetchTemplates() {
const { data, error: fetchErr } = await useApiFetch<MessageTemplate[]>(
'/api/message-templates'
)
if (fetchErr) { error.value = 'Could not load templates.'; return }
templates.value = data ?? []
}
async function createMessage(payload: Omit<Message, 'id' | 'approved_at' | 'osprey_call_id' | 'logged_at'> & { logged_at?: string }) {
saving.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<Message>(
'/api/messages',
{ method: 'POST', body: JSON.stringify(payload), headers: { 'Content-Type': 'application/json' } }
)
saving.value = false
if (fetchErr || !data) { error.value = 'Failed to save message.'; return null }
messages.value = [data, ...messages.value]
return data
}
async function deleteMessage(id: number) {
const { error: fetchErr } = await useApiFetch(
`/api/messages/${id}`,
{ method: 'DELETE' }
)
if (fetchErr) { error.value = 'Failed to delete message.'; return }
messages.value = messages.value.filter(m => m.id !== id)
}
async function createTemplate(payload: Pick<MessageTemplate, 'title' | 'category' | 'body_template'> & { subject_template?: string }) {
saving.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<MessageTemplate>(
'/api/message-templates',
{ method: 'POST', body: JSON.stringify(payload), headers: { 'Content-Type': 'application/json' } }
)
saving.value = false
if (fetchErr || !data) { error.value = 'Failed to create template.'; return null }
templates.value = [...templates.value, data]
return data
}
async function updateTemplate(id: number, payload: Partial<Pick<MessageTemplate, 'title' | 'category' | 'subject_template' | 'body_template'>>) {
saving.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<MessageTemplate>(
`/api/message-templates/${id}`,
{ method: 'PUT', body: JSON.stringify(payload), headers: { 'Content-Type': 'application/json' } }
)
saving.value = false
if (fetchErr || !data) { error.value = 'Failed to update template.'; return null }
templates.value = templates.value.map(t => t.id === id ? data : t)
return data
}
async function deleteTemplate(id: number) {
const { error: fetchErr } = await useApiFetch(
`/api/message-templates/${id}`,
{ method: 'DELETE' }
)
if (fetchErr) { error.value = 'Failed to delete template.'; return }
templates.value = templates.value.filter(t => t.id !== id)
}
async function requestDraft(contactId: number) {
loading.value = true
error.value = null
const { data, error: fetchErr } = await useApiFetch<{ message_id: number }>(
`/api/contacts/${contactId}/draft-reply`,
{ method: 'POST', headers: { 'Content-Type': 'application/json' } }
)
loading.value = false
if (fetchErr || !data) {
error.value = 'Could not generate draft. Check LLM settings.'
return null
}
draftPending.value = data.message_id
return data.message_id
}
async function updateMessageBody(id: number, body: string) {
const { data, error: fetchErr } = await useApiFetch<Message>(
`/api/messages/${id}`,
{ method: 'PUT', body: JSON.stringify({ body }), headers: { 'Content-Type': 'application/json' } }
)
if (fetchErr || !data) { error.value = 'Failed to save edits.'; return null }
messages.value = messages.value.map(m => m.id === id ? { ...m, body: data.body } : m)
return data
}
async function approveDraft(messageId: number): Promise<string | null> {
const { data, error: fetchErr } = await useApiFetch<{ body: string; approved_at: string }>(
`/api/messages/${messageId}/approve`,
{ method: 'POST' }
)
if (fetchErr || !data) { error.value = 'Approve failed.'; return null }
messages.value = messages.value.map(m =>
m.id === messageId ? { ...m, approved_at: data.approved_at } : m
)
draftPending.value = null
return data.body
}
function clear() {
messages.value = []
templates.value = []
loading.value = false
saving.value = false
error.value = null
draftPending.value = null
}
return {
messages, templates, loading, saving, error, draftPending,
fetchMessages, fetchTemplates, createMessage, deleteMessage,
createTemplate, updateTemplate, deleteTemplate,
requestDraft, approveDraft, updateMessageBody, clear,
}
})
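Note that `approveDraft` itself persists nothing: callers are expected to flush any local edits through `updateMessageBody` (a PUT) before approving, so the history row always records the final approved text. A minimal sketch of that ordering, using a hypothetical `Api` shim in place of `useApiFetch`:

```typescript
// Hypothetical fetch shim standing in for useApiFetch.
type Api = (path: string, init?: { method?: string; body?: string }) => Promise<void>

// Persist the edited body (PUT) before approving (POST) so the
// server-side history always contains the text the user approved.
async function approveWithEdits(
  api: Api,
  messageId: number,
  editedBody: string | undefined,
): Promise<void> {
  if (editedBody !== undefined) {
    await api(`/api/messages/${messageId}`, {
      method: 'PUT',
      body: JSON.stringify({ body: editedBody }),
    })
  }
  await api(`/api/messages/${messageId}/approve`, { method: 'POST' })
}
```

If the PUT fails, the caller should bail out before the POST; approving an unsaved edit would archive stale text.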

View file

@ -8,6 +8,12 @@ export interface WorkEntry {
industry: string; responsibilities: string; skills: string[]
}
export interface EducationEntry {
id: string
institution: string; degree: string; field: string
start_date: string; end_date: string
}
export const useResumeStore = defineStore('settings/resume', () => {
const hasResume = ref(false)
const loading = ref(false)
@ -31,6 +37,11 @@ export const useResumeStore = defineStore('settings/resume', () => {
const veteran_status = ref(''); const disability = ref('')
// Keywords
const skills = ref<string[]>([]); const domains = ref<string[]>([]); const keywords = ref<string[]>([])
// Extended profile fields
const career_summary = ref('')
const education = ref<EducationEntry[]>([])
const achievements = ref<string[]>([])
const lastSynced = ref<string | null>(null)
// LLM suggestions (pending, not yet accepted)
const skillSuggestions = ref<string[]>([])
const domainSuggestions = ref<string[]>([])
@ -69,6 +80,9 @@ export const useResumeStore = defineStore('settings/resume', () => {
skills.value = (data.skills as string[]) ?? []
domains.value = (data.domains as string[]) ?? []
keywords.value = (data.keywords as string[]) ?? []
career_summary.value = String(data.career_summary ?? '')
education.value = ((data.education as Omit<EducationEntry, 'id'>[]) ?? []).map(e => ({ ...e, id: crypto.randomUUID() }))
achievements.value = (data.achievements as string[]) ?? []
}
async function save() {
@ -84,12 +98,19 @@ export const useResumeStore = defineStore('settings/resume', () => {
gender: gender.value, pronouns: pronouns.value, ethnicity: ethnicity.value,
veteran_status: veteran_status.value, disability: disability.value,
skills: skills.value, domains: domains.value, keywords: keywords.value,
career_summary: career_summary.value,
education: education.value.map(({ id: _id, ...e }) => e),
achievements: achievements.value,
}
const { error } = await useApiFetch('/api/settings/resume', {
method: 'PUT', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify(body),
})
saving.value = false
if (error) {
saveError.value = 'Save failed — please try again.'
} else {
lastSynced.value = new Date().toISOString()
}
}
async function createBlank() {
@ -105,6 +126,16 @@ export const useResumeStore = defineStore('settings/resume', () => {
experience.value.splice(idx, 1)
}
function addEducation() {
education.value.push({
id: crypto.randomUUID(), institution: '', degree: '', field: '', start_date: '', end_date: ''
})
}
function removeEducation(idx: number) {
education.value.splice(idx, 1)
}
async function suggestTags(field: 'skills' | 'domains' | 'keywords') {
suggestingField.value = field
const current = field === 'skills' ? skills.value : field === 'domains' ? domains.value : keywords.value
@ -149,7 +180,8 @@ export const useResumeStore = defineStore('settings/resume', () => {
gender, pronouns, ethnicity, veteran_status, disability,
skills, domains, keywords,
skillSuggestions, domainSuggestions, keywordSuggestions, suggestingField,
career_summary, education, achievements, lastSynced,
syncFromProfile, load, save, createBlank,
addExperience, removeExperience, addEducation, removeEducation, addTag, removeTag, suggestTags, acceptTagSuggestion,
}
})

View file

@ -28,14 +28,33 @@ export interface SurveyResponse {
created_at: string | null
}
interface TaskStatus {
status: 'queued' | 'running' | 'completed' | 'failed' | 'none' | null
stage: string | null
result: { output: string; source: string } | null
message: string | null
}
export const useSurveyStore = defineStore('survey', () => {
const analysis = ref<SurveyAnalysis | null>(null)
const history = ref<SurveyResponse[]>([])
const loading = ref(false)
const saving = ref(false)
const error = ref<string | null>(null)
const taskStatus = ref<TaskStatus>({ status: null, stage: null, result: null, message: null })
const visionAvailable = ref(false)
const currentJobId = ref<number | null>(null)
// Pending analyze payload held across the poll lifecycle so rawInput/mode survive
const _pendingPayload = ref<{ text?: string; image_b64?: string; mode: 'quick' | 'detailed' } | null>(null)
let pollInterval: ReturnType<typeof setInterval> | null = null
function _clearInterval() {
if (pollInterval !== null) {
clearInterval(pollInterval)
pollInterval = null
}
}
async function fetchFor(jobId: number) {
if (jobId !== currentJobId.value) {
@ -43,6 +62,7 @@ export const useSurveyStore = defineStore('survey', () => {
history.value = []
error.value = null
visionAvailable.value = false
taskStatus.value = { status: null, stage: null, result: null, message: null }
currentJobId.value = jobId
}
@ -69,23 +89,55 @@ export const useSurveyStore = defineStore('survey', () => {
jobId: number,
payload: { text?: string; image_b64?: string; mode: 'quick' | 'detailed' }
) {
_clearInterval()
loading.value = true
error.value = null
_pendingPayload.value = payload
const { data, error: fetchError } = await useApiFetch<{ task_id: number; is_new: boolean }>(
`/api/jobs/${jobId}/survey/analyze`,
{ method: 'POST', body: JSON.stringify(payload) }
)
if (fetchError || !data) {
loading.value = false
error.value = 'Failed to start analysis. Please try again.'
return
}
// Silently attach to the existing task if is_new=false — same task_id, same poll
taskStatus.value = { status: 'queued', stage: null, result: null, message: null }
pollTask(jobId, data.task_id)
}
function pollTask(jobId: number, taskId: number) {
_clearInterval()
pollInterval = setInterval(async () => {
const { data } = await useApiFetch<TaskStatus>(
`/api/jobs/${jobId}/survey/analyze/task?task_id=${taskId}`
)
if (!data) return
taskStatus.value = data
if (data.status === 'completed' || data.status === 'failed') {
_clearInterval()
loading.value = false
if (data.status === 'completed' && data.result) {
const payload = _pendingPayload.value
analysis.value = {
output: data.result.output,
source: isValidSource(data.result.source) ? data.result.source : 'text_paste',
mode: payload?.mode ?? 'quick',
rawInput: payload?.text ?? null,
}
} else if (data.status === 'failed') {
error.value = data.message ?? 'Analysis failed. Please try again.'
}
_pendingPayload.value = null
}
}, 3000)
}
async function saveResponse(
@ -96,12 +148,12 @@ export const useSurveyStore = defineStore('survey', () => {
saving.value = true
error.value = null
const body = {
survey_name: args.surveyName || undefined,
mode: analysis.value.mode,
source: analysis.value.source,
raw_input: analysis.value.rawInput,
image_b64: args.image_b64,
llm_output: analysis.value.output,
reported_score: args.reportedScore || undefined,
}
const { data, error: fetchError } = await useApiFetch<{ id: number }>(
@ -113,32 +165,34 @@ export const useSurveyStore = defineStore('survey', () => {
error.value = 'Save failed. Your analysis is preserved — try again.'
return
}
// Prepend the saved response to history
const now = new Date().toISOString()
const saved: SurveyResponse = {
id: data.id,
survey_name: args.surveyName || null,
mode: analysis.value.mode,
source: analysis.value.source,
raw_input: analysis.value.rawInput,
image_path: null,
llm_output: analysis.value.output,
reported_score: args.reportedScore || null,
received_at: now,
created_at: now,
}
history.value = [saved, ...history.value]
analysis.value = null
}
function clear() {
analysis.value = null
history.value = []
loading.value = false
saving.value = false
error.value = null
_clearInterval()
taskStatus.value = { status: null, stage: null, result: null, message: null }
visionAvailable.value = false
currentJobId.value = null
_pendingPayload.value = null
}
return {
@ -147,6 +201,7 @@ export const useSurveyStore = defineStore('survey', () => {
loading,
saving,
error,
taskStatus,
visionAvailable,
currentJobId,
fetchFor,

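The `pollTask` lifecycle above (start an interval, poll every 3s, clear it on `completed`/`failed` or on `clear()`) can be reduced to a standalone helper. This is an illustration of the pattern, not the store's code:

```typescript
// Same terminal-state strings as the store's TaskStatus.
type TaskState = 'queued' | 'running' | 'completed' | 'failed'

// Calls `check` every `ms` milliseconds and resolves on a terminal state,
// always clearing the timer so no orphaned interval keeps firing.
function pollUntilDone(check: () => Promise<TaskState>, ms: number): Promise<TaskState> {
  return new Promise(resolve => {
    const timer = setInterval(async () => {
      const state = await check()
      if (state === 'completed' || state === 'failed') {
        clearInterval(timer)
        resolve(state)
      }
    }, ms)
  })
}
```

The store additionally clears the timer when a new analyze run starts or the view unmounts, which is why `_clearInterval` is factored out rather than inlined.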
View file

@ -0,0 +1,562 @@
<!-- web/src/views/MessagingView.vue -->
<template>
<div class="messaging-layout">
<!-- Left panel: job list -->
<aside class="job-panel" role="complementary" aria-label="Jobs with messages">
<div class="job-panel__header">
<h1 class="job-panel__title">Messages</h1>
</div>
<ul class="job-list" role="list" aria-label="Jobs">
<li
v-for="job in jobsWithMessages"
:key="job.id"
class="job-list__item"
:class="{ 'job-list__item--active': selectedJobId === job.id }"
role="listitem"
:aria-label="`${job.company}, ${job.title}`"
>
<button class="job-list__btn" @click="selectJob(job.id)">
<span class="job-list__company">{{ job.company }}</span>
<span class="job-list__role">{{ job.title }}</span>
<span v-if="job.preview" class="job-list__preview">{{ job.preview }}</span>
</button>
</li>
<li v-if="jobsWithMessages.length === 0" class="job-list__empty">
No messages yet. Select a job to log a call or use a template.
</li>
</ul>
</aside>
<!-- Right panel: thread view -->
<main class="thread-panel" aria-label="Message thread">
<div v-if="!selectedJobId" class="thread-panel__empty">
<p>Select a job to view its communication timeline.</p>
</div>
<template v-else>
<!-- Action bar -->
<div class="action-bar" role="toolbar" aria-label="Message actions">
<button class="btn btn--ghost" @click="openLogModal('call_note')">Log call</button>
<button class="btn btn--ghost" @click="openLogModal('in_person')">Log note</button>
<button class="btn btn--ghost" @click="openTemplateModal('apply')">Use template</button>
<button
class="btn btn--primary"
:disabled="store.loading"
@click="requestDraft"
>
{{ store.loading ? 'Drafting…' : 'Draft reply with LLM' }}
</button>
<!-- Osprey (Phase 2 stub) aria-disabled, never hidden -->
<button
class="btn btn--osprey"
aria-disabled="true"
:title="ospreyTitle"
@mouseenter="handleOspreyHover"
@focus="handleOspreyHover"
>
📞 Call via Osprey
</button>
</div>
<!-- Draft pending announcement (screen reader) -->
<div aria-live="polite" aria-atomic="true" class="sr-only">
{{ draftAnnouncement }}
</div>
<!-- Error banner -->
<p v-if="store.error" class="thread-error" role="alert">{{ store.error }}</p>
<!-- Timeline -->
<div v-if="store.loading && timeline.length === 0" class="thread-loading">
Loading messages
</div>
<ul v-else class="timeline" role="list" aria-label="Message timeline">
<li
v-for="item in timeline"
:key="item._key"
class="timeline__item"
:class="[`timeline__item--${item.type}`, item.approved_at === null && item.type === 'draft' ? 'timeline__item--draft-pending' : '']"
role="listitem"
:aria-label="`${typeLabel(item.type)}, ${item.direction || ''}, ${item.logged_at}`"
>
<span class="timeline__icon" aria-hidden="true">{{ typeIcon(item.type) }}</span>
<div class="timeline__content">
<div class="timeline__meta">
<span class="timeline__type-label">{{ typeLabel(item.type) }}</span>
<span v-if="item.direction" class="timeline__direction">{{ item.direction }}</span>
<time class="timeline__time">{{ formatTime(item.logged_at) }}</time>
<span
v-if="item.type === 'draft' && item.approved_at === null"
class="timeline__badge timeline__badge--pending"
>
Pending approval
</span>
<span
v-if="item.type === 'draft' && item.approved_at !== null"
class="timeline__badge timeline__badge--approved"
>
Approved
</span>
</div>
<p v-if="item.subject" class="timeline__subject">{{ item.subject }}</p>
<!-- Draft body is editable before approval -->
<template v-if="item.type === 'draft' && item.approved_at === null">
<textarea
:ref="el => setDraftRef(item.id, el)"
class="timeline__draft-body"
:value="item.body ?? ''"
@input="updateDraftBody(item.id, ($event.target as HTMLTextAreaElement).value)"
rows="6"
aria-label="Edit draft reply before approving"
/>
<div class="timeline__draft-actions">
<button class="btn btn--primary btn--sm" @click="approveDraft(item.id)">
Approve + copy
</button>
<a
v-if="item.to_addr"
:href="`mailto:${item.to_addr}?subject=${encodeURIComponent(item.subject ?? '')}&body=${encodeURIComponent(item.body ?? '')}`"
class="btn btn--ghost btn--sm"
target="_blank"
rel="noopener"
>
Open in email client
</a>
<button class="btn btn--ghost btn--sm btn--danger" @click="confirmDelete(item.id)">
Discard
</button>
</div>
</template>
<template v-else>
<p class="timeline__body">{{ item.body }}</p>
</template>
</div>
</li>
<li v-if="timeline.length === 0" class="timeline__empty">
No messages logged yet for this job.
</li>
</ul>
</template>
</main>
<!-- Modals -->
<MessageLogModal
:show="logModal.show"
:job-id="selectedJobId ?? 0"
:type="logModal.type"
@close="logModal.show = false"
@saved="onLogSaved"
/>
<MessageTemplateModal
:show="tplModal.show"
:mode="tplModal.mode"
:job-tokens="jobTokens"
:edit-template="tplModal.editTemplate"
@close="tplModal.show = false"
@saved="onTemplateSaved"
/>
<!-- Delete confirmation -->
<div v-if="deleteConfirm !== null" class="modal-backdrop" @click.self="deleteConfirm = null">
<div class="modal-dialog modal-dialog--sm" role="dialog" aria-modal="true" aria-label="Confirm delete">
<div class="modal-body">
<p>Are you sure you want to delete this message? This cannot be undone.</p>
<div class="modal-footer">
<button class="btn btn--ghost" @click="deleteConfirm = null">Cancel</button>
<button class="btn btn--danger" @click="doDelete">Delete</button>
</div>
</div>
</div>
</div>
</div>
</template>
<script setup lang="ts">
import { ref, computed, watch, onMounted, onUnmounted } from 'vue'
import { useMessagingStore, type MessageTemplate } from '../stores/messaging'
import { useApiFetch } from '../composables/useApi'
import MessageLogModal from '../components/MessageLogModal.vue'
import MessageTemplateModal from '../components/MessageTemplateModal.vue'
const store = useMessagingStore()
// Jobs list
interface JobSummary { id: number; company: string; title: string; preview?: string }
const allJobs = ref<JobSummary[]>([])
const selectedJobId = ref<number | null>(null)
async function loadJobs() {
const { data } = await useApiFetch<Array<{ id: number; company: string; title: string }>>('/api/jobs?status=applied&limit=200')
allJobs.value = data ?? []
}
const jobsWithMessages = computed(() => allJobs.value)
async function selectJob(id: number) {
selectedJobId.value = id
draftBodyEdits.value = {}
await store.fetchMessages(id)
}
// Timeline: UNION of job_contacts + messages
interface TimelineItem {
_key: string
id: number
type: 'call_note' | 'in_person' | 'email' | 'draft'
direction: string | null
subject: string | null
body: string | null
to_addr: string | null
logged_at: string
approved_at: string | null
}
interface JobContact {
id: number
direction: string | null
subject: string | null
from_addr: string | null
to_addr: string | null
body: string | null
received_at: string | null
}
const jobContacts = ref<JobContact[]>([])
watch(selectedJobId, async (id) => {
if (id === null) { jobContacts.value = []; return }
const { data } = await useApiFetch<JobContact[]>(`/api/contacts?job_id=${id}`)
jobContacts.value = data ?? []
})
const timeline = computed<TimelineItem[]>(() => {
const contactItems: TimelineItem[] = jobContacts.value.map(c => ({
_key: `jc-${c.id}`,
id: c.id,
type: 'email',
direction: c.direction,
subject: c.subject,
body: c.body,
to_addr: c.to_addr,
logged_at: c.received_at ?? '',
approved_at: 'n/a', // contacts are always "approved"
}))
const messageItems: TimelineItem[] = store.messages.map(m => ({
_key: `msg-${m.id}`,
id: m.id,
type: m.type,
direction: m.direction,
subject: m.subject,
body: draftBodyEdits.value[m.id] ?? m.body,
to_addr: m.to_addr,
logged_at: m.logged_at,
approved_at: m.approved_at,
}))
return [...contactItems, ...messageItems].sort(
(a, b) => new Date(b.logged_at).getTime() - new Date(a.logged_at).getTime()
)
})
// Draft body edits (local, before approve)
const draftBodyEdits = ref<Record<number, string>>({})
const draftRefs = ref<Record<number, HTMLTextAreaElement | null>>({})
function setDraftRef(id: number, el: unknown) {
draftRefs.value[id] = el as HTMLTextAreaElement | null
}
function updateDraftBody(id: number, value: string) {
draftBodyEdits.value = { ...draftBodyEdits.value, [id]: value }
}
// LLM draft + approval
const draftAnnouncement = ref('')
async function requestDraft() {
// Find the most recent inbound job_contact for this job
const inbound = jobContacts.value.find(c => c.direction === 'inbound')
if (!inbound) {
store.error = 'No inbound emails found for this job to draft a reply to.'
return
}
const msgId = await store.requestDraft(inbound.id)
if (msgId) {
draftAnnouncement.value = 'Draft reply generated and ready for review.'
await store.fetchMessages(selectedJobId.value!)
setTimeout(() => { draftAnnouncement.value = '' }, 3000)
}
}
async function approveDraft(messageId: number) {
const editedBody = draftBodyEdits.value[messageId]
// Persist edits to DB before approving so history shows final version
if (editedBody !== undefined) {
const updated = await store.updateMessageBody(messageId, editedBody)
if (!updated) return // error already set in store
}
const body = await store.approveDraft(messageId)
if (body) {
const finalBody = editedBody ?? body
await navigator.clipboard.writeText(finalBody)
draftAnnouncement.value = 'Approved and copied to clipboard.'
setTimeout(() => { draftAnnouncement.value = '' }, 3000)
}
}
// Delete confirmation
const deleteConfirm = ref<number | null>(null)
function confirmDelete(id: number) {
deleteConfirm.value = id
}
async function doDelete() {
if (deleteConfirm.value === null) return
await store.deleteMessage(deleteConfirm.value)
deleteConfirm.value = null
}
// Osprey easter egg
const OSPREY_HOVER_KEY = 'peregrine-osprey-hover-count'
const ospreyTitle = ref('Osprey IVR calling — coming in Phase 2')
function handleOspreyHover() {
const count = parseInt(localStorage.getItem(OSPREY_HOVER_KEY) ?? '0', 10) + 1
localStorage.setItem(OSPREY_HOVER_KEY, String(count))
if (count >= 10) {
ospreyTitle.value = "Osprey is still learning to fly... 🦅"
}
}
// Modals
const logModal = ref<{ show: boolean; type: 'call_note' | 'in_person' }>({
show: false, type: 'call_note',
})
function openLogModal(type: 'call_note' | 'in_person') {
logModal.value = { show: true, type }
}
function onLogSaved() {
logModal.value.show = false
if (selectedJobId.value) store.fetchMessages(selectedJobId.value)
}
const tplModal = ref<{
show: boolean
mode: 'apply' | 'create' | 'edit'
editTemplate?: MessageTemplate
}>({ show: false, mode: 'apply' })
function openTemplateModal(mode: 'apply' | 'create' | 'edit', tpl?: MessageTemplate) {
tplModal.value = { show: true, mode, editTemplate: tpl }
}
function onTemplateSaved() {
tplModal.value.show = false
store.fetchTemplates()
}
// Job tokens for template substitution
const jobTokens = computed<Record<string, string>>(() => {
const job = allJobs.value.find(j => j.id === selectedJobId.value)
return {
company: job?.company ?? '',
role: job?.title ?? '',
name: '', // not auto-filled from the profile yet; left empty for the user to fill in
recruiter_name: '',
date: new Date().toLocaleDateString(),
accommodation_details: '',
}
})
// Helpers
function typeIcon(type: string): string {
return { call_note: '📞', in_person: '🤝', email: '✉️', draft: '📝' }[type] ?? '💬'
}
function typeLabel(type: string): string {
return {
call_note: 'Call note', in_person: 'In-person note',
email: 'Email', draft: 'Draft reply',
}[type] ?? type
}
function formatTime(iso: string): string {
if (!iso) return ''
return new Date(iso).toLocaleString(undefined, {
month: 'short', day: 'numeric', hour: '2-digit', minute: '2-digit'
})
}
// Lifecycle
onMounted(async () => {
await Promise.all([loadJobs(), store.fetchTemplates()])
})
onUnmounted(() => {
store.clear()
})
</script>
<style scoped>
.messaging-layout {
display: flex;
height: 100%;
min-height: 0;
}
/* ── Left panel ─────────────────────── */
.job-panel {
width: 260px;
min-width: 200px;
flex-shrink: 0;
border-right: 1px solid var(--color-border);
display: flex;
flex-direction: column;
overflow: hidden;
}
.job-panel__header {
padding: var(--space-4);
border-bottom: 1px solid var(--color-border-light);
}
.job-panel__title { font-size: var(--text-lg); font-weight: 600; margin: 0; }
.job-list {
flex: 1; overflow-y: auto;
list-style: none; margin: 0; padding: var(--space-2) 0;
}
.job-list__item { margin: 0; }
.job-list__item--active .job-list__btn {
background: var(--app-primary-light);
color: var(--app-primary);
}
.job-list__btn {
width: 100%; padding: var(--space-3) var(--space-4);
text-align: left; background: none; border: none; cursor: pointer;
display: flex; flex-direction: column; gap: 2px;
transition: background 150ms;
}
.job-list__btn:hover { background: var(--color-surface-alt); }
.job-list__company { font-size: var(--text-sm); font-weight: 600; }
.job-list__role { font-size: var(--text-xs); color: var(--color-text-muted); }
.job-list__preview { font-size: var(--text-xs); color: var(--color-text-muted); white-space: nowrap; overflow: hidden; text-overflow: ellipsis; max-width: 220px; }
.job-list__empty { padding: var(--space-4); font-size: var(--text-sm); color: var(--color-text-muted); }
/* ── Right panel ────────────────────── */
.thread-panel {
flex: 1; min-width: 0;
display: flex; flex-direction: column;
overflow: hidden;
}
.thread-panel__empty {
flex: 1; display: flex; align-items: center; justify-content: center;
color: var(--color-text-muted);
}
.action-bar {
display: flex; flex-wrap: wrap; gap: var(--space-2); align-items: center;
padding: var(--space-3) var(--space-4);
border-bottom: 1px solid var(--color-border-light);
}
.btn--osprey {
opacity: 0.5; cursor: not-allowed;
background: none; border: 1px dashed var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text-muted); font-size: var(--text-sm);
padding: var(--space-2) var(--space-3); min-height: 36px;
}
.thread-error {
margin: var(--space-2) var(--space-4);
color: var(--app-accent); font-size: var(--text-sm);
}
.thread-loading { padding: var(--space-4); color: var(--color-text-muted); font-size: var(--text-sm); }
.timeline {
flex: 1; overflow-y: auto;
list-style: none; margin: 0; padding: var(--space-4);
display: flex; flex-direction: column; gap: var(--space-3);
}
.timeline__item {
display: flex; gap: var(--space-3);
padding: var(--space-3); border-radius: var(--radius-md);
background: var(--color-surface-alt);
border: 1px solid var(--color-border);
}
.timeline__item--draft-pending {
border-color: var(--app-accent);
background: color-mix(in srgb, var(--app-accent) 8%, var(--color-surface));
}
.timeline__icon { font-size: 1.2rem; flex-shrink: 0; }
.timeline__content { flex: 1; min-width: 0; display: flex; flex-direction: column; gap: var(--space-1); }
.timeline__meta { display: flex; align-items: center; gap: var(--space-2); flex-wrap: wrap; }
.timeline__type-label { font-size: var(--text-sm); font-weight: 600; }
.timeline__direction { font-size: var(--text-xs); color: var(--color-text-muted); text-transform: capitalize; }
.timeline__time { font-size: var(--text-xs); color: var(--color-text-muted); margin-left: auto; }
.timeline__badge {
font-size: var(--text-xs); font-weight: 700;
padding: 1px 6px; border-radius: var(--radius-full);
}
.timeline__badge--pending { background: #fef3c7; color: #d97706; }
.timeline__badge--approved { background: #d1fae5; color: #065f46; }
.timeline__subject { font-size: var(--text-sm); font-weight: 500; margin: 0; }
.timeline__body { font-size: var(--text-sm); white-space: pre-wrap; margin: 0; color: var(--color-text); }
.timeline__draft-body {
width: 100%; font-size: var(--text-sm); font-family: var(--font-body);
padding: var(--space-2); border: 1px solid var(--color-border);
border-radius: var(--radius-md); background: var(--color-surface);
color: var(--color-text); resize: vertical;
}
.timeline__draft-body:focus-visible { outline: 2px solid var(--app-primary); outline-offset: 2px; }
.timeline__draft-actions { display: flex; gap: var(--space-2); flex-wrap: wrap; }
.timeline__empty { color: var(--color-text-muted); font-size: var(--text-sm); padding: var(--space-2); }
/* Buttons */
.btn { padding: var(--space-2) var(--space-3); border-radius: var(--radius-md); font-size: var(--text-sm); font-weight: 500; cursor: pointer; min-height: 36px; }
.btn--sm { padding: var(--space-1) var(--space-3); min-height: 30px; font-size: var(--text-xs); }
.btn--primary { background: var(--app-primary); color: var(--color-surface); border: none; }
.btn--primary:hover:not(:disabled) { opacity: 0.9; }
.btn--primary:disabled { opacity: 0.5; cursor: not-allowed; }
.btn--ghost { background: none; border: 1px solid var(--color-border); color: var(--color-text); }
.btn--ghost:hover { background: var(--color-surface-alt); }
.btn--danger { background: var(--app-accent); color: white; border: none; }
.btn--danger:hover { opacity: 0.9; }
/* Modals (delete confirm) */
.modal-backdrop {
position: fixed; inset: 0;
background: rgba(0,0,0,0.5);
display: flex; align-items: center; justify-content: center;
z-index: 200;
}
.modal-dialog {
background: var(--color-surface-raised); border: 1px solid var(--color-border);
border-radius: var(--radius-lg); width: min(400px, 95vw); outline: none;
}
.modal-dialog--sm { width: min(360px, 95vw); }
.modal-body { padding: var(--space-5); display: flex; flex-direction: column; gap: var(--space-4); }
.modal-footer { display: flex; justify-content: flex-end; gap: var(--space-3); }
/* Screen-reader only utility */
.sr-only {
position: absolute; width: 1px; height: 1px;
padding: 0; margin: -1px; overflow: hidden;
clip: rect(0,0,0,0); white-space: nowrap; border: 0;
}
/* Responsive: stack panels on narrow screens */
@media (max-width: 700px) {
.messaging-layout { flex-direction: column; }
.job-panel { width: 100%; border-right: none; border-bottom: 1px solid var(--color-border); max-height: 180px; }
}
</style>
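The `jobTokens` map above feeds MessageTemplateModal's token substitution. The modal's actual placeholder syntax is not shown in this diff; assuming a `{{token}}` convention, the substitution could look like this sketch, where unknown or still-empty tokens are left in place so the modal can highlight them for the user:

```typescript
// Hypothetical substitution: replaces {{token}} placeholders with values
// from the job-token map; unknown or empty tokens stay as-is for highlighting.
function substituteTokens(template: string, tokens: Record<string, string>): string {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key: string) => {
    const value = tokens[key]
    return value ? value : match
  })
}
```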

View file

@ -33,6 +33,7 @@
</span>
<div class="rv__item-info">
<span class="rv__item-name">{{ r.name }}</span>
<span v-if="r.is_default" class="rv__active-badge">Active profile</span>
<span class="rv__item-meta">{{ r.word_count }} words · {{ fmtDate(r.created_at) }}</span>
<span v-if="r.job_id" class="rv__item-source">Built for job #{{ r.job_id }}</span>
</div>
@ -51,6 +52,11 @@
<button v-if="!selected.is_default" class="btn-secondary" @click="setDefault">
Set as Default
</button>
<button class="btn-generate" @click="applyToProfile"
:disabled="syncApplying"
aria-describedby="apply-to-profile-desc">
{{ syncApplying ? 'Applying…' : '⇩ Apply to profile' }}
</button>
<button class="btn-secondary" @click="toggleEdit">
{{ editing ? 'Cancel' : 'Edit' }}
</button>
@ -90,20 +96,50 @@
<button class="btn-secondary" @click="toggleEdit">Discard</button>
</div>
<p id="apply-to-profile-desc" class="rv__sync-desc">
Replaces your resume profile content with this version. Your current profile is backed up first.
</p>
<p v-if="selected.synced_at" class="rv__synced-at">
Last synced to profile: {{ fmtDate(selected.synced_at) }}
</p>
<p v-if="actionError" class="rv__error" role="alert">{{ actionError }}</p>
</div>
</div>
</div>
<!-- Persistent sync notice (dismissible) -->
<div v-if="syncNotice" class="rv__sync-notice" role="status" aria-live="polite">
Profile updated. Previous content backed up as
<strong>{{ syncNotice.backupName }}</strong>.
<button class="rv__sync-notice-dismiss" @click="dismissSyncNotice" aria-label="Dismiss">✕</button>
</div>
<ResumeSyncConfirmModal
:show="showSyncModal"
:current-summary="buildSummary(resumes.find(r => r.is_default === 1) ?? null)"
:source-summary="buildSummary(selected)"
:blank-fields="selected?.struct_json
? (JSON.parse(selected.struct_json).experience?.length
? ['experience[].industry']
: [])
: []"
@confirm="confirmApplyToProfile"
@cancel="showSyncModal = false"
/>
</template>
<script setup lang="ts">
import { ref, onMounted } from 'vue'
import { onBeforeRouteLeave } from 'vue-router'
import { useApiFetch } from '../composables/useApi'
import ResumeSyncConfirmModal from '../components/ResumeSyncConfirmModal.vue'
interface Resume {
id: number; name: string; source: string; job_id: number | null
text: string; struct_json: string | null; word_count: number
is_default: number; created_at: string; updated_at: string
synced_at: string | null
}
const resumes = ref<Resume[]>([])
@@ -116,6 +152,25 @@ const saving = ref(false)
const actionError = ref('')
const showDownloadMenu = ref(false)
const showSyncModal = ref(false)
const syncApplying = ref(false)
const syncNotice = ref<{ backupName: string; backupId: number } | null>(null)
interface ContentSummary { name: string; careerSummary: string; latestRole: string }
function buildSummary(r: Resume | null): ContentSummary {
if (!r) return { name: '', careerSummary: '', latestRole: '' }
try {
const s = r.struct_json ? JSON.parse(r.struct_json) : {}
const exp = Array.isArray(s.experience) ? s.experience[0] : null
return {
name: s.name || r.name,
careerSummary: (s.career_summary || '').slice(0, 120),
latestRole: exp ? `${exp.title || ''} at ${exp.company || ''}`.replace(/^ at | at $/, '') : '',
}
} catch { return { name: r.name, careerSummary: '', latestRole: '' } }
}
function fmtDate(iso: string) {
return new Date(iso).toLocaleDateString('en-US', { month: 'short', day: 'numeric', year: 'numeric' })
}
@@ -185,6 +240,30 @@ async function confirmDelete() {
await loadList()
}
async function applyToProfile() {
if (!selected.value) return
showSyncModal.value = true
}
async function confirmApplyToProfile() {
if (!selected.value) return
showSyncModal.value = false
syncApplying.value = true
actionError.value = ''
const { data, error } = await useApiFetch<{
ok: boolean; backup_id: number; backup_name: string
}>(`/api/resumes/${selected.value.id}/apply-to-profile`, { method: 'POST' })
syncApplying.value = false
if (error || !data?.ok) {
actionError.value = 'Profile sync failed — please try again.'
return
}
syncNotice.value = { backupName: data.backup_name, backupId: data.backup_id }
await loadList()
}
function dismissSyncNotice() { syncNotice.value = null }
async function handleImport(e: Event) {
const file = (e.target as HTMLInputElement).files?.[0]
if (!file) return
@@ -221,6 +300,15 @@ function downloadYaml() {
}
onMounted(loadList)
onBeforeRouteLeave(() => {
if (editing.value && (editName.value !== selected.value?.name || editText.value !== selected.value?.text)) {
const confirmed = window.confirm(
`You have unsaved edits to "${selected.value?.name}". Leave without saving?`
)
if (!confirmed) return false
}
})
</script>
<style scoped>
@@ -337,4 +425,32 @@ onMounted(loadList)
.rv__layout { grid-template-columns: 1fr; }
.rv__list { max-height: 200px; }
}
.rv__active-badge {
font-size: 0.7rem; font-weight: 700; text-transform: uppercase; letter-spacing: 0.04em;
background: color-mix(in srgb, var(--color-primary) 15%, var(--color-surface-alt));
color: var(--color-primary);
border: 1px solid color-mix(in srgb, var(--color-primary) 30%, var(--color-border));
border-radius: var(--radius-sm, 0.25rem);
padding: 1px 6px; margin-left: var(--space-1);
}
.rv__sync-desc {
font-size: 0.78rem; color: var(--color-text-muted); margin-top: var(--space-1);
}
.rv__synced-at {
font-size: 0.78rem; color: var(--color-text-muted); margin-top: var(--space-1);
}
.rv__sync-notice {
position: fixed; bottom: var(--space-6); left: 50%; transform: translateX(-50%);
background: var(--color-surface-raised);
border: 1px solid var(--color-primary);
border-radius: var(--radius-md); padding: var(--space-3) var(--space-5);
font-size: 0.9rem; z-index: 500; max-width: 480px;
display: flex; gap: var(--space-3); align-items: center;
box-shadow: 0 4px 24px rgba(0,0,0,0.15);
}
.rv__sync-notice-dismiss {
background: none; border: none; cursor: pointer;
color: var(--color-text-muted); font-size: 1rem; flex-shrink: 0;
}
</style>
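The `buildSummary` helper in the diff above derives a compact three-field comparison for the sync-confirm modal. Pulled out of the component, the same logic can be exercised directly; a minimal standalone sketch (`ResumeLike` is a trimmed stand-in for the component's `Resume` interface, not part of the codebase):

```typescript
interface ContentSummary { name: string; careerSummary: string; latestRole: string }

// Trimmed stand-in: only the fields buildSummary actually reads.
interface ResumeLike { name: string; struct_json: string | null }

function buildSummary(r: ResumeLike | null): ContentSummary {
  if (!r) return { name: '', careerSummary: '', latestRole: '' }
  try {
    const s = r.struct_json ? JSON.parse(r.struct_json) : {}
    const exp = Array.isArray(s.experience) ? s.experience[0] : null
    return {
      name: s.name || r.name,
      careerSummary: (s.career_summary || '').slice(0, 120),
      // Strip the dangling " at " when either title or company is missing.
      latestRole: exp ? `${exp.title || ''} at ${exp.company || ''}`.replace(/^ at | at $/, '') : '',
    }
  } catch {
    // Malformed struct_json: fall back to the library entry's name only.
    return { name: r.name, careerSummary: '', latestRole: '' }
  }
}
```

The `try/catch` matters because `struct_json` comes from user imports and may not parse; the fallback keeps the modal renderable either way.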
@@ -269,7 +269,7 @@ function toggleHistoryEntry(id: number) {
@click="runAnalyze"
>
<span v-if="surveyStore.loading" class="spinner" aria-hidden="true"></span>
{{ surveyStore.loading ? 'Analyzing…' : '🔍 Analyze' }}
{{ surveyStore.loading ? (surveyStore.taskStatus.stage ? surveyStore.taskStatus.stage + '…' : 'Analyzing…') : '🔍 Analyze' }}
</button>
<!-- Analyze error -->
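The nested ternary in the changed button label above is dense. Extracted as a pure function it reads more easily and is trivially testable; a sketch (`analyzeLabel` is hypothetical, not part of the store):

```typescript
// Sketch of the Analyze-button label logic. While the survey task is running,
// prefer the backend-reported stage name ("Parsing…", "Scoring…", …);
// otherwise fall back to the generic "Analyzing…". Idle shows the call-to-action.
function analyzeLabel(loading: boolean, stage: string | null | undefined): string {
  if (!loading) return '🔍 Analyze'
  return stage ? `${stage}…` : 'Analyzing…'
}
```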
@@ -56,6 +23,23 @@
<p v-if="uploadError" class="error">{{ uploadError }}</p>
</section>
<!-- Sync status label -->
<div v-if="store.lastSynced" class="sync-status-label">
Content synced from Resume Library {{ fmtDate(store.lastSynced) }}.
Changes here update the default library entry when you save.
</div>
<!-- Career Summary -->
<section class="form-section">
<h3>Career Summary</h3>
<p class="section-note">Used in cover letter generation and as your professional introduction.</p>
<div class="field-row">
<label for="career-summary">Career summary</label>
<textarea id="career-summary" v-model="store.career_summary"
rows="4" placeholder="2-3 sentences summarising your background and what you bring."></textarea>
</div>
</section>
<!-- Personal Information -->
<section class="form-section">
<h3>Personal Information</h3>
@@ -130,6 +147,57 @@
<button @click="store.addExperience()">+ Add Position</button>
</section>
<!-- Education -->
<section class="form-section">
<h3>Education</h3>
<div v-for="(edu, idx) in store.education" :key="edu.id" class="experience-card">
<div class="experience-card__header">
<span class="experience-card__label">Education {{ idx + 1 }}</span>
<button class="btn-remove" @click="store.removeEducation(idx)"
:aria-label="`Remove education entry ${idx + 1}`">Remove</button>
</div>
<div class="field-row">
<label>Institution</label>
<input v-model="edu.institution" placeholder="University or school name" />
</div>
<div class="field-row-grid">
<div class="field-row">
<label>Degree</label>
<input v-model="edu.degree" placeholder="e.g. B.S., M.A., Ph.D." />
</div>
<div class="field-row">
<label>Field of study</label>
<input v-model="edu.field" placeholder="e.g. Computer Science" />
</div>
</div>
<div class="field-row-grid">
<div class="field-row">
<label>Start year</label>
<input v-model="edu.start_date" placeholder="2015" />
</div>
<div class="field-row">
<label>End year</label>
<input v-model="edu.end_date" placeholder="2019" />
</div>
</div>
</div>
<button class="btn-secondary" @click="store.addEducation">+ Add education</button>
</section>
<!-- Achievements -->
<section class="form-section">
<h3>Achievements</h3>
<p class="section-note">Awards, certifications, open-source projects, publications.</p>
<div v-for="(ach, idx) in store.achievements" :key="idx" class="achievement-row">
<input :value="ach"
@input="store.achievements[idx] = ($event.target as HTMLInputElement).value"
placeholder="Describe the achievement" />
<button class="btn-remove" @click="store.achievements.splice(idx, 1)"
:aria-label="`Remove achievement ${idx + 1}`">&#x2715;</button>
</div>
<button class="btn-secondary" @click="store.achievements.push('')">+ Add achievement</button>
</section>
<!-- Preferences -->
<section class="form-section">
<h3>Preferences & Availability</h3>
@@ -302,6 +370,10 @@ function handleFileSelect(event: Event) {
uploadError.value = null
}
function fmtDate(iso: string) {
return new Date(iso).toLocaleDateString('en-US', { month: 'short', day: 'numeric', year: 'numeric' })
}
async function handleUpload() {
const file = pendingFile.value
if (!file) return
@@ -407,4 +479,34 @@ h3 { font-size: 1rem; font-weight: 600; margin-bottom: var(--space-3); }
.toggle-btn { margin-left: 10px; padding: 2px 10px; background: transparent; border: 1px solid var(--color-border); border-radius: 4px; color: var(--color-text-muted); cursor: pointer; font-size: 0.78rem; }
.loading { text-align: center; padding: var(--space-8); color: var(--color-text-muted); }
.replace-section { background: var(--color-surface-alt); border-radius: 8px; padding: var(--space-4); }
.sync-status-label {
font-size: 0.82rem; color: var(--color-text-muted);
border-left: 3px solid var(--color-primary);
padding: var(--space-2) var(--space-3);
margin-bottom: var(--space-6);
background: color-mix(in srgb, var(--color-primary) 6%, var(--color-surface-alt));
border-radius: 0 var(--radius-sm) var(--radius-sm) 0;
}
.achievement-row {
display: flex; gap: var(--space-2); align-items: center; margin-bottom: var(--space-2);
}
.achievement-row input { flex: 1; }
.btn-remove {
background: none; border: 1px solid var(--color-border);
border-radius: var(--radius-sm); padding: 2px var(--space-2);
cursor: pointer; color: var(--color-text-muted); font-size: 0.8rem;
white-space: nowrap;
}
.btn-remove:hover { color: var(--color-error, #dc2626); border-color: var(--color-error, #dc2626); }
.field-row-grid { display: grid; grid-template-columns: 1fr 1fr; gap: var(--space-3); }
.btn-secondary {
padding: 7px 16px; background: transparent;
border: 1px solid var(--color-border); border-radius: 6px;
color: var(--color-text-muted); cursor: pointer; font-size: 0.85rem;
}
.btn-secondary:hover { border-color: var(--color-accent); color: var(--color-accent); }
.experience-card__header {
display: flex; justify-content: space-between; align-items: center; margin-bottom: var(--space-3);
}
.experience-card__label { font-size: 0.82rem; color: var(--color-text-muted); font-weight: 500; }
</style>
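`fmtDate` is defined verbatim in both views in this diff. A shared helper would remove the duplication; a sketch (the module path `src/utils/date.ts` is hypothetical):

```typescript
// Shared date formatter: renders an ISO timestamp as e.g. "Apr 20, 2026",
// matching the per-view fmtDate implementations above.
function fmtDate(iso: string): string {
  return new Date(iso).toLocaleDateString('en-US', {
    month: 'short', day: 'numeric', year: 'numeric',
  })
}
```

Both views would then import the one helper instead of carrying their own copy, so a future format change lands in a single place.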