# Survey Assistant Vue SPA Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Port `app/pages/7_Survey.py` to a Vue 3 SPA view at `/survey/:id` with text/screenshot input, Quick/Detailed mode, synchronous LLM analysis, save-to-job, and response history.

**Architecture:** Three independent layers — backend (4 new FastAPI endpoints in `dev-api.py`), a Pinia store (`survey.ts`) that wraps those endpoints, and a single-column Vue view (`SurveyView.vue`) that reads from the store. The InterviewCard kanban component gains a "Survey →" emit button wired to the parent `InterviewsView`.

**Tech Stack:** Python/FastAPI (`dev-api.py`), Pinia + Vue 3 Composition API, Vitest (store tests), pytest + FastAPI TestClient (backend tests).

---

## File Map

| Action | Path | Responsibility |
|--------|------|----------------|
| Modify | `dev-api.py` | 4 new survey endpoints: vision health, analyze, save response, get history |
| Create | `tests/test_dev_api_survey.py` | Backend tests for all 4 endpoints |
| Create | `web/src/stores/survey.ts` | Pinia store: `fetchFor`, `analyze`, `saveResponse`, `clear` |
| Create | `web/src/stores/survey.test.ts` | Unit tests for survey store |
| Modify | `web/src/router/index.ts` | Add `/survey/:id` route (currently only `/survey` exists) |
| Modify | `web/src/views/SurveyView.vue` | Replace placeholder stub with full implementation |
| Modify | `web/src/components/InterviewCard.vue` | Add `emit('survey', job.id)` button |
| Modify | `web/src/views/InterviewsView.vue` | Add `@survey` on 3 kanban InterviewCard instances + "Survey →" button in pre-list row for `survey`-stage jobs |

---

## Task 1: Backend — 4 survey endpoints + tests

**Files:**

- Modify: `dev-api.py` — add after the interview prep block (~line 367)
- Create: `tests/test_dev_api_survey.py`
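For orientation before the context notes, the endpoint surface this task adds can be summarized in one place (an illustrative sketch — paths and response shapes restate the plan above, nothing new):

```python
# Illustrative summary of the four routes Task 1 adds (shapes per this plan).
ENDPOINTS = [
    ("GET",  "/api/vision/health"),                  # -> {"available": bool}
    ("POST", "/api/jobs/{job_id}/survey/analyze"),   # -> {"output": str, "source": str}
    ("POST", "/api/jobs/{job_id}/survey/responses"), # -> {"id": int}
    ("GET",  "/api/jobs/{job_id}/survey/responses"), # -> list of response dicts, newest first
]
```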
### Context for the implementer

`dev-api.py` already imports: `datetime`, `Path`, `BaseModel`, `Optional`, `HTTPException`. **`requests` is NOT currently imported** — add `import requests` to the module-level imports section at the top of the file.

The LLMRouter is NOT pre-imported — use a lazy import inside the endpoint function (consistent with the `submit_task` pattern).

The prompt builders below use **lowercase** mode (`"quick"` / `"detailed"`) because the frontend sends lowercase. The existing Streamlit page uses capitalized mode, but the Vue version standardizes to lowercase throughout.

`insert_survey_response` signature (from `scripts/db.py`):

```python
def insert_survey_response(
    db_path, job_id, survey_name, received_at, source,
    raw_input, image_path, mode, llm_output, reported_score,
) -> int
```

`get_survey_responses(db_path, job_id)` returns `list[dict]`, newest first.

Test import pattern (from `tests/test_dev_api_prep.py`):

```python
@pytest.fixture
def client():
    import sys
    sys.path.insert(0, "/Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa")
    from dev_api import app
    return TestClient(app)
```

Note: the module name is `dev_api` (underscore) even though the file is `dev-api.py`.

Mock pattern for DB helpers: `patch("dev_api._get_db", return_value=mock_db)` works for `_get_db` (a module-level attribute of `dev_api`), but `insert_survey_response` and `get_survey_responses` are imported lazily inside the endpoints, so patch them at the source module — `patch("scripts.db.insert_survey_response", ...)` and `patch("scripts.db.get_survey_responses", ...)`.

Mock pattern for LLMRouter: `patch("scripts.llm_router.LLMRouter")`. Because the endpoint does `from scripts.llm_router import LLMRouter` on each call, the patched attribute on `scripts.llm_router` is picked up; patching `dev_api.LLMRouter` would have no effect (the name never exists on `dev_api`).
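The patch-target distinction matters because `mock.patch` replaces an attribute on a module object, and a function-local import re-reads that attribute on every call — so the patch must target the module that owns the name. A minimal self-contained sketch, using stdlib `json.dumps` as a stand-in for `LLMRouter`:

```python
from unittest.mock import patch

def endpoint():
    # Lazy import, like LLMRouter inside survey_analyze: the name is
    # re-fetched from the json module on every call.
    from json import dumps
    return dumps({"ok": True})

# Patch the *source* module attribute; the lazy import picks it up.
with patch("json.dumps", return_value="PATCHED"):
    patched = endpoint()

unpatched = endpoint()  # patch reverted outside the context manager
```

Patching a `dumps` name on the calling module instead would do nothing here, since `endpoint()` never looks the name up there.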
- [ ] **Step 1: Write the failing tests**

Create `tests/test_dev_api_survey.py`:

```python
"""Tests for survey endpoints: vision health, analyze, save response, get history."""
import pytest
from unittest.mock import patch, MagicMock
from fastapi.testclient import TestClient


@pytest.fixture
def client():
    import sys
    sys.path.insert(0, "/Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa")
    from dev_api import app
    return TestClient(app)


# ── GET /api/vision/health ───────────────────────────────────────────────────

def test_vision_health_available(client):
    """Returns available=true when vision service responds 200."""
    mock_resp = MagicMock()
    mock_resp.status_code = 200
    with patch("dev_api.requests.get", return_value=mock_resp):
        resp = client.get("/api/vision/health")
    assert resp.status_code == 200
    assert resp.json() == {"available": True}


def test_vision_health_unavailable(client):
    """Returns available=false when vision service times out or errors."""
    with patch("dev_api.requests.get", side_effect=Exception("timeout")):
        resp = client.get("/api/vision/health")
    assert resp.status_code == 200
    assert resp.json() == {"available": False}


# ── POST /api/jobs/{id}/survey/analyze ──────────────────────────────────────

def test_analyze_text_quick(client):
    """Text mode quick analysis returns output and source=text_paste."""
    mock_router = MagicMock()
    mock_router.complete.return_value = "1. B — best option"
    mock_router.config.get.return_value = ["claude_code", "vllm"]
    # LLMRouter is imported lazily from scripts.llm_router — patch it there.
    with patch("scripts.llm_router.LLMRouter", return_value=mock_router):
        resp = client.post("/api/jobs/1/survey/analyze", json={
            "text": "Q1: Do you prefer teamwork?\nA. Solo B. Together",
            "mode": "quick",
        })
    assert resp.status_code == 200
    data = resp.json()
    assert data["source"] == "text_paste"
    assert "B" in data["output"]
    # System prompt must be passed for text path
    call_kwargs = mock_router.complete.call_args[1]
    assert "system" in call_kwargs
    assert "culture-fit survey" in call_kwargs["system"]


def test_analyze_text_detailed(client):
    """Text mode detailed analysis passes correct prompt."""
    mock_router = MagicMock()
    mock_router.complete.return_value = "Option A: good for... Option B: better because..."
    mock_router.config.get.return_value = []
    with patch("scripts.llm_router.LLMRouter", return_value=mock_router):
        resp = client.post("/api/jobs/1/survey/analyze", json={
            "text": "Q1: Describe your work style.",
            "mode": "detailed",
        })
    assert resp.status_code == 200
    assert resp.json()["source"] == "text_paste"


def test_analyze_image(client):
    """Image mode routes through vision path with NO system prompt."""
    mock_router = MagicMock()
    mock_router.complete.return_value = "1. C — collaborative choice"
    mock_router.config.get.return_value = ["vision_service", "claude_code"]
    with patch("scripts.llm_router.LLMRouter", return_value=mock_router):
        resp = client.post("/api/jobs/1/survey/analyze", json={
            "image_b64": "aGVsbG8=",
            "mode": "quick",
        })
    assert resp.status_code == 200
    data = resp.json()
    assert data["source"] == "screenshot"
    # No system prompt on vision path
    call_kwargs = mock_router.complete.call_args[1]
    assert "system" not in call_kwargs


def test_analyze_llm_failure(client):
    """Returns 500 when LLM raises an exception."""
    mock_router = MagicMock()
    mock_router.complete.side_effect = Exception("LLM unavailable")
    mock_router.config.get.return_value = []
    with patch("scripts.llm_router.LLMRouter", return_value=mock_router):
        resp = client.post("/api/jobs/1/survey/analyze", json={
            "text": "Q1: test",
            "mode": "quick",
        })
    assert resp.status_code == 500


# ── POST /api/jobs/{id}/survey/responses ────────────────────────────────────

def test_save_response_text(client):
    """Save text response writes to DB and returns id."""
    mock_db = MagicMock()
    with patch("dev_api._get_db", return_value=mock_db):
        # insert_survey_response is imported lazily from scripts.db — patch at source.
        with patch("scripts.db.insert_survey_response", return_value=42) as mock_insert:
            resp = client.post("/api/jobs/1/survey/responses", json={
                "mode": "quick",
                "source": "text_paste",
                "raw_input": "Q1: test question",
                "llm_output": "1. B — good reason",
            })
    assert resp.status_code == 200
    assert resp.json()["id"] == 42
    # received_at generated by backend — not None
    call_args = mock_insert.call_args
    assert call_args[1]["received_at"] is not None or call_args[0][3] is not None


def test_save_response_with_image(client, tmp_path, monkeypatch):
    """Save image response writes PNG file and stores path in DB."""
    monkeypatch.setenv("STAGING_DB", str(tmp_path / "test.db"))
    # Patch DATA_DIR inside dev_api to a temp path
    with patch("scripts.db.insert_survey_response", return_value=7) as mock_insert:
        with patch("dev_api.Path") as mock_path_cls:
            mock_path_cls.return_value.__truediv__ = lambda s, o: tmp_path / o
            resp = client.post("/api/jobs/1/survey/responses", json={
                "mode": "quick",
                "source": "screenshot",
                "image_b64": "aGVsbG8=",  # valid base64
                "llm_output": "1. B — reason",
            })
    assert resp.status_code == 200
    assert resp.json()["id"] == 7


# ── GET /api/jobs/{id}/survey/responses ─────────────────────────────────────

def test_get_history_empty(client):
    """Returns empty list when no history exists."""
    with patch("scripts.db.get_survey_responses", return_value=[]):
        resp = client.get("/api/jobs/1/survey/responses")
    assert resp.status_code == 200
    assert resp.json() == []


def test_get_history_populated(client):
    """Returns history rows newest first."""
    rows = [
        {"id": 2, "survey_name": "Round 2", "mode": "detailed", "source": "text_paste",
         "raw_input": None, "image_path": None, "llm_output": "Option A is best",
         "reported_score": "90%", "received_at": "2026-03-21T14:00:00",
         "created_at": "2026-03-21T14:00:01"},
        {"id": 1, "survey_name": "Round 1", "mode": "quick", "source": "text_paste",
         "raw_input": "Q1: test", "image_path": None, "llm_output": "1. B",
         "reported_score": None, "received_at": "2026-03-21T12:00:00",
         "created_at": "2026-03-21T12:00:01"},
    ]
    with patch("scripts.db.get_survey_responses", return_value=rows):
        resp = client.get("/api/jobs/1/survey/responses")
    assert resp.status_code == 200
    data = resp.json()
    assert len(data) == 2
    assert data[0]["id"] == 2
    assert data[0]["survey_name"] == "Round 2"
```

- [ ] **Step 2: Run tests to confirm they fail**

```bash
cd /Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa
/devl/miniconda3/envs/job-seeker/bin/pytest tests/test_dev_api_survey.py -v
```

Expected: All 10 tests FAIL with 404 / import errors (endpoints don't exist yet).

- [ ] **Step 3: Implement the 4 endpoints in `dev-api.py`**

First, add `import requests` to the module-level imports at the top of `dev-api.py` (after the existing `from typing import Optional` line).
Then add the following block after the `# Interview Prep endpoints` section (~line 367), before the `# GET /api/jobs/:id/cover_letter/pdf` section:

```python
# ── Survey endpoints ─────────────────────────────────────────────────────────

_SURVEY_SYSTEM = (
    "You are a job application advisor helping a candidate answer a culture-fit survey. "
    "The candidate values collaborative teamwork, clear communication, growth, and impact. "
    "Choose answers that present them in the best professional light."
)


def _build_text_prompt(text: str, mode: str) -> str:
    if mode == "quick":
        return (
            "Answer each survey question below. For each, give ONLY the letter of the best "
            "option and a single-sentence reason. Format exactly as:\n"
            "1. B — reason here\n2. A — reason here\n\n"
            f"Survey:\n{text}"
        )
    return (
        "Analyze each survey question below. For each question:\n"
        "- Briefly evaluate each option (1 sentence each)\n"
        "- State your recommendation with reasoning\n\n"
        f"Survey:\n{text}"
    )


def _build_image_prompt(mode: str) -> str:
    if mode == "quick":
        return (
            "This is a screenshot of a culture-fit survey. Read all questions and answer each "
            "with the letter of the best option for a collaborative, growth-oriented candidate. "
            "Format: '1. B — brief reason' on separate lines."
        )
    return (
        "This is a screenshot of a culture-fit survey. For each question, evaluate each option "
        "and recommend the best choice for a collaborative, growth-oriented candidate. "
        "Include a brief breakdown per option and a clear recommendation."
    )


@app.get("/api/vision/health")
def vision_health():
    try:
        r = requests.get("http://localhost:8002/health", timeout=2)
        return {"available": r.status_code == 200}
    except Exception:
        return {"available": False}


class SurveyAnalyzeBody(BaseModel):
    text: Optional[str] = None
    image_b64: Optional[str] = None
    mode: str  # "quick" or "detailed"


@app.post("/api/jobs/{job_id}/survey/analyze")
def survey_analyze(job_id: int, body: SurveyAnalyzeBody):
    try:
        from scripts.llm_router import LLMRouter
        router = LLMRouter()
        if body.image_b64:
            prompt = _build_image_prompt(body.mode)
            output = router.complete(
                prompt,
                images=[body.image_b64],
                fallback_order=router.config.get("vision_fallback_order"),
            )
            source = "screenshot"
        else:
            prompt = _build_text_prompt(body.text or "", body.mode)
            output = router.complete(
                prompt,
                system=_SURVEY_SYSTEM,
                fallback_order=router.config.get("research_fallback_order"),
            )
            source = "text_paste"
        return {"output": output, "source": source}
    except Exception as e:
        raise HTTPException(500, str(e))


class SurveySaveBody(BaseModel):
    survey_name: Optional[str] = None
    mode: str
    source: str
    raw_input: Optional[str] = None
    image_b64: Optional[str] = None
    llm_output: str
    reported_score: Optional[str] = None


@app.post("/api/jobs/{job_id}/survey/responses")
def save_survey_response(job_id: int, body: SurveySaveBody):
    from scripts.db import insert_survey_response

    received_at = datetime.now().isoformat()
    image_path = None
    if body.image_b64:
        import base64
        screenshots_dir = Path(DB_PATH).parent / "survey_screenshots" / str(job_id)
        screenshots_dir.mkdir(parents=True, exist_ok=True)
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        img_path = screenshots_dir / f"{timestamp}.png"
        img_path.write_bytes(base64.b64decode(body.image_b64))
        image_path = str(img_path)
    row_id = insert_survey_response(
        db_path=Path(DB_PATH),
        job_id=job_id,
        survey_name=body.survey_name,
        received_at=received_at,
        source=body.source,
        raw_input=body.raw_input,
        image_path=image_path,
        mode=body.mode,
        llm_output=body.llm_output,
        reported_score=body.reported_score,
    )
    return {"id": row_id}


@app.get("/api/jobs/{job_id}/survey/responses")
def get_survey_history(job_id: int):
    from scripts.db import get_survey_responses
    return get_survey_responses(db_path=Path(DB_PATH), job_id=job_id)
```

- [ ] **Step 4: Run tests — verify they pass**

```bash
/devl/miniconda3/envs/job-seeker/bin/pytest tests/test_dev_api_survey.py -v
```

Expected: 10/10 PASS. If `test_save_response_with_image` needs mock adjustments to the `Path`/directory creation, fix the test mock rather than the implementation.

- [ ] **Step 5: Run full test suite to catch regressions**

```bash
/devl/miniconda3/envs/job-seeker/bin/pytest tests/ -v
```

Expected: All previously-passing tests still pass.

- [ ] **Step 6: Commit**

```bash
cd /Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa
git add dev-api.py tests/test_dev_api_survey.py
git commit -m "feat(survey): add 4 backend survey endpoints with tests"
```

---

## Task 2: Survey Pinia store + unit tests

**Files:**

- Create: `web/src/stores/survey.ts`
- Create: `web/src/stores/survey.test.ts`

### Context for the implementer

Follow the exact same patterns as `web/src/stores/prep.ts`:

- `defineStore` with setup function (not options API)
- `useApiFetch` imported from `'../composables/useApi'` — returns `{ data: T | null, error: ApiError | null }`
- All refs initialized to their zero values

Test patterns from `web/src/stores/prep.test.ts`:

```typescript
vi.mock('../composables/useApi', () => ({ useApiFetch: vi.fn() }))
import { useApiFetch } from '../composables/useApi'
// ...
const mockApiFetch = vi.mocked(useApiFetch)
mockApiFetch.mockResolvedValueOnce({ data: ..., error: null })
```

The store has NO polling — survey analysis is synchronous. No `setInterval`/`clearInterval` needed.

`saveResponse` constructs the full save body by combining fields from `analysis` (set during `analyze`) with the args passed to `saveResponse`.
The `analysis` ref must include `mode` and `rawInput` so `saveResponse` can include them without extra parameters.

- [ ] **Step 1: Write the failing store tests**

Create `web/src/stores/survey.test.ts`:

```typescript
import { describe, it, expect, beforeEach, afterEach, vi } from 'vitest'
import { setActivePinia, createPinia } from 'pinia'
import { useSurveyStore } from './survey'

vi.mock('../composables/useApi', () => ({
  useApiFetch: vi.fn(),
}))
import { useApiFetch } from '../composables/useApi'

describe('useSurveyStore', () => {
  beforeEach(() => {
    setActivePinia(createPinia())
  })

  afterEach(() => {
    vi.clearAllMocks()
  })

  it('fetchFor loads history and vision availability in parallel', async () => {
    const mockApiFetch = vi.mocked(useApiFetch)
    mockApiFetch
      .mockResolvedValueOnce({ data: [], error: null }) // history
      .mockResolvedValueOnce({ data: { available: true }, error: null }) // vision

    const store = useSurveyStore()
    await store.fetchFor(1)

    expect(store.history).toEqual([])
    expect(store.visionAvailable).toBe(true)
    expect(store.currentJobId).toBe(1)
    expect(mockApiFetch).toHaveBeenCalledTimes(2)
  })

  it('fetchFor clears state when called for a different job', async () => {
    const mockApiFetch = vi.mocked(useApiFetch)
    // Job 1
    mockApiFetch
      .mockResolvedValueOnce({ data: [{ id: 1, llm_output: 'old' }], error: null })
      .mockResolvedValueOnce({ data: { available: false }, error: null })

    const store = useSurveyStore()
    await store.fetchFor(1)
    expect(store.history.length).toBe(1)

    // Job 2 — state must be cleared before new data arrives
    mockApiFetch
      .mockResolvedValueOnce({ data: [], error: null })
      .mockResolvedValueOnce({ data: { available: true }, error: null })

    await store.fetchFor(2)
    expect(store.history).toEqual([])
    expect(store.currentJobId).toBe(2)
  })

  it('analyze stores result including mode and rawInput', async () => {
    const mockApiFetch = vi.mocked(useApiFetch)
    mockApiFetch.mockResolvedValueOnce({
      data: { output: '1. B — reason', source: 'text_paste' },
      error: null,
    })

    const store = useSurveyStore()
    await store.analyze(1, { text: 'Q1: test', mode: 'quick' })

    expect(store.analysis).not.toBeNull()
    expect(store.analysis!.output).toBe('1. B — reason')
    expect(store.analysis!.source).toBe('text_paste')
    expect(store.analysis!.mode).toBe('quick')
    expect(store.analysis!.rawInput).toBe('Q1: test')
    expect(store.loading).toBe(false)
  })

  it('analyze sets error on failure', async () => {
    const mockApiFetch = vi.mocked(useApiFetch)
    mockApiFetch.mockResolvedValueOnce({
      data: null,
      error: { kind: 'http', status: 500, detail: 'LLM unavailable' },
    })

    const store = useSurveyStore()
    await store.analyze(1, { text: 'Q1: test', mode: 'quick' })

    expect(store.analysis).toBeNull()
    expect(store.error).toBeTruthy()
    expect(store.loading).toBe(false)
  })

  it('saveResponse prepends to history and clears analysis', async () => {
    const mockApiFetch = vi.mocked(useApiFetch)
    // Setup: fetchFor
    mockApiFetch
      .mockResolvedValueOnce({ data: [], error: null })
      .mockResolvedValueOnce({ data: { available: true }, error: null })

    const store = useSurveyStore()
    await store.fetchFor(1)

    // Set analysis state manually (as if analyze() was called)
    store.analysis = {
      output: '1. B — reason',
      source: 'text_paste',
      mode: 'quick',
      rawInput: 'Q1: test',
    }

    // Save
    mockApiFetch.mockResolvedValueOnce({
      data: { id: 42 },
      error: null,
    })

    await store.saveResponse(1, { surveyName: 'Round 1', reportedScore: '85%' })

    expect(store.history.length).toBe(1)
    expect(store.history[0].id).toBe(42)
    expect(store.history[0].llm_output).toBe('1. B — reason')
    expect(store.analysis).toBeNull()
    expect(store.saving).toBe(false)
  })

  it('clear resets all state to initial values', async () => {
    const mockApiFetch = vi.mocked(useApiFetch)
    mockApiFetch
      .mockResolvedValueOnce({ data: [{ id: 1, llm_output: 'test' }], error: null })
      .mockResolvedValueOnce({ data: { available: true }, error: null })

    const store = useSurveyStore()
    await store.fetchFor(1)
    store.clear()

    expect(store.history).toEqual([])
    expect(store.analysis).toBeNull()
    expect(store.visionAvailable).toBe(false)
    expect(store.loading).toBe(false)
    expect(store.saving).toBe(false)
    expect(store.error).toBeNull()
    expect(store.currentJobId).toBeNull()
  })
})
```

- [ ] **Step 2: Run tests to confirm they fail**

```bash
cd /Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa/web
npm run test -- survey.test.ts
```

Expected: All 6 tests FAIL (store doesn't exist yet).

- [ ] **Step 3: Implement `web/src/stores/survey.ts`**

```typescript
import { ref } from 'vue'
import { defineStore } from 'pinia'
import { useApiFetch } from '../composables/useApi'

export interface SurveyAnalysis {
  output: string
  source: 'text_paste' | 'screenshot'
  mode: 'quick' | 'detailed'
  rawInput: string | null
}

export interface SurveyResponse {
  id: number
  survey_name: string | null
  mode: 'quick' | 'detailed'
  source: string
  raw_input: string | null
  image_path: string | null
  llm_output: string
  reported_score: string | null
  received_at: string | null
  created_at: string | null
}

export const useSurveyStore = defineStore('survey', () => {
  const analysis = ref<SurveyAnalysis | null>(null)
  const history = ref<SurveyResponse[]>([])
  const loading = ref(false)
  const saving = ref(false)
  const error = ref<string | null>(null)
  const visionAvailable = ref(false)
  const currentJobId = ref<number | null>(null)

  async function fetchFor(jobId: number) {
    if (jobId !== currentJobId.value) {
      analysis.value = null
      history.value = []
      error.value = null
      visionAvailable.value = false
      currentJobId.value = jobId
    }
    const [historyResult, visionResult] = await Promise.all([
      useApiFetch<SurveyResponse[]>(`/api/jobs/${jobId}/survey/responses`),
      useApiFetch<{ available: boolean }>('/api/vision/health'),
    ])
    if (historyResult.error) {
      error.value = 'Could not load survey history.'
    } else {
      history.value = historyResult.data ?? []
    }
    visionAvailable.value = visionResult.data?.available ?? false
  }

  async function analyze(
    jobId: number,
    payload: { text?: string; image_b64?: string; mode: 'quick' | 'detailed' }
  ) {
    loading.value = true
    error.value = null
    const { data, error: fetchError } = await useApiFetch<{ output: string; source: string }>(
      `/api/jobs/${jobId}/survey/analyze`,
      { method: 'POST', body: JSON.stringify(payload) }
    )
    loading.value = false
    if (fetchError || !data) {
      error.value = 'Analysis failed. Please try again.'
      return
    }
    analysis.value = {
      output: data.output,
      source: data.source as 'text_paste' | 'screenshot',
      mode: payload.mode,
      rawInput: payload.text ?? null,
    }
  }

  async function saveResponse(
    jobId: number,
    args: { surveyName: string; reportedScore: string; image_b64?: string }
  ) {
    if (!analysis.value) return
    saving.value = true
    error.value = null
    const body = {
      survey_name: args.surveyName || undefined,
      mode: analysis.value.mode,
      source: analysis.value.source,
      raw_input: analysis.value.rawInput,
      image_b64: args.image_b64,
      llm_output: analysis.value.output,
      reported_score: args.reportedScore || undefined,
    }
    const { data, error: fetchError } = await useApiFetch<{ id: number }>(
      `/api/jobs/${jobId}/survey/responses`,
      { method: 'POST', body: JSON.stringify(body) }
    )
    saving.value = false
    if (fetchError || !data) {
      error.value = 'Save failed. Your analysis is preserved — try again.'
      return
    }
    // Prepend the saved response to history
    const saved: SurveyResponse = {
      id: data.id,
      survey_name: args.surveyName || null,
      mode: analysis.value.mode,
      source: analysis.value.source,
      raw_input: analysis.value.rawInput,
      image_path: null,
      llm_output: analysis.value.output,
      reported_score: args.reportedScore || null,
      received_at: new Date().toISOString(),
      created_at: new Date().toISOString(),
    }
    history.value = [saved, ...history.value]
    analysis.value = null
  }

  function clear() {
    analysis.value = null
    history.value = []
    loading.value = false
    saving.value = false
    error.value = null
    visionAvailable.value = false
    currentJobId.value = null
  }

  return {
    analysis,
    history,
    loading,
    saving,
    error,
    visionAvailable,
    currentJobId,
    fetchFor,
    analyze,
    saveResponse,
    clear,
  }
})
```

- [ ] **Step 4: Run store tests — verify they pass**

```bash
cd /Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa/web
npm run test -- survey.test.ts
```

Expected: 6/6 PASS.

- [ ] **Step 5: Run full frontend test suite**

```bash
npm run test
```

Expected: All previously-passing tests still pass.

- [ ] **Step 6: Commit**

```bash
cd /Library/Development/CircuitForge/peregrine/.worktrees/feature-vue-spa
git add web/src/stores/survey.ts web/src/stores/survey.test.ts
git commit -m "feat(survey): add survey Pinia store with tests"
```

---

## Task 3: SurveyView.vue + router + navigation wiring

**Files:**

- Modify: `web/src/router/index.ts` — add `/survey/:id` route
- Modify: `web/src/views/SurveyView.vue` — replace placeholder stub with full implementation
- Modify: `web/src/components/InterviewCard.vue` — add `emit('survey', job.id)` button
- Modify: `web/src/views/InterviewsView.vue` — add `@survey` on 3 `<InterviewCard>` instances + "Survey →" button in pre-list row for `survey`-stage jobs

### Context for the implementer

**Router:** Currently has `{ path: '/survey', component: ... }` but NOT `/survey/:id`. The spec requires both. Add the `:id` variant.
**SurveyView.vue:** Currently an 18-line placeholder stub. Replace it entirely.

Layout spec:

- Single column, `max-width: 760px`, centered (`margin: 0 auto`), padding `var(--space-6)`
- Uses `var(--space-*)` and `var(--color-*)` CSS variables (theme-aware — see other views for examples)
- Sticky context bar at the top
- Tabs: Text paste (always active) / Screenshot (disabled with `aria-disabled` when `!visionAvailable`)
- Mode cards: two full-width stacked cards (⚡ Quick / 📋 Detailed), selected card gets border highlight
- Analyze button: full-width, disabled when no input; spinner while loading
- Results card: appears when `analysis` set; `white-space: pre-wrap`; inline save form below
- History accordion: closed by default; `summary` element for toggle

**InterviewCard.vue:** The established emit pattern:

```typescript
// Line 12-14 (existing)
const emit = defineEmits<{ prep: [jobId: number] }>()

// Add 'survey' to this definition
const emit = defineEmits<{ prep: [jobId: number]; survey: [jobId: number] }>()
```

The existing "Prep →" button (line ~182) uses the `card-action` class and emits `prep` with the job id. Add a "Survey →" button, shown for stages `['survey', 'phone_screen', 'interviewing', 'offer']`, using the same `card-action` class and emitting `survey` with the job id.

**InterviewsView.vue:** There are **3** `<InterviewCard>` instances (kanban columns: phoneScreen line ~462, interviewing ~475, offerHired ~488) — NOT 4. The `survey`-stage jobs live in the pre-list section (lines ~372–432), which renders plain markup rows, not `<InterviewCard>` components. Two changes needed:

1. Add `@survey="router.push('/survey/' + $event)"` to all 3 `<InterviewCard>` instances (same pattern as `@prep`).
2. Add a "Survey →" button directly to the pre-list row template for `survey`-stage jobs. The pre-list row is at line ~373 inside `v-for="job in pagedApplied"`. Add a button after the existing `btn-move-pre` element that navigates to `/survey/` + the job id, styled consistently with the row's existing buttons.

**Mount guard:** Read `route.params.id` → redirect to `/interviews` if missing or non-numeric. Look up the job in `interviewsStore.jobs`; if its status is not in `['survey', 'phone_screen', 'interviewing', 'offer']`, redirect. Call `surveyStore.fetchFor(jobId)` on mount and `surveyStore.clear()` on unmount.

**useApiFetch body pattern:** Look at how `InterviewPrepView.vue` makes POST calls if needed — but the store handles all API calls; the view only calls store methods.

Look at `web/src/views/InterviewPrepView.vue` as the reference for how views use stores, handle route guards, and apply CSS variables. The theme variables file is at `web/src/assets/theme.css` or similar — check what exists.

- [ ] **Step 1: Verify existing theme variables and CSS patterns**

```bash
grep -r "var(--space\|var(--color\|var(--font" web/src/assets/ web/src/views/InterviewPrepView.vue 2>/dev/null | head -20
ls web/src/assets/
```

This confirms which CSS variables are available for the layout.

- [ ] **Step 2: Add `/survey/:id` route to router**

In `web/src/router/index.ts`, add after the existing `/survey` line:

```typescript
{ path: '/survey/:id', component: () => import('../views/SurveyView.vue') },
```

The existing `/survey` (no-id) route will continue to load `SurveyView.vue`, which is fine — the component mount guard immediately redirects to `/interviews` when `jobId` is missing/NaN. No router-level redirect needed.

- [ ] **Step 3: Implement SurveyView.vue**

Replace the placeholder stub entirely. Key sections:

```vue