Move POST /api/jobs/:id/survey/analyze off the FastAPI worker thread by routing it through the LLM task queue (same pattern as cover_letter, company_research, resume_optimize).

- Extract prompt builders + run_survey_analyze() to scripts/survey_assistant.py
- Add survey_analyze to LLM_TASK_TYPES (task_scheduler.py) with a 2.5 GB VRAM budget (text mode: phi3:mini; visual mode uses the vision service's own VRAM pool)
- Add an elif branch in task_runner._run_task; the result is stored as JSON in the error column
- Replace the sync endpoint body with submit_task(); add a GET /survey/analyze/task poll endpoint
- Update the survey.ts store: analyze() now fires the task and polls at a 3 s interval; it silently attaches to an existing in-flight task when is_new=false
- SurveyView button label shows the task stage while polling

Fixes the load-test spike: ~22 greenlets blocking on LLM inference at 100 concurrent users, causing 90 s poll timeouts on cover_letter and research tasks.