Peregrine by Circuit Forge LLC — LLM-powered job discovery and application pipeline https://circuitforge.tech/software/peregrine

Peregrine

Primary development happens at git.opensourcesolarpunk.com — GitHub and Codeberg are push mirrors. Issues and PRs are welcome on either platform.

License: BSL 1.1

Job search pipeline — by Circuit Forge LLC

"Tools for the jobs that the system made hard on purpose."


Job search is a second job nobody hired you for.

ATS filters designed to reject. Job boards that show the same listing eight times. Cover letter number forty-seven for a role that might already be filled. Hours of prep for a phone screen that lasts twelve minutes.

Peregrine handles the pipeline — discovery, matching, tracking, drafting, and prep — so you can spend your time doing the work you actually want to be doing.

LLM support is optional. The full discovery and tracking pipeline works without one. When you do configure a backend, the LLM drafts the parts that are genuinely miserable — cover letters, company research briefs, interview prep sheets — and waits for your approval before anything goes anywhere.

What Peregrine does not do

Peregrine does not submit job applications for you. You still have to go to each employer's site and click apply yourself.

This is intentional. Automated mass-applying is a bad experience for everyone — it's also a trust violation with employers who took the time to post a real role. Peregrine is a preparation and organization tool, not a bot.

What it does cover is everything before and after that click: finding the jobs, matching them against your resume, generating cover letters and prep materials, and once you've applied — tracking where you stand, classifying the emails that come back, and surfacing company research when an interview lands on your calendar. The submit button is yours. The rest of the grind is ours.

Exception: AIHawk is a separate, optional tool that handles LinkedIn Easy Apply automation. Peregrine integrates with it for AIHawk-compatible profiles, but it is not part of Peregrine's core pipeline.


Quick Start

1. Clone and install dependencies (Docker, NVIDIA toolkit if needed):

git clone https://git.opensourcesolarpunk.com/Circuit-Forge/peregrine
cd peregrine
./manage.sh setup

2. Start Peregrine:

./manage.sh start                       # remote profile (API-only, no GPU)
./manage.sh start --profile cpu         # local Ollama (CPU, or Metal GPU on Apple Silicon — see below)
./manage.sh start --profile single-gpu  # Ollama + Vision on GPU 0  (NVIDIA only)
./manage.sh start --profile dual-gpu    # Ollama + Vision + vLLM (GPU 0 + 1)  (NVIDIA only)

Or use make directly:

make start                        # remote profile
make start PROFILE=single-gpu

3. Open http://localhost:8501 — the setup wizard guides you through the rest.

macOS / Apple Silicon: Docker Desktop must be running. For Metal GPU-accelerated inference, install Ollama natively before starting — setup.sh will prompt you to do this. See Apple Silicon GPU below.

Windows: Not supported natively — use WSL2 with Ubuntu.

Installing to /opt or other system directories

If you clone into a root-owned directory (e.g. sudo git clone ... /opt/peregrine), two things need fixing:

1. Git ownership warning (fatal: detected dubious ownership) — ./manage.sh setup fixes this automatically. If you need git to work before running setup:

git config --global --add safe.directory /opt/peregrine

2. Preflight write access — preflight writes .env and compose.override.yml into the repo directory. Fix ownership once:

sudo chown -R $USER:$USER /opt/peregrine

After that, run everything without sudo.

Podman

Podman is rootless by default — no sudo needed. ./manage.sh setup will configure podman-compose if it isn't already present.

Docker

After ./manage.sh setup, log out and back in for docker group membership to take effect; until then, prefix commands with sudo.


Inference Profiles

Profile     Services started                        Use case
remote      app + searxng                           No GPU; LLM calls go to Anthropic / OpenAI
cpu         app + ollama + searxng                  No GPU; local models on CPU. On Apple Silicon, use with native Ollama for Metal acceleration — see below.
single-gpu  app + ollama + vision + searxng         One NVIDIA GPU: cover letters, research, vision
dual-gpu    app + ollama + vllm + vision + searxng  Two NVIDIA GPUs: GPU 0 = Ollama, GPU 1 = vLLM

Apple Silicon GPU

Docker Desktop on macOS runs in a Linux VM — it cannot access the Apple GPU. Metal-accelerated inference requires Ollama to run natively on the host.

setup.sh handles this automatically: it offers to install Ollama via Homebrew, starts it as a background service, and explains what happens next. If Ollama is running on port 11434 when you start Peregrine, preflight detects it, stubs out the Docker Ollama container, and routes inference through the native process — which uses Metal automatically.

To do it manually:

brew install ollama
brew services start ollama          # starts at login, uses Metal GPU
./manage.sh start --profile cpu     # preflight adopts native Ollama; Docker container is skipped

The cpu profile label is a slight misnomer in this context — Ollama will be running on the GPU. single-gpu and dual-gpu profiles are NVIDIA-specific and not applicable on Mac.


First-Run Wizard

On first launch the setup wizard walks through seven steps:

  1. Hardware — detects NVIDIA GPUs (Linux) or Apple Silicon GPU (macOS) and recommends a profile
  2. Tier — choose free, paid, or premium (or use dev_tier_override for local testing)
  3. Identity — name, email, phone, LinkedIn, career summary
  4. Resume — upload a PDF/DOCX for LLM parsing, or use the guided form builder
  5. Inference — configure LLM backends and API keys
  6. Search — job titles, locations, boards, keywords, blocklist
  7. Integrations — optional cloud storage, calendar, and notification services

Wizard state is saved after each step — a crash or browser close resumes where you left off. Re-enter the wizard any time via Settings → Developer → Reset wizard.
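The resume-after-crash behavior follows a simple persist-after-every-step pattern. A minimal illustration of the idea (the file name and layout here are illustrative, not Peregrine's actual storage format):

```python
import json
from pathlib import Path

# Illustrative only — Peregrine persists wizard state in its own format.
# The idea: write progress after every completed step, so a crash or
# closed browser resumes at the step you left off.
STATE_FILE = Path("wizard_state.json")

def save_step(step: int, answers: dict) -> None:
    STATE_FILE.write_text(json.dumps({"step": step, "answers": answers}))

def resume() -> tuple[int, dict]:
    if STATE_FILE.exists():
        state = json.loads(STATE_FILE.read_text())
        return state["step"], state["answers"]
    return 1, {}  # fresh start at step 1
```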


Features

Free
  • Job discovery (JobSpy + custom boards)
  • Resume keyword matching & gap analysis
  • Document storage sync (Google Drive, Dropbox, OneDrive, MEGA, Nextcloud)
  • Webhook notifications (Discord, Home Assistant)
  • Vue 3 SPA — full UI with onboarding wizard, job board, apply workspace, sort/filter, research modal, draft cover letter

Free with LLM¹
  • Cover letter generation
  • Company research briefs
  • Interview prep & practice Q&A
  • Survey assistant (culture-fit Q&A, screenshot analysis)
  • Wizard helpers (career summary, bullet expansion, skill suggestions, job title suggestions, mission notes)

Paid
  • Managed cloud LLM (no API key needed)
  • Email sync & auto-classification
  • LLM-powered keyword blocklist
  • Job tracking integrations (Notion, Airtable, Google Sheets)
  • Calendar sync (Google, Apple)
  • Slack notifications
  • CircuitForge shared cover-letter model

Premium with LLM¹ ²
  • Voice guidelines (custom writing style & tone)

Premium
  • Cover letter model fine-tuning (your writing, your model)
  • Multi-user support

¹ BYOK (bring your own key/backend) unlock: configure any LLM backend — a local Ollama or vLLM instance, or your own API key (Anthropic, OpenAI-compatible) — and all features marked Free with LLM or Premium with LLM unlock at no charge. The paid tier earns its price by providing managed cloud inference so you don't need a key at all, plus integrations and email sync.

² Voice guidelines requires the Premium tier only when no LLM backend is configured; with BYOK, it unlocks at any tier.
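The tier/BYOK rule in the footnotes amounts to a small gate. A hypothetical sketch of that logic; the names and structure are illustrative, not Peregrine's real feature flags:

```python
# Hypothetical sketch of the tier/BYOK gate described in footnote 1.
# "has_own_llm" means any configured backend: local Ollama/vLLM or your own API key.
def feature_unlocked(requirement: str, tier: str, has_own_llm: bool) -> bool:
    if requirement == "free":
        return True
    if requirement == "free_with_llm":
        # Paid/premium include managed inference; BYOK unlocks it on free.
        return tier in ("paid", "premium") or has_own_llm
    if requirement == "paid":
        return tier in ("paid", "premium")
    if requirement == "premium_with_llm":
        return tier == "premium" or has_own_llm
    if requirement == "premium":
        return tier == "premium"
    return False
```

Note that BYOK never unlocks plain Paid or Premium features — only the ones marked "with LLM".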


Email Sync

Monitors your inbox for job-related emails and automatically updates job stages (interview requests, rejections, survey links, offers).

Configure in Settings → Email. Requires IMAP access and, for Gmail, an App Password.
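The stage-update idea can be pictured as mapping incoming subject lines to pipeline stages. A toy sketch of that shape only — Peregrine's real classification is LLM-assisted (Paid tier), not a keyword list:

```python
import re

# Toy illustration of subject-line -> stage classification. The patterns and
# stage names are made up for this example; the real classifier is LLM-assisted.
STAGE_PATTERNS = [
    ("rejected",  re.compile(r"unfortunately|not moving forward|other candidates", re.I)),
    ("interview", re.compile(r"interview|phone screen|schedule a call", re.I)),
    ("offer",     re.compile(r"\boffer\b|compensation package", re.I)),
]

def classify_subject(subject: str) -> str:
    for stage, pattern in STAGE_PATTERNS:
        if pattern.search(subject):
            return stage
    return "unknown"  # left for a human (or the LLM) to triage
```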


Integrations

Connect external services in Settings → Integrations:

  • Job tracking: Notion, Airtable, Google Sheets
  • Document storage: Google Drive, Dropbox, OneDrive, MEGA, Nextcloud
  • Calendar: Google Calendar, Apple Calendar (CalDAV)
  • Notifications: Slack, Discord (webhook), Home Assistant
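On Discord's side, webhook notifications are a plain JSON POST to a webhook URL. A minimal standard-library sketch (the function names are illustrative; the URL comes from your server's Settings -> Integrations -> Webhooks page):

```python
import json
import urllib.request

# Discord incoming webhooks accept a JSON body with a "content" field.
def build_payload(message: str) -> bytes:
    return json.dumps({"content": message}).encode()

def notify(webhook_url: str, message: str) -> None:
    req = urllib.request.Request(
        webhook_url,
        data=build_payload(message),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req).close()
```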

CLI Reference (manage.sh)

manage.sh is the single entry point for all common operations — no need to remember Make targets or Docker commands.

./manage.sh setup               Install Docker/Podman + NVIDIA toolkit
./manage.sh start [--profile P] Preflight check then start services
./manage.sh stop                Stop all services
./manage.sh restart             Restart all services
./manage.sh status              Show running containers
./manage.sh logs [service]      Tail logs (default: app)
./manage.sh update              Pull latest images + rebuild app container
./manage.sh preflight           Check ports + resources; write .env
./manage.sh test                Run test suite
./manage.sh prepare-training    Scan docs for cover letters → training JSONL
./manage.sh finetune            Run LoRA fine-tune (needs --profile single-gpu+)
./manage.sh open                Open the web UI in your browser
./manage.sh clean               Remove containers, images, volumes (asks to confirm)

Developer Docs

Full documentation at: https://docs.circuitforge.tech/peregrine


License

Core discovery pipeline: MIT
LLM features (cover letter generation, company research, interview prep, UI): BSL 1.1

© 2026 Circuit Forge LLC