# Peregrine

AI-powered job search pipeline — by Circuit Forge LLC

> "Don't be evil, for real and forever."
Automates the full job search lifecycle: discovery → matching → cover letters → applications → interview prep. Privacy-first, local-first. Your data never leaves your machine.
## Quick Start

1. Clone and install dependencies (Docker, NVIDIA toolkit if needed):

       git clone https://git.opensourcesolarpunk.com/pyr0ball/peregrine
       cd peregrine
       ./manage.sh setup

2. Start Peregrine:

       ./manage.sh start                       # remote profile (API-only, no GPU)
       ./manage.sh start --profile cpu         # local Ollama on CPU
       ./manage.sh start --profile single-gpu  # Ollama + Vision on GPU 0
       ./manage.sh start --profile dual-gpu    # Ollama + Vision + vLLM (GPUs 0 and 1)

   Or use `make` directly:

       make start                     # remote profile
       make start PROFILE=single-gpu

3. Open http://localhost:8501 — the setup wizard guides you through the rest.
macOS: Docker Desktop must be running before you start. Windows: not supported natively — use WSL2 with Ubuntu.
## Inference Profiles

| Profile | Services started | Use case |
|---|---|---|
| `remote` | app + searxng | No GPU; LLM calls go to Anthropic / OpenAI |
| `cpu` | app + ollama + searxng | No GPU; local models on CPU (slow) |
| `single-gpu` | app + ollama + vision + searxng | One GPU: cover letters, research, vision |
| `dual-gpu` | app + ollama + vllm + vision + searxng | GPU 0 = Ollama, GPU 1 = vLLM |
## First-Run Wizard

On first launch the setup wizard walks through seven steps:

- Hardware — detects NVIDIA GPUs and recommends a profile
- Tier — choose free, paid, or premium (or use `dev_tier_override` for local testing)
- Identity — name, email, phone, LinkedIn, career summary
- Resume — upload a PDF/DOCX for LLM parsing, or use the guided form builder
- Inference — configure LLM backends and API keys
- Search — job titles, locations, boards, keywords, blocklist
- Integrations — optional cloud storage, calendar, and notification services
Wizard state is saved after each step — a crash or browser close resumes where you left off. Re-enter the wizard any time via Settings → Developer → Reset wizard.
## Features
| Feature | Tier |
|---|---|
| Job discovery (JobSpy + custom boards) | Free |
| Resume keyword matching | Free |
| Cover letter generation | Paid |
| Company research briefs | Paid |
| Interview prep & practice Q&A | Paid |
| Email sync & auto-classification | Paid |
| Survey assistant (culture-fit Q&A) | Paid |
| Integration connectors (Notion, Airtable, Google Sheets, etc.) | Paid |
| Calendar sync (Google, Apple) | Paid |
| Cover letter model fine-tuning | Premium |
| Multi-user support | Premium |
## Email Sync
Monitors your inbox for job-related emails and automatically updates job stages (interview requests, rejections, survey links, offers).
Configure in Settings → Email. Requires IMAP access and, for Gmail, an App Password.
## Integrations
Connect external services in Settings → Integrations:
- Job tracking: Notion, Airtable, Google Sheets
- Document storage: Google Drive, Dropbox, OneDrive, MEGA, Nextcloud
- Calendar: Google Calendar, Apple Calendar (CalDAV)
- Notifications: Slack, Discord (webhook), Home Assistant
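For the webhook notifiers, Slack and Discord each expect a small, well-documented JSON payload. A minimal sketch (function names are illustrative; Peregrine's actual notifier may send richer payloads):

```python
import json
import urllib.request

def webhook_payload(service: str, message: str) -> dict:
    """Minimal payload shapes for Slack and Discord incoming webhooks."""
    if service == "slack":
        return {"text": message}        # Slack incoming-webhook format
    if service == "discord":
        return {"content": message}     # Discord webhook format
    raise ValueError(f"unknown webhook service: {service}")

def notify(url: str, service: str, message: str) -> int:
    """POST the payload as JSON to the webhook URL; return the HTTP status."""
    req = urllib.request.Request(
        url,
        data=json.dumps(webhook_payload(service, message)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```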
## CLI Reference (`manage.sh`)

`manage.sh` is the single entry point for all common operations — no need to remember Make targets or Docker commands.

    ./manage.sh setup                Install Docker/Podman + NVIDIA toolkit
    ./manage.sh start [--profile P]  Preflight check, then start services
    ./manage.sh stop                 Stop all services
    ./manage.sh restart              Restart all services
    ./manage.sh status               Show running containers
    ./manage.sh logs [service]       Tail logs (default: app)
    ./manage.sh update               Pull latest images + rebuild app container
    ./manage.sh preflight            Check ports + resources; write .env
    ./manage.sh test                 Run test suite
    ./manage.sh prepare-training     Scan docs for cover letters → training JSONL
    ./manage.sh finetune             Run LoRA fine-tune (needs --profile single-gpu or dual-gpu)
    ./manage.sh open                 Open the web UI in your browser
    ./manage.sh clean                Remove containers, images, volumes (asks to confirm)
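The core of the `preflight` port check can be sketched in a few lines of Python (the port table and helper names are illustrative; the real script also inspects resources and writes `.env`):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

# Hypothetical defaults for illustration; actual assignments come from .env.
DEFAULT_PORTS = {"app": 8501, "ollama": 11434, "searxng": 8080}

def check_ports(ports: dict[str, int]) -> dict[str, bool]:
    """Map each service name to whether its configured port is still free."""
    return {name: not port_in_use(p) for name, p in ports.items()}
```

Anything reported as not free is then either a conflict to resolve or an already-running service to reuse.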
## Developer Docs
Full documentation at: https://docs.circuitforge.io/peregrine
## License

- Core discovery pipeline: MIT
- AI features (cover letter generation, company research, interview prep, UI): BSL 1.1
© 2026 Circuit Forge LLC