Peregrine
Primary development happens at git.opensourcesolarpunk.com — GitHub and Codeberg are push mirrors. Issues and PRs are welcome on either platform.
Job search pipeline — by Circuit Forge LLC
"Tools for the jobs that the system made hard on purpose."
Job search is a second job nobody hired you for.
ATS filters designed to reject. Job boards that show the same listing eight times. Cover letter number forty-seven for a role that might already be filled. Hours of prep for a phone screen that lasts twelve minutes.
Peregrine handles the pipeline — discovery, matching, tracking, drafting, and prep — so you can spend your time doing the work you actually want to be doing.
LLM support is optional. The full discovery and tracking pipeline works without one. When you do configure a backend, the LLM drafts the parts that are genuinely miserable — cover letters, company research briefs, interview prep sheets — and waits for your approval before anything goes anywhere.
What Peregrine does not do
Peregrine does not submit job applications for you. You still have to go to each employer's site and click apply yourself.
This is intentional. Automated mass-applying is a bad experience for everyone — it's also a trust violation with employers who took the time to post a real role. Peregrine is a preparation and organization tool, not a bot.
What it does cover is everything before and after that click: finding the jobs, matching them against your resume, generating cover letters and prep materials, and once you've applied — tracking where you stand, classifying the emails that come back, and surfacing company research when an interview lands on your calendar. The submit button is yours. The rest of the grind is ours.
Exception: AIHawk is a separate, optional tool that handles LinkedIn Easy Apply automation. Peregrine integrates with it for AIHawk-compatible profiles, but it is not part of Peregrine's core pipeline.
Quick Start
1. Clone and install dependencies (Docker, NVIDIA toolkit if needed):
git clone https://git.opensourcesolarpunk.com/Circuit-Forge/peregrine
cd peregrine
./manage.sh setup
2. Start Peregrine:
./manage.sh start # remote profile (API-only, no GPU)
./manage.sh start --profile cpu # local Ollama (CPU, or Metal GPU on Apple Silicon — see below)
./manage.sh start --profile single-gpu # Ollama + Vision on GPU 0 (NVIDIA only)
./manage.sh start --profile dual-gpu # Ollama + Vision + vLLM (GPU 0 + 1) (NVIDIA only)
Or use make directly:
make start # remote profile
make start PROFILE=single-gpu
3. Open http://localhost:8501 — the setup wizard guides you through the rest.
macOS / Apple Silicon: Docker Desktop must be running. For Metal GPU-accelerated inference, install Ollama natively before starting — setup.sh will prompt you to do this. See Apple Silicon GPU below.
Windows: Not supported — use WSL2 with Ubuntu.
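If you script the startup, a small wait loop avoids opening the browser before the app is listening on port 8501. This is an optional convenience sketch, not part of `manage.sh` (the `wait_for_app` helper name is ours):

```shell
# wait_for_app N — poll http://localhost:8501 up to N times, 2s apart.
# Returns 0 once the app answers, 1 if it never does.
wait_for_app() {
  attempts="$1"
  while [ "$attempts" -gt 0 ]; do
    if curl -fsS --max-time 2 http://localhost:8501 >/dev/null 2>&1; then
      return 0
    fi
    attempts=$((attempts - 1))
    sleep 2
  done
  return 1
}

# Usage: ./manage.sh start && wait_for_app 30 && ./manage.sh open
```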
Installing to /opt or other system directories
If you clone into a root-owned directory (e.g. sudo git clone ... /opt/peregrine), two things need fixing:
1. Git ownership warning (fatal: detected dubious ownership) — ./manage.sh setup fixes this automatically. If you need git to work before running setup:
git config --global --add safe.directory /opt/peregrine
2. Preflight write access — preflight writes .env and compose.override.yml into the repo directory. Fix ownership once:
sudo chown -R $USER:$USER /opt/peregrine
After that, run everything without sudo.
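Since preflight needs to write `.env` and `compose.override.yml` into the repo directory, a quick write-access check can save a failed run. A small sketch (the helper name is ours, not part of `manage.sh`):

```shell
# check_repo_writable DIR — report whether preflight can write into DIR.
check_repo_writable() {
  if [ -w "$1" ]; then
    echo "writable"
  else
    echo "fix ownership first (sudo chown -R \$USER:\$USER $1)"
  fi
}

# Usage:
#   check_repo_writable /opt/peregrine
```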
Podman
Podman is rootless by default — no sudo needed. ./manage.sh setup will configure podman-compose if it isn't already present.
Docker
After ./manage.sh setup, log out and back in for docker group membership to take effect. Until then, prefix commands with sudo. After re-login, sudo is no longer required.
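To confirm the group change has actually taken effect in your current shell, you can check your session's group list (a convenience check, not something `manage.sh` provides):

```shell
# docker_group_status — report whether this session already has the docker group.
docker_group_status() {
  if id -nG | grep -qw docker; then
    echo "docker group active"
  else
    echo "re-login needed (or start a subshell with: newgrp docker)"
  fi
}

docker_group_status
```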
Inference Profiles
| Profile | Services started | Use case |
|---|---|---|
| remote | app + searxng | No GPU; LLM calls go to Anthropic / OpenAI |
| cpu | app + ollama + searxng | No GPU; local models on CPU. On Apple Silicon, use with native Ollama for Metal acceleration — see below. |
| single-gpu | app + ollama + vision + searxng | One NVIDIA GPU: cover letters, research, vision |
| dual-gpu | app + ollama + vllm + vision + searxng | Two NVIDIA GPUs: GPU 0 = Ollama, GPU 1 = vLLM |
Apple Silicon GPU
Docker Desktop on macOS runs in a Linux VM — it cannot access the Apple GPU. Metal-accelerated inference requires Ollama to run natively on the host.
setup.sh handles this automatically: it offers to install Ollama via Homebrew, starts it as a background service, and explains what happens next. If Ollama is running on port 11434 when you start Peregrine, preflight detects it, stubs out the Docker Ollama container, and routes inference through the native process — which uses Metal automatically.
To do it manually:
brew install ollama
brew services start ollama # starts at login, uses Metal GPU
./manage.sh start --profile cpu # preflight adopts native Ollama; Docker container is skipped
The cpu profile label is a slight misnomer in this context — Ollama will be running on the GPU. single-gpu and dual-gpu profiles are NVIDIA-specific and not applicable on Mac.
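The detection preflight performs amounts to probing port 11434. A sketch of the idea (assumed logic, not the actual preflight code; `/api/version` is a standard Ollama endpoint):

```shell
# detect_native_ollama — report whether a native Ollama answers on :11434.
detect_native_ollama() {
  if curl -fsS --max-time 2 http://localhost:11434/api/version >/dev/null 2>&1; then
    echo "native Ollama detected; Docker Ollama container will be stubbed out"
  else
    echo "no native Ollama; Docker Ollama container will be used"
  fi
}

detect_native_ollama
```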
First-Run Wizard
On first launch the setup wizard walks through seven steps:
- Hardware — detects NVIDIA GPUs (Linux) or Apple Silicon GPU (macOS) and recommends a profile
- Tier — choose free, paid, or premium (or use dev_tier_override for local testing)
- Identity — name, email, phone, LinkedIn, career summary
- Resume — upload a PDF/DOCX for LLM parsing, or use the guided form builder
- Inference — configure LLM backends and API keys
- Search — job titles, locations, boards, keywords, blocklist
- Integrations — optional cloud storage, calendar, and notification services
Wizard state is saved after each step — a crash or browser close resumes where you left off. Re-enter the wizard any time via Settings → Developer → Reset wizard.
Features
| Feature | Tier |
|---|---|
| Job discovery (JobSpy + custom boards) | Free |
| Resume keyword matching & gap analysis | Free |
| Document storage sync (Google Drive, Dropbox, OneDrive, MEGA, Nextcloud) | Free |
| Webhook notifications (Discord, Home Assistant) | Free |
| Cover letter generation | Free with LLM¹ |
| Company research briefs | Free with LLM¹ |
| Interview prep & practice Q&A | Free with LLM¹ |
| Survey assistant (culture-fit Q&A, screenshot analysis) | Free with LLM¹ |
| Wizard helpers (career summary, bullet expansion, skill suggestions, job title suggestions, mission notes) | Free with LLM¹ |
| Managed cloud LLM (no API key needed) | Paid |
| Email sync & auto-classification | Paid |
| LLM-powered keyword blocklist | Paid |
| Job tracking integrations (Notion, Airtable, Google Sheets) | Paid |
| Calendar sync (Google, Apple) | Paid |
| Slack notifications | Paid |
| CircuitForge shared cover-letter model | Paid |
| Vue 3 SPA beta UI | Free |
| Voice guidelines (custom writing style & tone) | Premium with LLM¹ ² |
| Cover letter model fine-tuning (your writing, your model) | Premium |
| Multi-user support | Premium |
¹ BYOK (bring your own key/backend) unlock: configure any LLM backend — a local Ollama or vLLM instance, or your own API key (Anthropic, OpenAI-compatible) — and all features marked Free with LLM or Premium with LLM unlock at no charge. The paid tier earns its price by providing managed cloud inference so you don't need a key at all, plus integrations and email sync.
² Without a configured LLM backend, voice guidelines requires the Premium tier; with BYOK, it unlocks at any tier.
Email Sync
Monitors your inbox for job-related emails and automatically updates job stages (interview requests, rejections, survey links, offers).
Configure in Settings → Email. Requires IMAP access and, for Gmail, an App Password.
Integrations
Connect external services in Settings → Integrations:
- Job tracking: Notion, Airtable, Google Sheets
- Document storage: Google Drive, Dropbox, OneDrive, MEGA, Nextcloud
- Calendar: Google Calendar, Apple Calendar (CalDAV)
- Notifications: Slack, Discord (webhook), Home Assistant
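The Discord integration is a plain incoming webhook, so a notification boils down to a JSON POST. A hedged sketch (the payload shape follows Discord's webhook API; the `build_payload` helper and message text are illustrative, and the naive quoting assumes the message contains no quotes or backslashes):

```shell
# build_payload TEXT — wrap a message in Discord's minimal webhook JSON body.
# Naive: does not escape quotes/backslashes in TEXT.
build_payload() {
  printf '{"content":"%s"}' "$1"
}

# Usage (URL is a placeholder for your configured webhook):
#   curl -fsS -X POST "$DISCORD_WEBHOOK_URL" \
#     -H 'Content-Type: application/json' \
#     -d "$(build_payload 'New job match found')"
```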
CLI Reference (manage.sh)
manage.sh is the single entry point for all common operations — no need to remember Make targets or Docker commands.
| Command | Description |
|---|---|
| ./manage.sh setup | Install Docker/Podman + NVIDIA toolkit |
| ./manage.sh start [--profile P] | Preflight check, then start services |
| ./manage.sh stop | Stop all services |
| ./manage.sh restart | Restart all services |
| ./manage.sh status | Show running containers |
| ./manage.sh logs [service] | Tail logs (default: app) |
| ./manage.sh update | Pull latest images + rebuild app container |
| ./manage.sh preflight | Check ports + resources; write .env |
| ./manage.sh test | Run test suite |
| ./manage.sh prepare-training | Scan docs for cover letters → training JSONL |
| ./manage.sh finetune | Run LoRA fine-tune (needs --profile single-gpu+) |
| ./manage.sh open | Open the web UI in your browser |
| ./manage.sh clean | Remove containers, images, volumes (asks to confirm) |
Developer Docs
Full documentation at: https://docs.circuitforge.tech/peregrine
License
Core discovery pipeline: MIT
LLM features (cover letter generation, company research, interview prep, UI): BSL 1.1
© 2026 Circuit Forge LLC