Latest commit:

- Add `circuitforge_core/resources/inference/llm_server.py`: generic OpenAI-compatible FastAPI server for any HuggingFace causal LM (Phi-4-mini-instruct, Qwen2.5-3B-Instruct)
- Add `service_manager.py` + `service_probe.py`: ProcessSpec start/stop/is_running support (Popen-based; a socket probe confirms readiness before a service is marked running)
- Update all 4 public GPU profiles to use ProcessSpec→llm_server instead of Docker vllm: 6gb (max_mb 5500), 8gb (max_mb 6500), 16gb/24gb (max_mb 9000)
- Model candidates: Phi-4-mini-instruct first (7.2 GB), Qwen2.5-3B-Instruct fallback (5.8 GB)
- Remove `ouro_server.py` (Ouro is incompatible with transformers 5.x; the vllm Docker image is also incompatible)
- Add 17 tests for ServiceManager ProcessSpec (start/stop/is_running/list/get_url)
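The Popen-plus-socket-probe flow described above can be sketched in pure stdlib Python. This is a minimal illustration, not the project's actual code: the `ProcessSpec` fields, the `probe` helper, and the `ServiceManager` method signatures are assumptions inferred from the commit description (the real implementations live in `service_manager.py` and `service_probe.py`).

```python
import socket
import subprocess
import time
from dataclasses import dataclass

# Hypothetical sketch of the ProcessSpec/ServiceManager design: start a child
# process with Popen, then only mark it "running" once a TCP probe succeeds.

@dataclass
class ProcessSpec:
    name: str
    command: list  # argv passed straight to subprocess.Popen
    host: str = "127.0.0.1"
    port: int = 8000

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Socket probe: the service counts as ready once its port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class ServiceManager:
    def __init__(self):
        self._procs = {}  # name -> (ProcessSpec, Popen)

    def start(self, spec: ProcessSpec, wait_s: float = 10.0) -> bool:
        proc = subprocess.Popen(spec.command)
        deadline = time.monotonic() + wait_s
        while time.monotonic() < deadline:
            if probe(spec.host, spec.port):
                self._procs[spec.name] = (spec, proc)
                return True
            time.sleep(0.2)
        proc.terminate()  # never came up: don't leave an orphan behind
        return False

    def is_running(self, name: str) -> bool:
        entry = self._procs.get(name)
        return entry is not None and entry[1].poll() is None

    def stop(self, name: str) -> None:
        _spec, proc = self._procs.pop(name)
        proc.terminate()
        proc.wait(timeout=5)

    def get_url(self, name: str) -> str:
        spec, _proc = self._procs[name]
        return f"http://{spec.host}:{spec.port}"
```

The key design point is that `start` blocks on the probe rather than trusting `Popen` alone: a model server can take seconds to bind its port, and a PID existing does not mean the HTTP endpoint is ready.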
# circuitforge-core

Shared scaffold for CircuitForge products.
## Modules
- `circuitforge_core.db` — SQLite connection factory and migration runner
- `circuitforge_core.llm` — LLM router with fallback chain
- `circuitforge_core.tiers` — Tier system with BYOK and local vision unlocks
- `circuitforge_core.config` — Env validation and .env loader
- `circuitforge_core.vision` — Vision router stub (v0.2+)
- `circuitforge_core.wizard` — First-run wizard base class stub
- `circuitforge_core.pipeline` — Staging queue stub (v0.2+)
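A fallback chain like the one `circuitforge_core.llm` describes can be sketched as follows. The README does not show the router's real API, so `ProviderError`, `route`, and the provider-tuple shape here are illustrative assumptions only: try each backend in order and return the first success.

```python
# Hypothetical sketch of an LLM fallback chain; names and signatures are
# assumptions, not circuitforge_core.llm's actual API.

class ProviderError(Exception):
    """Raised when a single provider fails or when the whole chain is exhausted."""

def route(prompt, providers):
    """Try each (name, callable) provider in order; return (name, completion)
    from the first one that succeeds."""
    failures = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as exc:
            failures.append((name, str(exc)))  # record and fall through
    raise ProviderError(f"all providers failed: {failures}")
```

The fall-through-on-error loop is the whole trick: a dead primary backend degrades to the next candidate instead of surfacing an error to the caller.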
## Install

```bash
pip install -e .
```
## License
BSL 1.1 — see LICENSE