circuitforge-core/circuitforge_core
pyr0ball c78341fc6f feat(orch): replace Ouro/vllm-Docker with generic HF inference server; add ProcessSpec
- Add circuitforge_core/resources/inference/llm_server.py: generic OpenAI-compatible
  FastAPI server for any HuggingFace causal LM (Phi-4-mini-instruct, Qwen2.5-3B-Instruct)
- Add service_manager.py + service_probe.py: ProcessSpec start/stop/is_running support
  (Popen-based; socket probe confirms readiness before marking running)
- Update all 4 public GPU profiles to use ProcessSpec→llm_server instead of Docker vllm:
  6gb (max_mb 5500), 8gb (max_mb 6500), 16gb/24gb (max_mb 9000)
- Model candidates: Phi-4-mini-instruct first (7.2GB), Qwen2.5-3B-Instruct fallback (5.8GB)
- Remove ouro_server.py (Ouro incompatible with transformers 5.x; vllm Docker also incompatible)
- Add 17 tests for ServiceManager ProcessSpec (start/stop/is_running/list/get_url)
2026-04-02 15:33:08 -07:00
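The ProcessSpec mechanism described above (Popen-based launch, with a socket probe confirming readiness before the service is marked running) can be sketched roughly as follows. This is an illustrative reconstruction, not the actual `service_manager.py`/`service_probe.py` code: the class names `ProcessSpec` and `ServiceManager` come from the commit message, but every field, method signature, and timeout value here is an assumption.

```python
import socket
import subprocess
import sys
import time
from dataclasses import dataclass

# Hypothetical sketch of the ProcessSpec pattern from the commit message:
# a Popen-launched service is only marked "running" once a TCP socket
# probe confirms something is listening on its port.

@dataclass
class ProcessSpec:
    name: str
    argv: list                  # command line, e.g. [sys.executable, "llm_server.py", ...]
    host: str = "127.0.0.1"
    port: int = 8000
    startup_timeout: float = 30.0   # assumed default, not from the source

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

class ServiceManager:
    def __init__(self):
        self._procs = {}

    def start(self, spec: ProcessSpec) -> bool:
        proc = subprocess.Popen(spec.argv)
        deadline = time.monotonic() + spec.startup_timeout
        # Poll the socket until the server answers or the deadline passes.
        while time.monotonic() < deadline:
            if proc.poll() is not None:      # process died during startup
                return False
            if probe(spec.host, spec.port):
                self._procs[spec.name] = (spec, proc)
                return True
            time.sleep(0.2)
        proc.terminate()                     # never became ready; clean up
        return False

    def is_running(self, name: str) -> bool:
        entry = self._procs.get(name)
        return entry is not None and entry[1].poll() is None

    def stop(self, name: str) -> None:
        entry = self._procs.pop(name, None)
        if entry:
            entry[1].terminate()
            entry[1].wait(timeout=10)

    def get_url(self, name: str) -> str:
        spec, _ = self._procs[name]
        return f"http://{spec.host}:{spec.port}"
```

The design point the commit highlights is that `Popen` returning is not the same as the inference server being ready to serve: a model-loading FastAPI process can take many seconds to bind its port, so readiness is gated on the probe rather than on process creation.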
path         last commit                                                                               date
config       feat: add config module and vision router stub                                            2026-03-25 11:08:03 -07:00
db           feat(orch): replace Ouro/vllm-Docker with generic HF inference server; add ProcessSpec    2026-04-02 15:33:08 -07:00
llm          fix: TTL sweep, immutability, service-scoped release, logger in orch alloc                2026-04-02 12:55:38 -07:00
pipeline     feat: add wizard and pipeline stubs                                                       2026-03-25 11:09:40 -07:00
resources    feat(orch): replace Ouro/vllm-Docker with generic HF inference server; add ProcessSpec    2026-04-02 15:33:08 -07:00
tasks        fix(scheduler): join batch worker threads in shutdown()                                   2026-04-01 11:21:30 -07:00
tiers        fix(core): SQLite timeout=30, INSERT OR IGNORE migrations, parameterize tier unlockables  2026-03-31 10:37:51 -07:00
vision       feat: add config module and vision router stub                                            2026-03-25 11:08:03 -07:00
wizard       feat: add wizard and pipeline stubs                                                       2026-03-25 11:09:40 -07:00
__init__.py  feat: scaffold circuitforge-core package                                                  2026-03-25 11:02:26 -07:00