-
circuitforge-core v0.10.0 — Community Module (Stable)
Released 2026-04-12 17:24:31 -07:00 · 8 commits to main since this release
Added
- `circuitforge_core.community` — shared community signal module (BSL 1.1)
- `CommunityDB`: psycopg2 connection pool with `run_migrations()`, idempotent on every startup
- `CommunityPost`: frozen dataclass capturing user-authored community posts with item snapshots
- `SharedStore`: base class for product-specific community stores with typed pg read/write helpers
- Migration 001: `community_posts` schema; Migration 002: `community_reactions` stub
- `psycopg2-binary` added to `[community]` optional extras
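A frozen dataclass in the style of `CommunityPost` can be sketched as below; the field names here are illustrative assumptions, not the actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of a frozen (immutable) dataclass like CommunityPost;
# the field names are assumptions, not the real schema.
@dataclass(frozen=True)
class CommunityPost:
    post_id: str
    author: str
    body: str
    item_snapshot: tuple = ()  # immutable snapshot of the referenced item

post = CommunityPost("p1", "alice", "Great board!", (("sku", "X-100"),))
```

Frozen dataclasses raise `FrozenInstanceError` on attribute assignment, which keeps stored posts immutable after construction.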
See CHANGELOG.md for full history.
Downloads
- Source code (ZIP)
- Source code (TAR.GZ)
-
Released 2026-04-02 23:05:22 -07:00 · 53 commits to main since this release

What's new
`circuitforge_core.manage` (closes #6)
Replaces the bash-only `manage.sh` with a cross-platform Python process manager that runs natively on Linux, macOS, and Windows — no WSL2, no Docker required.
- Docker mode (auto-detected, or `--mode docker`): wraps `docker compose` / `docker-compose`
- Native mode (`--mode native`, default on Windows): PID-file process management
- `platformdirs` for OS-appropriate PID and log paths
- Cross-platform kill (SIGTERM/SIGKILL on Unix, `taskkill /F` on Windows)
- Polling log tail — no `tail -f` dependency
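The cross-platform kill described above can be sketched roughly as follows; this is a minimal stand-in, not the real `manage` implementation.

```python
import os
import signal
import subprocess
import sys

def kill_pid(pid: int, force: bool = False) -> None:
    """Terminate a process by PID on either platform (sketch, not the real manage.py).

    Unix: SIGTERM by default, SIGKILL when force=True.
    Windows: taskkill, adding /F when force=True.
    """
    if sys.platform == "win32":
        cmd = ["taskkill", "/PID", str(pid)]
        if force:
            cmd.insert(1, "/F")
        subprocess.run(cmd, check=False)
    else:
        os.kill(pid, signal.SIGKILL if force else signal.SIGTERM)
```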
Getting started in a product:

```shell
python -m circuitforge_core.manage install-shims   # → writes manage.sh, manage.ps1, manage.toml.example
cp manage.toml.example manage.toml                 # edit for your services
./manage.sh start
```

Tests
279 total (+36 new): config (6), docker_mode (9), native_mode (21)
Downloads
- Source code (ZIP)
- Source code (TAR.GZ)
-
Released 2026-04-02 22:13:01 -07:00 · 56 commits to main since this release

What's new
Agent watchdog (closes #15)
Coordinator restarts no longer require manual agent restarts. Known nodes are persisted to SQLite and restored on startup; remote nodes reappear within ~10 s automatically. Agent processes now run a 30 s reconnect loop so they re-register whenever the coordinator comes back.
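The agent-side reconnect loop described above can be sketched as below; the release notes don't show the `AgentSupervisor` API, so the `register`/`connected` callables are hypothetical stand-ins.

```python
import time

def reconnect_loop(register, connected, interval_s=30.0, max_ticks=None):
    """Every interval_s seconds, re-register with the coordinator if disconnected.

    Sketch only: register() and connected() are hypothetical callables, and
    max_ticks exists so the loop can be bounded for testing.
    """
    ticks = 0
    while max_ticks is None or ticks < max_ticks:
        if not connected():
            try:
                register()
            except ConnectionError:
                pass  # coordinator still down; try again on the next tick
        ticks += 1
        time.sleep(interval_s)
```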
Ollama adopt-if-running (closes #16)
adopt: truein any ProcessSpec lets cf-orch claim an already-running process rather than spawning a new one — the primary use case being Ollama running as a system daemon. All GPU profiles now have an Ollama managed block withadopt: trueandhealth_path: /api/tagsso Ollama VRAM is accounted for in the allocator and the service appears in the dashboard.Tests
243 total (+27 new): NodeStore (8), AgentSupervisor watchdog (8), Ollama adopt (11)
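Based on the two fields named above (`adopt`, `health_path`), such a managed block might look like this; the surrounding keys and the service name are assumptions about the profile layout:

```yaml
managed:
  ollama:
    adopt: true               # claim the already-running system daemon
    health_path: /api/tags    # Ollama's tags endpoint doubles as a health check
```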
Downloads
- Source code (ZIP)
- Source code (TAR.GZ)
-
Released 2026-04-02 18:56:49 -07:00 · 60 commits to main since this release

What's new
Hardware module (`circuitforge_core.hardware`)
- `detect_hardware()` → VRAM tier selection → `generate_profile()` for llm.yaml-compatible config
- Supports nvidia-smi, rocm-smi, Apple silicon, CPU fallback
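The VRAM-tier step between `detect_hardware()` and `generate_profile()` can be sketched like this; the 6/8/16/24 GB tiers come from the GPU profiles shipped in this release, but the exact cutoff logic is an assumption.

```python
def select_tier(vram_gb: float) -> int:
    """Return the largest supported profile tier (in GB) that fits.

    Sketch only: tiers mirror the shipped 6/8/16/24 GB GPU profiles;
    0 means fall back to the CPU profile.
    """
    for tier in (24, 16, 8, 6):
        if vram_gb >= tier:
            return tier
    return 0  # CPU fallback
```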
cf-docuvision service (`circuitforge_core.resources.docuvision`)
- FastAPI wrapper for ByteDance/Dolphin-v2 (Qwen2.5-VL, ~8 GB VRAM)
- `POST /extract` with hint modes: auto / table / text / form
- ProcessSpec `managed:` blocks wired into all GPU profiles (6/8/16/24 GB)
Documents module (`circuitforge_core.documents`)
- `ingest(image_bytes, hint) → StructuredDocument` — single entry point for all products
- cf-docuvision primary path with LLMRouter vision fallback
- Consumed by: kiwi (receipts/recipes), falcon (forms), peregrine (resume images), godwit (identity docs)
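The primary-with-fallback dispatch behind `ingest` can be sketched as below; the `primary`/`fallback` callables are hypothetical stand-ins for the cf-docuvision client and the LLMRouter vision path.

```python
def ingest(image_bytes, hint="auto", *, primary, fallback):
    """Single entry point (sketch): try the primary extractor, fall back on failure.

    In the real module the primary path is cf-docuvision and the fallback is
    LLMRouter vision; here both are injected callables so the shape is testable.
    """
    try:
        return primary(image_bytes, hint)
    except Exception:
        return fallback(image_bytes, hint)
```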
Tests
- 216 tests total (+58 new): hardware (31), docuvision (14), documents (22), coordinator probe (4)
Downloads
- Source code (ZIP)
- Source code (TAR.GZ)
-
Released 2026-04-02 17:25:06 -07:00 · 63 commits to main since this release

What's New
Orchestrator — auto service lifecycle
- Health probe loop (closes #10): coordinator background task polls all `starting` instances every 5 s via `GET /health`; promotes to `running` on 200, marks `stopped` after 300 s
- NodeSelector: warm-first GPU scoring — prefers nodes already running the requested model; falls back to highest free VRAM
- ServiceRegistry: in-memory state machine tracking allocations across `starting → running → idle → stopped`
- `/api/services/{service}/allocate`: auto-selects best node, starts llm_server via agent, returns URL
- CFOrchClient: sync + async context managers for coordinator allocation/release
- Idle sweep in `AgentSupervisor`: stops instances idle longer than `idle_stop_after_s` (default 600 s)
- Services table in coordinator dashboard
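One pass of the health-probe loop described above can be sketched as follows; representing instances as plain dicts is an assumption for the sketch, not the real `ServiceRegistry` model.

```python
def probe_tick(instances, check_health, now, stale_after_s=300):
    """One probe pass (sketch): promote healthy `starting` instances to
    `running`; mark instances starting longer than stale_after_s `stopped`."""
    for inst in instances:
        if inst["state"] != "starting":
            continue
        if check_health(inst):  # e.g. GET /health returned 200
            inst["state"] = "running"
        elif now - inst["started_at"] > stale_after_s:
            inst["state"] = "stopped"
```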
HF Inference Server
- Generic HuggingFace `transformers` inference endpoint (replaces Ouro/vllm-Docker-specific code)
- Handles transformers 5.x `BatchEncoding` from `apply_chat_template`
- Uses `dtype=` kwarg (deprecates `torch_dtype=`)
Bug Fixes
- VRAM pre-flight (closes #11 / related): tightened the threshold from `max_mb // 2` to the full `max_mb` — prevents starting instances on GPUs without sufficient headroom
- `ServiceInstance` seeded correctly on first `/allocate` call
- TTL sweep, immutability, and service-scoped release correctness
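The tightened pre-flight rule reduces to a one-line check; the function name is a hypothetical stand-in.

```python
def vram_preflight_ok(free_mb: int, max_mb: int) -> bool:
    """Require the model's full max_mb of free VRAM (sketch).

    The old threshold allowed starting with only max_mb // 2 free, which
    could place instances on GPUs without sufficient headroom.
    """
    return free_mb >= max_mb
```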
LLM Router + Scheduler
- cf-orch allocation support in `LLMRouter` backends
- VRAM lease acquisition/release through scheduler batch workers
- `join()` on batch workers during shutdown
See CHANGELOG.md for the full history.
Downloads
- Source code (ZIP)
- Source code (TAR.GZ)