• v0.10.0 38c2bd702a

    v0.10.0 — Community Module
    Stable

    pyr0ball released this 2026-04-12 17:24:31 -07:00 | 8 commits to main since this release

    circuitforge-core v0.10.0

    Added

    circuitforge_core.community — shared community signal module (BSL 1.1)

    • CommunityDB: psycopg2 connection pool with run_migrations(); migrations are idempotent, so they run safely on every startup.
    • CommunityPost: frozen dataclass capturing user-authored community posts with item snapshots.
    • SharedStore: base class for product-specific community stores with typed Postgres read/write helpers.
    • Migration 001: community_posts schema. Migration 002: community_reactions stub.
    • psycopg2-binary added to [community] optional extras.
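
    The CommunityPost shape can be pictured as a frozen dataclass along these lines (a sketch: the field names are illustrative, not the actual schema):

    ```python
    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class CommunityPost:
        """Immutable record of a user-authored post; field names are illustrative."""
        post_id: str
        author: str
        body: str
        item_snapshot: dict = field(default_factory=dict)  # item state captured at post time
    ```

    Because the dataclass is frozen, any attempt to mutate a post after construction raises FrozenInstanceError, which keeps a snapshot stable once it is written.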

    See CHANGELOG.md for full history.

  • v0.5.0 9544f695e6

    pyr0ball released this 2026-04-02 23:05:22 -07:00 | 53 commits to main since this release

    What's new

    circuitforge_core.manage (closes #6)

    Replaces the Bash-only manage.sh with a cross-platform Python process manager that runs natively on Linux, macOS, and Windows — no WSL2, no Docker required.

    Docker mode (auto-detected, or --mode docker): wraps docker compose / docker-compose

    Native mode (--mode native, default on Windows): PID-file process management

    • platformdirs for OS-appropriate PID and log paths
    • Cross-platform kill (SIGTERM/SIGKILL on Unix, taskkill /F on Windows)
    • Polling log tail — no tail -f dependency
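
    The cross-platform kill can be sketched roughly like this (a simplified stand-in, not the module's actual implementation):

    ```python
    import os
    import signal
    import subprocess
    import sys

    def kill_pid(pid: int, force: bool = False) -> None:
        """Terminate a process by PID in an OS-appropriate way."""
        if sys.platform == "win32":
            # taskkill /F force-kills; /T takes the child process tree with it
            subprocess.run(["taskkill", "/PID", str(pid), "/F", "/T"], check=False)
        else:
            # Polite SIGTERM by default; SIGKILL when the process will not exit
            os.kill(pid, signal.SIGKILL if force else signal.SIGTERM)
    ```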

    Getting started in a product:

    python -m circuitforge_core.manage install-shims
    # → writes manage.sh, manage.ps1, manage.toml.example
    cp manage.toml.example manage.toml  # edit for your services
    ./manage.sh start
    

    Tests

    279 total (+36 new): config (6), docker_mode (9), native_mode (21)

  • v0.4.0 6e3474b97b

    pyr0ball released this 2026-04-02 22:13:01 -07:00 | 56 commits to main since this release

    What's new

    Agent watchdog (closes #15)

    Coordinator restarts no longer require manual agent restarts. Known nodes are persisted to SQLite and restored on startup; remote nodes reappear within ~10 s automatically. Agent processes now run a 30 s reconnect loop so they re-register whenever the coordinator comes back.
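
    The 30 s reconnect loop amounts to a retry pattern like the following sketch, where `register` stands in for the agent's re-registration call (a hypothetical callable, not the real API):

    ```python
    import time

    def reconnect_loop(register, interval_s: float = 30.0, max_attempts=None) -> int:
        """Retry register() until it stops raising; returns the attempt count."""
        attempts = 0
        while True:
            attempts += 1
            try:
                register()
                return attempts
            except Exception:
                if max_attempts is not None and attempts >= max_attempts:
                    raise
                time.sleep(interval_s)  # wait before the next re-registration attempt
    ```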

    Ollama adopt-if-running (closes #16)

    adopt: true in any ProcessSpec lets cf-orch claim an already-running process rather than spawning a new one; the primary use case is Ollama running as a system daemon. All GPU profiles now include an Ollama managed block with adopt: true and health_path: /api/tags, so Ollama's VRAM is accounted for in the allocator and the service appears in the dashboard.
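
    In profile terms, the adopt block looks something like this hypothetical fragment — only adopt and health_path are keys named in these notes; the surrounding structure is assumed:

    ```yaml
    managed:
      ollama:
        adopt: true              # claim the already-running system daemon instead of spawning
        health_path: /api/tags   # endpoint polled to confirm the adopted process is healthy
    ```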

    Tests

    243 total (+27 new): NodeStore (8), AgentSupervisor watchdog (8), Ollama adopt (11)

  • v0.3.0 a36f469d60

    pyr0ball released this 2026-04-02 18:56:49 -07:00 | 60 commits to main since this release

    What's new

    Hardware module (circuitforge_core.hardware)

    • detect_hardware() → VRAM tier selection → generate_profile() for llm.yaml-compatible config
    • Supports nvidia-smi, rocm-smi, Apple silicon, CPU fallback
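
    The VRAM tier step reduces to a threshold lookup; a minimal sketch, assuming the 6/8/16/24 GB tiers that the GPU profiles use:

    ```python
    def vram_tier(vram_gb: float) -> int:
        """Pick the largest supported profile tier that fits the detected VRAM."""
        for tier_gb in (24, 16, 8, 6):
            if vram_gb >= tier_gb:
                return tier_gb
        return 0  # below the smallest GPU tier: CPU fallback profile
    ```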

    cf-docuvision service (circuitforge_core.resources.docuvision)

    • FastAPI wrapper for ByteDance/Dolphin-v2 (Qwen2.5-VL, ~8 GB VRAM)
    • POST /extract with hint modes: auto / table / text / form
    • ProcessSpec managed: blocks wired into all GPU profiles (6/8/16/24 GB)
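
    A client-side sketch of the /extract payload; hedged — the JSON field names here are assumptions, and only the endpoint and the four hint modes come from these notes:

    ```python
    import base64
    import json

    HINT_MODES = {"auto", "table", "text", "form"}

    def build_extract_payload(image_bytes: bytes, hint: str = "auto") -> str:
        """Build a JSON body for POST /extract; field names are illustrative."""
        if hint not in HINT_MODES:
            raise ValueError(f"hint must be one of {sorted(HINT_MODES)}")
        return json.dumps({
            "image_b64": base64.b64encode(image_bytes).decode("ascii"),
            "hint": hint,
        })
    ```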

    Documents module (circuitforge_core.documents)

    • ingest(image_bytes, hint) → StructuredDocument — single entry point for all products
    • cf-docuvision primary path with LLMRouter vision fallback
    • Consumed by: kiwi (receipts/recipes), falcon (forms), peregrine (resume images), godwit (identity docs)
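
    The primary-path-with-fallback shape of ingest() can be sketched like this, with the two extractors passed in as callables (hypothetical signatures; the real module wires them internally):

    ```python
    from typing import Any, Callable

    Extractor = Callable[[bytes, str], Any]

    def ingest(image_bytes: bytes, hint: str, primary: Extractor, fallback: Extractor) -> Any:
        """Try the cf-docuvision path first; fall back to LLMRouter vision on failure."""
        try:
            return primary(image_bytes, hint)
        except Exception:
            # primary unreachable or failed: degrade to the router's vision path
            return fallback(image_bytes, hint)
    ```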

    Tests

    • 216 tests total (+58 new): hardware (31), docuvision (14), documents (22), coordinator probe (4)

    Closes #5, #7, #8, #13

  • v0.2.0 482c430cdb

    pyr0ball released this 2026-04-02 17:25:06 -07:00 | 63 commits to main since this release

    What's new

    Orchestrator — auto service lifecycle

    • Health probe loop (closes #10): coordinator background task polls all starting instances every 5 s via GET /health; promotes to running on 200, marks stopped after 300 s
    • NodeSelector: warm-first GPU scoring — prefers nodes already running the requested model; falls back to highest free VRAM
    • ServiceRegistry: in-memory state machine tracking allocations across starting → running → idle → stopped
    • /api/services/{service}/allocate: auto-selects best node, starts llm_server via agent, returns URL
    • CFOrchClient: sync + async context managers for coordinator allocation/release
    • Idle sweep in AgentSupervisor: stops instances idle longer than idle_stop_after_s (default 600 s)
    • Services table in coordinator dashboard
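
    The warm-first scoring rule is essentially: prefer any node already serving the requested model, then break ties by free VRAM. A sketch against assumed node dicts (the real NodeSelector's data model is not shown in these notes):

    ```python
    def select_node(nodes: list, model: str) -> dict:
        """Warm-first: nodes already running `model` win; otherwise most free VRAM."""
        warm = [n for n in nodes if model in n["loaded_models"]]
        pool = warm or nodes  # fall back to all nodes when none are warm
        return max(pool, key=lambda n: n["free_vram_mb"])
    ```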

    HF Inference Server

    • Generic HuggingFace transformers inference endpoint (replaces Ouro/vllm-Docker-specific code)
    • Handles transformers 5.x BatchEncoding from apply_chat_template
    • Uses the dtype= kwarg (torch_dtype= is deprecated)

    Bug Fixes

    • VRAM pre-flight (closes #11 / related): tightened the threshold from max_mb // 2 to the full max_mb, preventing instances from starting on GPUs without sufficient headroom
    • ServiceInstance seeded correctly on first /allocate call
    • TTL sweep, immutability, and service-scoped release correctness
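
    The tightened pre-flight check is a one-line predicate; the function name here is illustrative:

    ```python
    def has_headroom(free_vram_mb: int, max_mb: int) -> bool:
        # Require the instance's full max_mb of free VRAM.
        # The old check (free_vram_mb >= max_mb // 2) could start an instance
        # on a GPU that would run out of memory once the model grew to max_mb.
        return free_vram_mb >= max_mb
    ```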

    LLM Router + Scheduler

    • cf-orch allocation support in LLMRouter backends
    • VRAM lease acquisition/release through scheduler batch workers
    • join() on batch workers during shutdown

    See CHANGELOG.md for the full history.
