- Add app/llm/router.py shim — tri-level config lookup: repo config/llm.yaml → ~/.config/circuitforge/llm.yaml → env vars
- Add config/llm.cloud.yaml — ollama via cf-orch, llama3.1:8b
- Add config/llm.yaml.example — self-hosted reference config
- compose.cloud.yml: mount llm.cloud.yaml, set CF_ORCH_URL, add host.docker.internal:host-gateway (required on Linux Docker)
- api/main.py: use app.llm.router.LLMRouter (shim) not core directly
- .env.example: update LLM section to reference config/llm.yaml.example
- .gitignore: exclude config/llm.yaml (keep example + cloud yaml)

End-to-end tested: 3.2s for "used RTX 3080 under $400, no mining cards" via cloud container → host.docker.internal:11434 → Ollama llama3.1:8b
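The compose.cloud.yml host-gateway change could look roughly like this; the service name `api` is assumed, and the CF_ORCH_URL value simply mirrors the tested path above (container → host.docker.internal:11434 → Ollama):

```yaml
# Hypothetical fragment of compose.cloud.yml (service name assumed)
services:
  api:
    extra_hosts:
      # Maps host.docker.internal to the host's gateway IP.
      # Docker Desktop adds this automatically; on Linux Docker it must
      # be declared explicitly, hence "required on Linux Docker" above.
      - "host.docker.internal:host-gateway"
    environment:
      - CF_ORCH_URL=http://host.docker.internal:11434
```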
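The tri-level lookup described above can be sketched as follows. This is a minimal illustration, not the actual app/llm/router.py: the function name `resolve_llm_config`, the env-var fallbacks, and the returned keys are assumptions for the example.

```python
# Illustrative sketch of a tri-level config lookup:
# 1. repo config/llm.yaml  2. ~/.config/circuitforge/llm.yaml  3. env vars.
# Names and keys are hypothetical, not the real router API.
import os
from pathlib import Path


def resolve_llm_config() -> dict:
    """Return LLM settings from the first config source that exists."""
    candidates = [
        Path("config/llm.yaml"),                                 # 1. repo config
        Path.home() / ".config" / "circuitforge" / "llm.yaml",   # 2. user config
    ]
    for path in candidates:
        if path.is_file():
            import yaml  # assumes PyYAML is installed
            with path.open() as f:
                return yaml.safe_load(f)
    # 3. fall back to environment variables (defaults are illustrative)
    return {
        "base_url": os.environ.get("CF_ORCH_URL", "http://localhost:11434"),
        "model": os.environ.get("LLM_MODEL", "llama3.1:8b"),
    }
```

The order matters: a checked-in repo config wins over a per-user config, and env vars act as the last-resort default, which matches the cloud deployment where only CF_ORCH_URL is set.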
12 lines · 127 B · Text
__pycache__/
*.pyc
*.pyo
.env
*.egg-info/
dist/
.pytest_cache/
data/
.superpowers/
web/node_modules/
web/dist/
config/llm.yaml