- llm.yaml + example: replace localhost URLs with Docker service names
(ollama:11434, vllm:8000, vision:8002); replace personal model names
(alex-cover-writer, llama3.1:8b) with llama3.2:3b
- user.yaml.example: update service hosts to the Docker names (ollama, vllm,
  searxng) and change the searxng port from 8888 (host-mapped) to 8080 (internal)
- wizard step 5: fix hardcoded localhost defaults; the wizard runs inside
  Docker, so service-name defaults are required for connection tests to pass
  (see the connection-test sketch after this list)
- scrapers/companyScraper.py: bundle scraper so Dockerfile COPY succeeds
- setup.sh: remove host Ollama install (conflicts with Docker Ollama on
port 11434); Docker entrypoint handles model download automatically
- README + setup.sh banner: add Circuit Forge mission statement
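
A minimal sketch of what the Docker-aware defaults above amount to, assuming a
requests-based connection check; the endpoint table and helper name below are
illustrative, not the project's actual module:

```python
# Sketch only: inside the Compose network, containers reach each other by
# service name rather than localhost, so connection-test defaults use those names.
import requests

DEFAULT_ENDPOINTS = {
    "ollama": "http://ollama:11434",
    "vllm": "http://vllm:8000",
    "searxng": "http://searxng:8080",  # internal port, not the 8888 host mapping
}

def test_connection(name: str, url: str | None = None, timeout: float = 3.0) -> bool:
    """Return True if the service answers at its Docker-internal URL."""
    target = url or DEFAULT_ENDPOINTS[name]
    try:
        return requests.get(target, timeout=timeout).status_code < 500
    except requests.RequestException:
        return False
```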
Replaces the old 5-step wizard with a 7-step orchestrator that uses the
step modules built in Tasks 2-8. Steps 1-6 are mandatory (hardware, tier,
identity, resume, inference, search); step 7 (integrations) is optional.
Each Next click validates the current step and writes wizard_step to user.yaml
for crash recovery, so a page reload resumes at the correct step. LLM generation buttons
submit wizard_generate tasks and poll via @st.fragment(run_every=3). Finish
sets wizard_complete=True, removes wizard_step, and calls apply_service_urls.
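
A rough sketch of that persistence and polling flow, assuming PyYAML helpers and
a simple step list; wizard_step, wizard_complete, apply_service_urls, and
@st.fragment(run_every=3) come from the description above, everything else
(file helpers, callback names) is hypothetical:

```python
import streamlit as st
import yaml

USER_YAML = "user.yaml"
STEPS = ["hardware", "tier", "identity", "resume", "inference", "search", "integrations"]

def load_user_yaml() -> dict:
    try:
        with open(USER_YAML) as f:
            return yaml.safe_load(f) or {}
    except FileNotFoundError:
        return {}

def save_user_yaml(data: dict) -> None:
    with open(USER_YAML, "w") as f:
        yaml.safe_dump(data, f)

def on_next(step_index: int) -> None:
    # Persist progress after each validated step so a crash or page reload
    # resumes at the right place.
    data = load_user_yaml()
    data["wizard_step"] = step_index + 1
    save_user_yaml(data)

def on_finish() -> None:
    data = load_user_yaml()
    data["wizard_complete"] = True
    data.pop("wizard_step", None)
    save_user_yaml(data)
    # apply_service_urls(data)  # rewrite service URLs for the chosen deployment

@st.fragment(run_every=3)
def poll_generation(task_id: str) -> None:
    # Re-runs every 3 seconds; fetch the wizard_generate result from wherever
    # the task queue stores it (session_state here is just a placeholder).
    result = st.session_state.get(task_id)
    if result:
        st.write(result)
```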
Adds tests/test_wizard_flow.py (7 tests) covering the validate() chain, YAML
persistence helpers, and wizard-state inference.
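
For reference, two of those tests might look roughly like this; the wizard
module path and function names (infer_current_step, finish) are assumptions,
only wizard_step and wizard_complete come from the actual config keys:

```python
import yaml
from app import wizard  # hypothetical import path

def test_resume_infers_step_from_yaml(tmp_path):
    cfg = tmp_path / "user.yaml"
    cfg.write_text("wizard_step: 4\n")
    assert wizard.infer_current_step(cfg) == 4  # reload resumes mid-flow

def test_finish_clears_step_and_marks_complete(tmp_path):
    cfg = tmp_path / "user.yaml"
    cfg.write_text("wizard_step: 7\n")
    wizard.finish(cfg)
    data = yaml.safe_load(cfg.read_text())
    assert data["wizard_complete"] is True
    assert "wizard_step" not in data
```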