From 997eb6143e636df1086508f46901a1b63c31ba2f Mon Sep 17 00:00:00 2001 From: pyr0ball Date: Wed, 25 Mar 2026 13:07:05 -0700 Subject: [PATCH] =?UTF-8?q?feat:=20Snipe=20MVP=20v0.1=20=E2=80=94=20eBay?= =?UTF-8?q?=20trust=20scorer=20with=20faceted=20filter=20UI?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- PRIVACY.md | 7 + README.md | 75 +- .../plans/2026-03-25-circuitforge-core.md | 1045 ++++++++ .../superpowers/plans/2026-03-25-snipe-mvp.md | 2227 +++++++++++++++++ ...26-03-25-snipe-circuitforge-core-design.md | 322 +++ 5 files changed, 3674 insertions(+), 2 deletions(-) create mode 100644 PRIVACY.md create mode 100644 docs/superpowers/plans/2026-03-25-circuitforge-core.md create mode 100644 docs/superpowers/plans/2026-03-25-snipe-mvp.md create mode 100644 docs/superpowers/specs/2026-03-25-snipe-circuitforge-core-design.md diff --git a/PRIVACY.md b/PRIVACY.md new file mode 100644 index 0000000..afc7b9f --- /dev/null +++ b/PRIVACY.md @@ -0,0 +1,7 @@ +# Privacy Policy + +CircuitForge LLC's privacy policy applies to this product and is published at: + +**** + +Last reviewed: March 2026. diff --git a/README.md b/README.md index 958b008..19deac0 100644 --- a/README.md +++ b/README.md @@ -1,3 +1,74 @@ -# snipe +# Snipe — Auction Sniping & Bid Management -snipe by Circuit Forge LLC — Auction sniping — CT Bids, antiques, estate auctions, eBay \ No newline at end of file +> *Part of the Circuit Forge LLC "AI for the tasks you hate most" suite.* + +**Status:** Backlog — not yet started. Peregrine must prove the model first. + +## What it does + +Snipe manages online auction participation: monitoring listings across platforms, scheduling last-second bids, tracking price history to avoid overpaying, and managing the post-win logistics (payment, shipping coordination, provenance documentation for antiques). 
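
The "scheduling last-second bids" piece reduces to simple clock arithmetic. A minimal sketch (illustrative only — the product is backlog, and `snipe_time` / `lead_seconds` are hypothetical names, with the 8-second default taken from the bidding strategy engine below):

```python
from datetime import datetime, timedelta, timezone

def snipe_time(close_at: datetime, lead_seconds: float = 8.0) -> datetime:
    """Return when to fire a hard-snipe bid: lead_seconds before close."""
    return close_at - timedelta(seconds=lead_seconds)

# A 2 AM close still gets its bid 8 seconds early, no human required.
close = datetime(2026, 4, 1, 2, 0, 0, tzinfo=timezone.utc)
print(snipe_time(close))  # 2026-04-01 01:59:52+00:00
```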
+
+The name recalls the origin of the word "sniping" — common snipes are notoriously elusive birds, secretive and camouflaged, that flush suddenly from cover. Shooting one required extreme patience, stillness, and a precise last-second shot. That's the auction strategy.
+
+## Primary platforms
+
+- **CT Bids** — Connecticut state surplus and municipal auctions
+- **GovPlanet / IronPlanet** — government surplus equipment
+- **AuctionZip** — antique auction house aggregator (1,000+ houses)
+- **Invaluable / LiveAuctioneers** — fine art and antiques
+- **Bidsquare** — antiques and collectibles
+- **eBay** — general + collectibles
+- **HiBid** — estate auctions
+- **Proxibid** — industrial and collector auctions
+
+## Why it's hard
+
+Online auctions are frustrating because:
+- Winning requires being present at the exact closing moment — sometimes 2 AM
+- Platforms vary wildly: some allow proxy bids, some don't; closing times extend on activity
+- Price history is hidden — you don't know if an item is underpriced or a trap
+- Shipping logistics for large / fragile antiques require coordination with the auction house
+- Provenance documentation is inconsistent across auction houses
+
+## Core pipeline
+
+```
+Configure search (categories, keywords, platforms, max price, location)
+→ Monitor listings → Alert on matching items
+→ Human review: approve or skip
+→ Price research: comparable sales history, condition assessment via photos
+→ Schedule snipe bid (configurable: X seconds before close, Y% above current)
+→ Execute bid → Monitor for counter-bid (soft-close extension handling)
+→ Win notification → Payment + shipping coordination workflow
+→ Provenance documentation for antiques
+```
+
+## Bidding strategy engine
+
+- **Hard snipe**: submit bid N seconds before close (default: 8s)
+- **Soft-close handling**: detect whether the platform extends on last-minute bids; adjust strategy
+- **Proxy ladder**: set a max and let the engine bid in increments, reserve snipe for final window
+- 
**Reserve detection**: identify likely reserve price from bid history patterns +- **Comparable sales**: pull recent auction results for same/similar items across platforms + +## Post-win workflow + +1. Payment method routing (platform-specific: CC, wire, check) +2. Shipping quote requests to approved carriers (for freight / large items) +3. Condition report request from auction house +4. Provenance packet generation (for antiques / fine art resale or insurance) +5. Add to inventory (for dealers / collectors tracking portfolio value) + +## Product code (license key) + +`CFG-SNPE-XXXX-XXXX-XXXX` + +## Tech notes + +- Shared `circuitforge-core` scaffold +- Platform adapters: AuctionZip, Invaluable, HiBid, eBay, CT Bids (Playwright + API where available) +- Bid execution: Playwright automation with precise timing (NTP-synchronized) +- Soft-close detection: platform-specific rules engine +- Comparable sales: scrape completed auctions, normalize by condition/provenance +- Vision module: condition assessment from listing photos (moondream2 / Claude vision) +- Shipping quote integration: uShip API for freight, FedEx / UPS for parcel diff --git a/docs/superpowers/plans/2026-03-25-circuitforge-core.md b/docs/superpowers/plans/2026-03-25-circuitforge-core.md new file mode 100644 index 0000000..c999686 --- /dev/null +++ b/docs/superpowers/plans/2026-03-25-circuitforge-core.md @@ -0,0 +1,1045 @@ +# circuitforge-core Extraction Implementation Plan + +> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking. + +**Goal:** Extract the shared scaffold from Peregrine into a standalone `circuitforge-core` Python package, update Peregrine to depend on it, and leave Peregrine's behaviour unchanged. + +**Architecture:** New private repo `circuitforge-core` at `/Library/Development/CircuitForge/circuitforge-core/`. 
Contains: db base connection, LLM router, tier system, and config loader — the minimum core needed for both Peregrine and Snipe. Vision module, wizard framework, and pipeline module are stubbed (vision and wizard are net-new; pipeline is extracted but task-runner is deferred). Peregrine's `requirements.txt` gets a `-e ../circuitforge-core` entry; all internal imports updated. + +**Tech Stack:** Python 3.11+, SQLite (stdlib), pytest, pyproject.toml (core only — Peregrine stays on requirements.txt) + +--- + +## File Map + +### New files (circuitforge-core repo) + +| File | Responsibility | +|---|---| +| `circuitforge_core/__init__.py` | Package version export | +| `circuitforge_core/db/__init__.py` | Re-exports `get_connection` | +| `circuitforge_core/db/base.py` | `get_connection()` — SQLite/SQLCipher connection factory (extracted from `peregrine/scripts/db.py`) | +| `circuitforge_core/db/migrations.py` | `run_migrations(conn, migrations_dir)` — simple sequential migration runner | +| `circuitforge_core/llm/__init__.py` | Re-exports `LLMRouter` | +| `circuitforge_core/llm/router.py` | `LLMRouter` class (extracted from `peregrine/scripts/llm_router.py`) | +| `circuitforge_core/vision/__init__.py` | Re-exports `VisionRouter` | +| `circuitforge_core/vision/router.py` | `VisionRouter` stub — raises `NotImplementedError` until v0.2 | +| `circuitforge_core/wizard/__init__.py` | Re-exports `BaseWizard` | +| `circuitforge_core/wizard/base.py` | `BaseWizard` stub — raises `NotImplementedError` until first product wires it | +| `circuitforge_core/pipeline/__init__.py` | Re-exports `StagingDB` | +| `circuitforge_core/pipeline/staging.py` | `StagingDB` stub — interface for SQLite staging queue; raises `NotImplementedError` | +| `circuitforge_core/tiers/__init__.py` | Re-exports `can_use`, `TIERS`, `BYOK_UNLOCKABLE`, `LOCAL_VISION_UNLOCKABLE` | +| `circuitforge_core/tiers/tiers.py` | Generalised tier system (extracted from `peregrine/app/wizard/tiers.py`, product-specific 
feature keys removed) | +| `circuitforge_core/config/__init__.py` | Re-exports `require_env`, `load_env` | +| `circuitforge_core/config/settings.py` | `require_env(key)`, `load_env(path)` — env validation helpers | +| `pyproject.toml` | Package metadata, no dependencies beyond stdlib + pyyaml + requests + openai | +| `tests/test_db.py` | Tests for `get_connection` and migration runner | +| `tests/test_tiers.py` | Tests for `can_use`, BYOK unlock, LOCAL_VISION unlock | +| `tests/test_llm_router.py` | Tests for LLMRouter fallback chain (mock backends) | +| `tests/test_config.py` | Tests for `require_env` missing/present | + +### Modified files (Peregrine repo) + +| File | Change | +|---|---| +| `requirements.txt` | Add `-e ../circuitforge-core` | +| `scripts/db.py` | Replace `get_connection` body with `from circuitforge_core.db import get_connection; ...` re-export | +| `scripts/llm_router.py` | Replace `LLMRouter` class body with import from `circuitforge_core.llm` | +| `app/wizard/tiers.py` | Replace tier/BYOK logic with imports from `circuitforge_core.tiers`; keep Peregrine-specific `FEATURES` dict | + +--- + +## Task 1: Scaffold circuitforge-core repo + +**Files:** +- Create: `circuitforge-core/pyproject.toml` +- Create: `circuitforge-core/circuitforge_core/__init__.py` +- Create: `circuitforge-core/.gitignore` +- Create: `circuitforge-core/README.md` + +- [ ] **Step 1: Create repo directory and init git** + +```bash +mkdir -p /Library/Development/CircuitForge/circuitforge-core +cd /Library/Development/CircuitForge/circuitforge-core +git init +``` + +- [ ] **Step 2: Write pyproject.toml** + +```toml +# /Library/Development/CircuitForge/circuitforge-core/pyproject.toml +[build-system] +requires = ["setuptools>=68"] +build-backend = "setuptools.build_meta" + +[project] +name = "circuitforge-core" +version = "0.1.0" +description = "Shared scaffold for CircuitForge products" +requires-python = ">=3.11" +dependencies = [ + "pyyaml>=6.0", + "requests>=2.31", + 
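+    # openai: client library used by the LLM router's OpenAI-compatible backends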
"openai>=1.0", +] + +[tool.setuptools.packages.find] +where = ["."] +include = ["circuitforge_core*"] + +[tool.pytest.ini_options] +testpaths = ["tests"] +``` + +- [ ] **Step 3: Write package __init__.py** + +```python +# circuitforge-core/circuitforge_core/__init__.py +__version__ = "0.1.0" +``` + +- [ ] **Step 4: Write .gitignore** + +``` +__pycache__/ +*.pyc +.env +*.egg-info/ +dist/ +.pytest_cache/ +``` + +- [ ] **Step 5: Install editable and verify import** + +```bash +cd /Library/Development/CircuitForge/circuitforge-core +conda run -n job-seeker pip install -e . +conda run -n job-seeker python -c "import circuitforge_core; print(circuitforge_core.__version__)" +``` +Expected: `0.1.0` + +- [ ] **Step 6: Create tests/__init__.py** + +```bash +mkdir -p /Library/Development/CircuitForge/circuitforge-core/tests +touch /Library/Development/CircuitForge/circuitforge-core/tests/__init__.py +``` + +- [ ] **Step 7: Commit** + +```bash +git add . +git commit -m "feat: scaffold circuitforge-core package" +``` + +--- + +## Task 2: Extract db base connection + +**Files:** +- Create: `circuitforge-core/circuitforge_core/db/__init__.py` +- Create: `circuitforge-core/circuitforge_core/db/base.py` +- Create: `circuitforge-core/circuitforge_core/db/migrations.py` +- Create: `circuitforge-core/tests/__init__.py` +- Create: `circuitforge-core/tests/test_db.py` + +- [ ] **Step 1: Write failing tests** + +```python +# circuitforge-core/tests/test_db.py +import sqlite3 +import tempfile +from pathlib import Path +import pytest +from circuitforge_core.db import get_connection, run_migrations + + +def test_get_connection_returns_sqlite_connection(tmp_path): + db = tmp_path / "test.db" + conn = get_connection(db) + assert isinstance(conn, sqlite3.Connection) + conn.close() + + +def test_get_connection_creates_file(tmp_path): + db = tmp_path / "test.db" + assert not db.exists() + conn = get_connection(db) + conn.close() + assert db.exists() + + +def 
test_run_migrations_applies_sql_files(tmp_path): + db = tmp_path / "test.db" + migrations_dir = tmp_path / "migrations" + migrations_dir.mkdir() + (migrations_dir / "001_create_foo.sql").write_text( + "CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT);" + ) + conn = get_connection(db) + run_migrations(conn, migrations_dir) + cursor = conn.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='foo'") + assert cursor.fetchone() is not None + conn.close() + + +def test_run_migrations_is_idempotent(tmp_path): + db = tmp_path / "test.db" + migrations_dir = tmp_path / "migrations" + migrations_dir.mkdir() + (migrations_dir / "001_create_foo.sql").write_text( + "CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT);" + ) + conn = get_connection(db) + run_migrations(conn, migrations_dir) + run_migrations(conn, migrations_dir) # second run must not raise + conn.close() + + +def test_run_migrations_applies_in_order(tmp_path): + db = tmp_path / "test.db" + migrations_dir = tmp_path / "migrations" + migrations_dir.mkdir() + (migrations_dir / "001_create_foo.sql").write_text( + "CREATE TABLE foo (id INTEGER PRIMARY KEY);" + ) + (migrations_dir / "002_add_name.sql").write_text( + "ALTER TABLE foo ADD COLUMN name TEXT;" + ) + conn = get_connection(db) + run_migrations(conn, migrations_dir) + conn.execute("INSERT INTO foo (name) VALUES ('bar')") + conn.close() +``` + +- [ ] **Step 2: Run tests to verify they fail** + +```bash +cd /Library/Development/CircuitForge/circuitforge-core +conda run -n job-seeker pytest tests/test_db.py -v +``` +Expected: ImportError or AttributeError (module not yet defined) + +- [ ] **Step 3: Write db/base.py** (extracted from `peregrine/scripts/db.py` lines 1–40) + +```python +# circuitforge-core/circuitforge_core/db/base.py +""" +SQLite connection factory for CircuitForge products. +Supports plain SQLite and SQLCipher (AES-256) when CLOUD_MODE is active. 
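+Cloud mode is detected from the CLOUD_MODE env var ("1", "true", or "yes",
+case-insensitive); SQLCipher is used only when a non-empty key is also given.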
+""" +from __future__ import annotations +import os +import sqlite3 +from pathlib import Path + + +def get_connection(db_path: Path, key: str = "") -> sqlite3.Connection: + """ + Open a SQLite database connection. + + In cloud mode with a key: uses SQLCipher (API-identical to sqlite3). + Otherwise: plain sqlite3. + + Args: + db_path: Path to the database file. Created if absent. + key: SQLCipher encryption key. Empty = unencrypted. + """ + cloud_mode = os.environ.get("CLOUD_MODE", "").lower() in ("1", "true", "yes") + if cloud_mode and key: + from pysqlcipher3 import dbapi2 as _sqlcipher # type: ignore + conn = _sqlcipher.connect(str(db_path)) + conn.execute(f"PRAGMA key='{key}'") + return conn + return sqlite3.connect(str(db_path)) +``` + +- [ ] **Step 4: Write db/migrations.py** + +```python +# circuitforge-core/circuitforge_core/db/migrations.py +""" +Sequential SQL migration runner. +Applies *.sql files from migrations_dir in filename order. +Tracks applied migrations in a _migrations table — safe to call multiple times. 
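+
+Expected layout (files applied in sorted filename order):
+
+    migrations/
+        001_create_foo.sql
+        002_add_name.sql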
+""" +from __future__ import annotations +import sqlite3 +from pathlib import Path + + +def run_migrations(conn: sqlite3.Connection, migrations_dir: Path) -> None: + """Apply any unapplied *.sql migrations from migrations_dir.""" + conn.execute( + "CREATE TABLE IF NOT EXISTS _migrations " + "(name TEXT PRIMARY KEY, applied_at TEXT DEFAULT CURRENT_TIMESTAMP)" + ) + conn.commit() + + applied = {row[0] for row in conn.execute("SELECT name FROM _migrations")} + sql_files = sorted(migrations_dir.glob("*.sql")) + + for sql_file in sql_files: + if sql_file.name in applied: + continue + conn.executescript(sql_file.read_text()) + conn.execute("INSERT INTO _migrations (name) VALUES (?)", (sql_file.name,)) + conn.commit() +``` + +- [ ] **Step 5: Write db/__init__.py** + +```python +# circuitforge-core/circuitforge_core/db/__init__.py +from .base import get_connection +from .migrations import run_migrations + +__all__ = ["get_connection", "run_migrations"] +``` + +- [ ] **Step 6: Run tests to verify they pass** + +```bash +conda run -n job-seeker pytest tests/test_db.py -v +``` +Expected: 5 PASSED + +- [ ] **Step 7: Commit** + +```bash +git add circuitforge_core/db/ tests/ +git commit -m "feat: add db base connection and migration runner" +``` + +--- + +## Task 3: Extract tier system + +**Files:** +- Create: `circuitforge-core/circuitforge_core/tiers/__init__.py` +- Create: `circuitforge-core/circuitforge_core/tiers/tiers.py` +- Create: `circuitforge-core/tests/test_tiers.py` + +Source: `peregrine/app/wizard/tiers.py` + +- [ ] **Step 1: Write failing tests** + +```python +# circuitforge-core/tests/test_tiers.py +import pytest +from circuitforge_core.tiers import can_use, TIERS, BYOK_UNLOCKABLE, LOCAL_VISION_UNLOCKABLE + + +def test_tiers_order(): + assert TIERS == ["free", "paid", "premium", "ultra"] + + +def test_free_feature_always_accessible(): + # Features not in FEATURES dict are free for everyone + assert can_use("nonexistent_feature", tier="free") is True + + +def 
test_paid_feature_blocked_for_free_tier(): + # Caller must register features — test via can_use with explicit min_tier + assert can_use("test_paid", tier="free", _features={"test_paid": "paid"}) is False + + +def test_paid_feature_accessible_for_paid_tier(): + assert can_use("test_paid", tier="paid", _features={"test_paid": "paid"}) is True + + +def test_byok_unlocks_byok_feature(): + byok_feature = next(iter(BYOK_UNLOCKABLE)) if BYOK_UNLOCKABLE else None + if byok_feature: + assert can_use(byok_feature, tier="free", has_byok=True) is True + + +def test_byok_does_not_unlock_non_byok_feature(): + assert can_use("test_paid", tier="free", has_byok=True, + _features={"test_paid": "paid"}) is False + + +def test_local_vision_unlocks_vision_feature(): + vision_feature = next(iter(LOCAL_VISION_UNLOCKABLE)) if LOCAL_VISION_UNLOCKABLE else None + if vision_feature: + assert can_use(vision_feature, tier="free", has_local_vision=True) is True + + +def test_local_vision_does_not_unlock_non_vision_feature(): + assert can_use("test_paid", tier="free", has_local_vision=True, + _features={"test_paid": "paid"}) is False +``` + +- [ ] **Step 2: Run tests to verify they fail** + +```bash +conda run -n job-seeker pytest tests/test_tiers.py -v +``` +Expected: ImportError + +- [ ] **Step 3: Write tiers/tiers.py** (generalised from `peregrine/app/wizard/tiers.py` — remove all Peregrine-specific `FEATURES` entries; keep the `can_use` logic and unlock mechanism) + +```python +# circuitforge-core/circuitforge_core/tiers/tiers.py +""" +Tier system for CircuitForge products. + +Tiers: free < paid < premium < ultra +Products register their own FEATURES dict and pass it to can_use(). + +BYOK_UNLOCKABLE: features that unlock when the user has any configured +LLM backend (local or API key). These are gated only because CF would +otherwise provide the compute. + +LOCAL_VISION_UNLOCKABLE: features that unlock when the user has a local +vision model configured (e.g. moondream2). 
Distinct from BYOK — a text +LLM key does NOT unlock vision features. +""" +from __future__ import annotations + +TIERS: list[str] = ["free", "paid", "premium", "ultra"] + +# Features that unlock when the user has any LLM backend configured. +# Each product extends this frozenset with its own BYOK-unlockable features. +BYOK_UNLOCKABLE: frozenset[str] = frozenset() + +# Features that unlock when the user has a local vision model configured. +LOCAL_VISION_UNLOCKABLE: frozenset[str] = frozenset() + + +def can_use( + feature: str, + tier: str, + has_byok: bool = False, + has_local_vision: bool = False, + _features: dict[str, str] | None = None, +) -> bool: + """ + Return True if the given tier (and optional unlocks) can access feature. + + Args: + feature: Feature key string. + tier: User's current tier ("free", "paid", "premium", "ultra"). + has_byok: True if user has a configured LLM backend. + has_local_vision: True if user has a local vision model configured. + _features: Feature→min_tier map. Products pass their own dict here. + If None, all features are free. 
+ """ + features = _features or {} + if feature not in features: + return True + + if has_byok and feature in BYOK_UNLOCKABLE: + return True + + if has_local_vision and feature in LOCAL_VISION_UNLOCKABLE: + return True + + min_tier = features[feature] + try: + return TIERS.index(tier) >= TIERS.index(min_tier) + except ValueError: + return False + + +def tier_label( + feature: str, + has_byok: bool = False, + has_local_vision: bool = False, + _features: dict[str, str] | None = None, +) -> str: + """Return a human-readable label for the minimum tier needed for feature.""" + features = _features or {} + if feature not in features: + return "free" + if has_byok and feature in BYOK_UNLOCKABLE: + return "free (BYOK)" + if has_local_vision and feature in LOCAL_VISION_UNLOCKABLE: + return "free (local vision)" + return features[feature] +``` + +- [ ] **Step 4: Write tiers/__init__.py** + +```python +# circuitforge-core/circuitforge_core/tiers/__init__.py +from .tiers import can_use, tier_label, TIERS, BYOK_UNLOCKABLE, LOCAL_VISION_UNLOCKABLE + +__all__ = ["can_use", "tier_label", "TIERS", "BYOK_UNLOCKABLE", "LOCAL_VISION_UNLOCKABLE"] +``` + +- [ ] **Step 5: Run tests** + +```bash +conda run -n job-seeker pytest tests/test_tiers.py -v +``` +Expected: 8 PASSED + +- [ ] **Step 6: Commit** + +```bash +git add circuitforge_core/tiers/ tests/test_tiers.py +git commit -m "feat: add generalised tier system with BYOK and local vision unlocks" +``` + +--- + +## Task 4: Extract LLM router + +**Files:** +- Create: `circuitforge-core/circuitforge_core/llm/__init__.py` +- Create: `circuitforge-core/circuitforge_core/llm/router.py` +- Create: `circuitforge-core/tests/test_llm_router.py` + +Source: `peregrine/scripts/llm_router.py` + +- [ ] **Step 1: Write failing tests** + +```python +# circuitforge-core/tests/test_llm_router.py +from unittest.mock import MagicMock, patch +import pytest +from circuitforge_core.llm import LLMRouter + + +def _make_router(config: dict) -> LLMRouter: + 
"""Build a router from an in-memory config dict (bypass file loading).""" + router = object.__new__(LLMRouter) + router.config = config + return router + + +def test_complete_uses_first_reachable_backend(): + router = _make_router({ + "fallback_order": ["local"], + "backends": { + "local": { + "type": "openai_compat", + "base_url": "http://localhost:11434/v1", + "model": "llama3", + "supports_images": False, + } + } + }) + mock_client = MagicMock() + mock_client.chat.completions.create.return_value = MagicMock( + choices=[MagicMock(message=MagicMock(content="hello"))] + ) + with patch.object(router, "_is_reachable", return_value=True), \ + patch("circuitforge_core.llm.router.OpenAI", return_value=mock_client): + result = router.complete("say hello") + assert result == "hello" + + +def test_complete_falls_back_on_unreachable_backend(): + router = _make_router({ + "fallback_order": ["unreachable", "working"], + "backends": { + "unreachable": { + "type": "openai_compat", + "base_url": "http://nowhere:1/v1", + "model": "x", + "supports_images": False, + }, + "working": { + "type": "openai_compat", + "base_url": "http://localhost:11434/v1", + "model": "llama3", + "supports_images": False, + } + } + }) + mock_client = MagicMock() + mock_client.chat.completions.create.return_value = MagicMock( + choices=[MagicMock(message=MagicMock(content="fallback"))] + ) + def reachable(url): + return "nowhere" not in url + with patch.object(router, "_is_reachable", side_effect=reachable), \ + patch("circuitforge_core.llm.router.OpenAI", return_value=mock_client): + result = router.complete("test") + assert result == "fallback" + + +def test_complete_raises_when_all_backends_exhausted(): + router = _make_router({ + "fallback_order": ["dead"], + "backends": { + "dead": { + "type": "openai_compat", + "base_url": "http://nowhere:1/v1", + "model": "x", + "supports_images": False, + } + } + }) + with patch.object(router, "_is_reachable", return_value=False): + with 
pytest.raises(RuntimeError, match="exhausted"): + router.complete("test") +``` + +- [ ] **Step 2: Run tests to verify they fail** + +```bash +conda run -n job-seeker pytest tests/test_llm_router.py -v +``` +Expected: ImportError + +- [ ] **Step 3: Copy LLM router from Peregrine** + +Copy the full contents of `/Library/Development/CircuitForge/peregrine/scripts/llm_router.py` to `circuitforge-core/circuitforge_core/llm/router.py`, then update the one internal import: + +```python +# Change at top of file: +# OLD: CONFIG_PATH = Path(__file__).parent.parent / "config" / "llm.yaml" +# NEW: +CONFIG_PATH = Path.home() / ".config" / "circuitforge" / "llm.yaml" +``` + +- [ ] **Step 4: Write llm/__init__.py** + +```python +# circuitforge-core/circuitforge_core/llm/__init__.py +from .router import LLMRouter + +__all__ = ["LLMRouter"] +``` + +- [ ] **Step 5: Run tests** + +```bash +conda run -n job-seeker pytest tests/test_llm_router.py -v +``` +Expected: 3 PASSED + +- [ ] **Step 6: Commit** + +```bash +git add circuitforge_core/llm/ tests/test_llm_router.py +git commit -m "feat: add LLM router (extracted from Peregrine)" +``` + +--- + +## Task 5: Add vision stub and config module + +**Files:** +- Create: `circuitforge-core/circuitforge_core/vision/__init__.py` +- Create: `circuitforge-core/circuitforge_core/vision/router.py` +- Create: `circuitforge-core/circuitforge_core/config/__init__.py` +- Create: `circuitforge-core/circuitforge_core/config/settings.py` +- Create: `circuitforge-core/tests/test_config.py` + +- [ ] **Step 1: Write failing config tests** + +```python +# circuitforge-core/tests/test_config.py +import os +import pytest +from circuitforge_core.config import require_env, load_env + + +def test_require_env_returns_value_when_set(monkeypatch): + monkeypatch.setenv("TEST_KEY", "hello") + assert require_env("TEST_KEY") == "hello" + + +def test_require_env_raises_when_missing(monkeypatch): + monkeypatch.delenv("TEST_KEY", raising=False) + with 
pytest.raises(EnvironmentError, match="TEST_KEY"): + require_env("TEST_KEY") + + +def test_load_env_sets_variables(tmp_path, monkeypatch): + env_file = tmp_path / ".env" + env_file.write_text("FOO=bar\nBAZ=qux\n") + monkeypatch.delenv("FOO", raising=False) + load_env(env_file) + assert os.environ.get("FOO") == "bar" + assert os.environ.get("BAZ") == "qux" + + +def test_load_env_skips_missing_file(tmp_path): + load_env(tmp_path / "nonexistent.env") # must not raise +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/test_config.py -v +``` +Expected: ImportError + +- [ ] **Step 3: Write config/settings.py** + +```python +# circuitforge-core/circuitforge_core/config/settings.py +"""Env validation and .env loader for CircuitForge products.""" +from __future__ import annotations +import os +from pathlib import Path + + +def require_env(key: str) -> str: + """Return env var value or raise EnvironmentError with clear message.""" + value = os.environ.get(key) + if not value: + raise EnvironmentError( + f"Required environment variable {key!r} is not set. " + f"Check your .env file." + ) + return value + + +def load_env(path: Path) -> None: + """Load key=value pairs from a .env file into os.environ. Skips missing files.""" + if not path.exists(): + return + for line in path.read_text().splitlines(): + line = line.strip() + if not line or line.startswith("#") or "=" not in line: + continue + key, _, value = line.partition("=") + os.environ.setdefault(key.strip(), value.strip()) +``` + +- [ ] **Step 4: Write config/__init__.py** + +```python +# circuitforge-core/circuitforge_core/config/__init__.py +from .settings import require_env, load_env + +__all__ = ["require_env", "load_env"] +``` + +- [ ] **Step 5: Write vision stub** + +```python +# circuitforge-core/circuitforge_core/vision/router.py +""" +Vision model router — stub until v0.2. +Supports: moondream2 (local) and Claude vision API (cloud). 
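+Neither backend is wired up yet; analyze() raises NotImplementedError.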
+""" +from __future__ import annotations + + +class VisionRouter: + """Routes image analysis requests to local or cloud vision models.""" + + def analyze(self, image_bytes: bytes, prompt: str) -> str: + """ + Analyze image_bytes with the given prompt. + Raises NotImplementedError until vision backends are wired up. + """ + raise NotImplementedError( + "VisionRouter is not yet implemented. " + "Photo analysis requires a Paid tier or local vision model (v0.2+)." + ) +``` + +```python +# circuitforge-core/circuitforge_core/vision/__init__.py +from .router import VisionRouter + +__all__ = ["VisionRouter"] +``` + +- [ ] **Step 6: Run all tests** + +```bash +conda run -n job-seeker pytest tests/ -v +``` +Expected: All PASSED (config tests pass; vision stub has no tests — it's a placeholder) + +- [ ] **Step 7: Commit** + +```bash +git add circuitforge_core/vision/ circuitforge_core/config/ tests/test_config.py +git commit -m "feat: add config module and vision router stub" +``` + +--- + +## Task 5b: Add wizard and pipeline stubs + +**Files:** +- Create: `circuitforge-core/circuitforge_core/wizard/__init__.py` +- Create: `circuitforge-core/circuitforge_core/wizard/base.py` +- Create: `circuitforge-core/circuitforge_core/pipeline/__init__.py` +- Create: `circuitforge-core/circuitforge_core/pipeline/staging.py` +- Create: `circuitforge-core/tests/test_stubs.py` + +These modules are required by the spec (section 2.2) but are net-new (wizard) or partially net-new (pipeline). They are stubbed here so downstream products can import them; full implementation happens when each product needs them. 
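
For orientation, the eventual `StagingDB` might look roughly like this stdlib-only sketch (the table schema, status values, and JSON payload encoding are assumptions, not the shipped design):

```python
import json
import sqlite3

# In-memory stand-in for the SQLite staging queue. Tasks are stored as
# (task_type, JSON payload) rows and claimed oldest-first.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE staging ("
    "id INTEGER PRIMARY KEY, task_type TEXT, payload TEXT, "
    "status TEXT DEFAULT 'pending')"
)

def enqueue(task_type: str, payload: dict) -> None:
    conn.execute(
        "INSERT INTO staging (task_type, payload) VALUES (?, ?)",
        (task_type, json.dumps(payload)),
    )

def dequeue():
    # Claim the oldest pending task, mark it done, return (type, payload).
    row = conn.execute(
        "SELECT id, task_type, payload FROM staging "
        "WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE staging SET status = 'done' WHERE id = ?", (row[0],))
    return row[1], json.loads(row[2])

enqueue("search_poll", {"platform": "ebay"})
print(dequeue())  # ('search_poll', {'platform': 'ebay'})
print(dequeue())  # None
```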
+ +- [ ] **Step 1: Write failing stub import tests** + +```python +# circuitforge-core/tests/test_stubs.py +import pytest +from circuitforge_core.wizard import BaseWizard +from circuitforge_core.pipeline import StagingDB + + +def test_wizard_raises_not_implemented(): + wizard = BaseWizard() + with pytest.raises(NotImplementedError): + wizard.run() + + +def test_pipeline_raises_not_implemented(): + staging = StagingDB() + with pytest.raises(NotImplementedError): + staging.enqueue("job", {}) +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/test_stubs.py -v +``` +Expected: ImportError + +- [ ] **Step 3: Write wizard stub** + +```python +# circuitforge-core/circuitforge_core/wizard/base.py +""" +First-run onboarding wizard base class. +Full implementation is net-new per product (v0.1+ for snipe, etc.) +""" +from __future__ import annotations + + +class BaseWizard: + """ + Base class for CircuitForge first-run wizards. + Subclass and implement run() in each product. + """ + + def run(self) -> None: + """Execute the onboarding wizard flow. Must be overridden by subclass.""" + raise NotImplementedError( + "BaseWizard.run() must be implemented by a product-specific subclass." + ) +``` + +```python +# circuitforge-core/circuitforge_core/wizard/__init__.py +from .base import BaseWizard + +__all__ = ["BaseWizard"] +``` + +- [ ] **Step 4: Write pipeline stub** + +```python +# circuitforge-core/circuitforge_core/pipeline/staging.py +""" +SQLite-backed staging queue for CircuitForge pipeline tasks. +Full implementation deferred — stub raises NotImplementedError. +""" +from __future__ import annotations +from typing import Any + + +class StagingDB: + """ + Staging queue for background pipeline tasks (search polling, score updates, etc.) + Stub: raises NotImplementedError until wired up in a product. 
+ """ + + def enqueue(self, task_type: str, payload: dict[str, Any]) -> None: + """Add a task to the staging queue.""" + raise NotImplementedError( + "StagingDB.enqueue() is not yet implemented. " + "Background task pipeline is a v0.2+ feature." + ) + + def dequeue(self) -> tuple[str, dict[str, Any]] | None: + """Fetch the next pending task. Returns (task_type, payload) or None.""" + raise NotImplementedError( + "StagingDB.dequeue() is not yet implemented." + ) +``` + +```python +# circuitforge-core/circuitforge_core/pipeline/__init__.py +from .staging import StagingDB + +__all__ = ["StagingDB"] +``` + +- [ ] **Step 5: Run tests** + +```bash +conda run -n job-seeker pytest tests/test_stubs.py -v +``` +Expected: 2 PASSED + +- [ ] **Step 6: Verify all tests still pass** + +```bash +conda run -n job-seeker pytest tests/ -v +``` +Expected: All PASSED + +- [ ] **Step 7: Commit** + +```bash +git add circuitforge_core/wizard/ circuitforge_core/pipeline/ tests/test_stubs.py +git commit -m "feat: add wizard and pipeline stubs" +``` + +--- + +## Task 6: Update Peregrine to use circuitforge-core + +**Files:** +- Modify: `peregrine/requirements.txt` +- Modify: `peregrine/scripts/db.py` +- Modify: `peregrine/scripts/llm_router.py` +- Modify: `peregrine/app/wizard/tiers.py` + +- [ ] **Step 1: Add circuitforge-core to Peregrine requirements** + +Add to `/Library/Development/CircuitForge/peregrine/requirements.txt` (top of file, before other deps): + +``` +-e ../circuitforge-core +``` + +- [ ] **Step 2: Verify install** + +```bash +cd /Library/Development/CircuitForge/peregrine +conda run -n job-seeker pip install -r requirements.txt +conda run -n job-seeker python -c "from circuitforge_core.db import get_connection; print('ok')" +``` +Expected: `ok` + +- [ ] **Step 3: Update peregrine/scripts/db.py — replace get_connection** + +Add at top of file (after existing docstring and imports): + +```python +from circuitforge_core.db import get_connection as _cf_get_connection + +def 
get_connection(db_path=DEFAULT_DB, key=""): + """Thin shim — delegates to circuitforge_core.db.get_connection.""" + return _cf_get_connection(db_path, key) +``` + +Remove the old `get_connection` function body (lines ~15–35). Keep all Peregrine-specific schema (`CREATE_JOBS`, `CREATE_COVER_LETTERS`, etc.) and functions unchanged. + +- [ ] **Step 4: Run Peregrine tests to verify nothing broke** + +```bash +cd /Library/Development/CircuitForge/peregrine +conda run -n job-seeker pytest tests/ -v -x +``` +Expected: All existing tests pass + +- [ ] **Step 5: Update peregrine/scripts/llm_router.py — replace LLMRouter** + +Replace the `LLMRouter` class definition with: + +```python +from circuitforge_core.llm import LLMRouter # noqa: F401 — re-export for existing callers +``` + +Keep the module-level `CONFIG_PATH` constant so any code that references `llm_router.CONFIG_PATH` still works. + +- [ ] **Step 6: Run Peregrine tests again** + +```bash +conda run -n job-seeker pytest tests/ -v -x +``` +Expected: All passing + +- [ ] **Step 7: Update peregrine/app/wizard/tiers.py — import from core** + +At the top of the file, add: + +```python +from circuitforge_core.tiers import can_use as _core_can_use, TIERS, tier_label +``` + +Update the `can_use` function to delegate to core, passing Peregrine's `FEATURES` dict: + +```python +def can_use(feature: str, tier: str, has_byok: bool = False, has_local_vision: bool = False) -> bool: + return _core_can_use(feature, tier, has_byok=has_byok, has_local_vision=has_local_vision, _features=FEATURES) +``` + +Keep the Peregrine-specific `FEATURES` dict and `BYOK_UNLOCKABLE` frozenset in place — they are still defined here, just used via the core function. 
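+
+The shape of that delegation seam can be sketched in isolation. This is a hypothetical stand-in, not the real `circuitforge_core.tiers` implementation: the `FEATURES` values, the tier ordering, and the default-deny behaviour are illustrative, and the BYOK / local-vision unlock logic is omitted.

```python
# Hypothetical stand-in for the core/product split described in Step 7.
# The product keeps its own FEATURES catalogue; the gating rule lives in core
# and receives the catalogue via the _features parameter.

TIERS = ["free", "pro"]  # assumed ordering, lowest to highest

FEATURES = {             # product-specific catalogue (illustrative values)
    "search": "free",
    "cover_letters": "pro",
}

def _core_can_use(feature, tier, has_byok=False, has_local_vision=False, _features=None):
    """Sketch of circuitforge_core.tiers.can_use; BYOK/vision unlocks omitted."""
    required = (_features or {}).get(feature)
    if required is None:
        return False  # unknown features denied by default (assumption)
    return TIERS.index(tier) >= TIERS.index(required)

def can_use(feature, tier, has_byok=False, has_local_vision=False):
    """Product-side shim: delegate to core, passing the local FEATURES dict."""
    return _core_can_use(
        feature, tier,
        has_byok=has_byok, has_local_vision=has_local_vision,
        _features=FEATURES,
    )
```

The point of the `_features` injection is that core owns a single tested gating rule while each product owns only its feature catalogue.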
+ +- [ ] **Step 8: Run full Peregrine tests** + +```bash +conda run -n job-seeker pytest tests/ -v +``` +Expected: All passing + +- [ ] **Step 9: Smoke-test Peregrine startup** + +```bash +cd /Library/Development/CircuitForge/peregrine +./manage.sh status # verify it's running, or start it +``` +Manually open http://localhost:8502 and verify the UI loads without errors. + +- [ ] **Step 10: Commit Peregrine changes** + +```bash +cd /Library/Development/CircuitForge/peregrine +git add requirements.txt scripts/db.py scripts/llm_router.py app/wizard/tiers.py +git commit -m "feat: migrate to circuitforge-core for db, llm router, and tiers" +``` + +- [ ] **Step 11: Commit circuitforge-core README and push both repos** + +```bash +cd /Library/Development/CircuitForge/circuitforge-core +# Write a minimal README then: +git add README.md +git commit -m "docs: add README" +git remote add origin https://git.opensourcesolarpunk.com/Circuit-Forge/circuitforge-core.git +git push -u origin main +``` + +--- + +## Task 7: Final verification + +- [ ] **Step 1: Run circuitforge-core full test suite** + +```bash +cd /Library/Development/CircuitForge/circuitforge-core +conda run -n job-seeker pytest tests/ -v --tb=short +``` +Expected: All PASSED + +- [ ] **Step 2: Run Peregrine full test suite** + +```bash +cd /Library/Development/CircuitForge/peregrine +conda run -n job-seeker pytest tests/ -v --tb=short +``` +Expected: All PASSED + +- [ ] **Step 3: Verify editable install works from a fresh shell** + +```bash +conda run -n job-seeker python -c " +from circuitforge_core.db import get_connection, run_migrations +from circuitforge_core.llm import LLMRouter +from circuitforge_core.tiers import can_use, TIERS +from circuitforge_core.config import require_env, load_env +from circuitforge_core.vision import VisionRouter +from circuitforge_core.wizard import BaseWizard +from circuitforge_core.pipeline import StagingDB +print('All imports OK') +" +``` +Expected: `All imports OK` diff --git 
a/docs/superpowers/plans/2026-03-25-snipe-mvp.md b/docs/superpowers/plans/2026-03-25-snipe-mvp.md
new file mode 100644
index 0000000..21bb9df
--- /dev/null
+++ b/docs/superpowers/plans/2026-03-25-snipe-mvp.md
@@ -0,0 +1,2227 @@
+# Snipe MVP Implementation Plan
+
+> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
+
+**Goal:** Build the Snipe MVP — an eBay listing monitor with seller trust scoring and a faceted-filter Streamlit UI — on top of `circuitforge-core`.
+
+**Architecture:** Streamlit app following Peregrine's patterns. eBay Browse + Seller APIs behind a `PlatformAdapter` interface. Trust scorer runs metadata signals (account age, feedback count, feedback ratio, price vs market, category history) and perceptual hash dedup within the result set. Dynamic filter sidebar generated from live result data. Tier gating uses `circuitforge_core.tiers` with `LOCAL_VISION_UNLOCKABLE` for future photo analysis.
+
+**Prerequisite:** The `circuitforge-core` plan must be complete and `circuitforge-core` installed in the `job-seeker` conda env before starting this plan.
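+
+To make the scoring model concrete before the task breakdown: the plan's aggregator takes five metadata signals, each scored 0–20, and combines them into a 0–100 composite with red flags extracted. The sketch below is illustrative only; the equal weighting and the `<= 5` red-flag threshold are assumptions, and the real logic belongs in `app/trust/aggregator.py`.

```python
# Sketch of the trust-score aggregation: five 0-20 metadata signals sum to a
# 0-100 composite. Equal weights and the <=5 red-flag threshold are assumptions;
# aggregator.py may weight signals differently.

SIGNALS = [
    "account_age_score",
    "feedback_count_score",
    "feedback_ratio_score",
    "price_vs_market_score",
    "category_history_score",
]

def composite_trust_score(signals: dict[str, int]) -> tuple[int, list[str]]:
    """Return (composite 0-100, list of red-flagged signal names)."""
    # Clamp each signal into its 0-20 band; missing signals count as 0.
    clamped = {name: min(20, max(0, signals.get(name, 0))) for name in SIGNALS}
    composite = sum(clamped.values())  # 5 signals x 20 points = 100 max
    red_flags = [name for name, s in clamped.items() if s <= 5]
    return composite, red_flags

score, flags = composite_trust_score({
    "account_age_score": 18,
    "feedback_count_score": 15,
    "feedback_ratio_score": 20,
    "price_vs_market_score": 4,   # priced suspiciously far below market comps
    "category_history_score": 12,
})
# score == 69, flags == ["price_vs_market_score"]
```

A weighted variant would multiply each clamped signal by a per-signal weight before summing; the straight sum above is just the simplest instance of the "weighted sum → composite 0–100" shape named in the File Map.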
+ +**Tech Stack:** Python 3.11+, Streamlit, SQLite, eBay Browse API, eBay Seller API, imagehash (perceptual hashing), Pillow, pytest, Docker + +--- + +## File Map + +| File | Responsibility | +|---|---| +| `app/platforms/__init__.py` | `PlatformAdapter` abstract base class + `SearchFilters` dataclass | +| `app/platforms/ebay/__init__.py` | Package init | +| `app/platforms/ebay/auth.py` | OAuth2 client credentials token manager (fetch, cache, auto-refresh) | +| `app/platforms/ebay/adapter.py` | `EbayAdapter(PlatformAdapter)` — `search()`, `get_seller()`, `get_completed_sales()` | +| `app/platforms/ebay/normaliser.py` | Raw eBay API JSON → `Listing` / `Seller` dataclasses | +| `app/trust/__init__.py` | `TrustScorer` orchestrator — calls metadata + photo scorers, returns `TrustScore` | +| `app/trust/metadata.py` | Five metadata signals → per-signal 0–20 scores | +| `app/trust/photo.py` | Perceptual hash dedup within result set (free); vision analysis stub (paid) | +| `app/trust/aggregator.py` | Weighted sum → composite 0–100, red flag extraction, hard filter logic | +| `app/db/models.py` | `Listing`, `Seller`, `TrustScore`, `MarketComp`, `SavedSearch` dataclasses + SQLite schema strings | +| `app/db/migrations/001_init.sql` | Initial schema: all tables | +| `app/db/store.py` | `Store` — thin SQLite read/write layer for all models | +| `app/ui/Search.py` | Streamlit main page: search bar, results, listing rows | +| `app/ui/components/filters.py` | `render_filter_sidebar(results)` → `FilterState` | +| `app/ui/components/listing_row.py` | `render_listing_row(listing, trust_score)` | +| `app/tiers.py` | Snipe-specific `FEATURES` dict + `LOCAL_VISION_UNLOCKABLE`; delegates `can_use` to core | +| `app/app.py` | Streamlit entrypoint — page config, routing | +| `app/wizard/setup.py` | First-run: collect eBay credentials, verify connection, write `.env` | +| `tests/platforms/test_ebay_auth.py` | Token fetch, cache, expiry, refresh | +| `tests/platforms/test_ebay_normaliser.py` 
| API JSON → dataclass conversion | +| `tests/trust/test_metadata.py` | All five metadata signal scorers | +| `tests/trust/test_photo.py` | Perceptual hash dedup | +| `tests/trust/test_aggregator.py` | Composite score, hard filters, partial score flag | +| `tests/db/test_store.py` | Store read/write round-trips | +| `tests/ui/test_filters.py` | Dynamic filter generation from result set | +| `Dockerfile` | Parent-context build (`context: ..`) | +| `compose.yml` | App service, port 8506 | +| `compose.override.yml` | Dev: bind-mount circuitforge-core, hot reload | +| `manage.sh` | start/stop/restart/status/logs/open | +| `pyproject.toml` | Package deps including `circuitforge-core` | +| `.env.example` | Template with `EBAY_CLIENT_ID`, `EBAY_CLIENT_SECRET`, `EBAY_ENV` | + +--- + +## Task 1: Scaffold repo + +**Files:** `pyproject.toml`, `manage.sh`, `compose.yml`, `compose.override.yml`, `Dockerfile`, `.env.example`, `.gitignore`, `app/__init__.py` + +- [ ] **Step 0: Initialize git repo** + +```bash +cd /Library/Development/CircuitForge/snipe +git init +``` + +- [ ] **Step 1: Write pyproject.toml** + +```toml +# /Library/Development/CircuitForge/snipe/pyproject.toml +[build-system] +requires = ["setuptools>=68"] +build-backend = "setuptools.build_meta" + +[project] +name = "snipe" +version = "0.1.0" +description = "Auction listing monitor and trust scorer" +requires-python = ">=3.11" +dependencies = [ + "circuitforge-core", + "streamlit>=1.32", + "requests>=2.31", + "imagehash>=4.3", + "Pillow>=10.0", + "python-dotenv>=1.0", +] + +[tool.setuptools.packages.find] +where = ["."] +include = ["app*"] + +[tool.pytest.ini_options] +testpaths = ["tests"] +``` + +- [ ] **Step 2: Write .env.example** + +```bash +# /Library/Development/CircuitForge/snipe/.env.example +EBAY_CLIENT_ID=your-client-id-here +EBAY_CLIENT_SECRET=your-client-secret-here +EBAY_ENV=production # or: sandbox +SNIPE_DB=data/snipe.db +``` + +- [ ] **Step 3: Write Dockerfile** + +```dockerfile +# 
/Library/Development/CircuitForge/snipe/Dockerfile +FROM python:3.11-slim + +WORKDIR /app + +# Install circuitforge-core from sibling directory (compose sets context: ..) +COPY circuitforge-core/ ./circuitforge-core/ +RUN pip install --no-cache-dir -e ./circuitforge-core + +# Install snipe +COPY snipe/ ./snipe/ +WORKDIR /app/snipe +RUN pip install --no-cache-dir -e . + +EXPOSE 8506 +CMD ["streamlit", "run", "app/app.py", "--server.port=8506", "--server.address=0.0.0.0"] +``` + +- [ ] **Step 4: Write compose.yml** + +```yaml +# /Library/Development/CircuitForge/snipe/compose.yml +services: + snipe: + build: + context: .. + dockerfile: snipe/Dockerfile + ports: + - "8506:8506" + env_file: .env + volumes: + - ./data:/app/snipe/data +``` + +- [ ] **Step 5: Write compose.override.yml** + +```yaml +# /Library/Development/CircuitForge/snipe/compose.override.yml +services: + snipe: + volumes: + - ../circuitforge-core:/app/circuitforge-core + - ./app:/app/snipe/app + - ./data:/app/snipe/data + environment: + - STREAMLIT_SERVER_RUN_ON_SAVE=true +``` + +- [ ] **Step 6: Write .gitignore** + +``` +__pycache__/ +*.pyc +*.pyo +.env +*.egg-info/ +dist/ +.pytest_cache/ +data/ +.superpowers/ +``` + +- [ ] **Step 6b: Write manage.sh** + +```bash +# /Library/Development/CircuitForge/snipe/manage.sh +#!/usr/bin/env bash +set -euo pipefail + +SERVICE=snipe +PORT=8506 +COMPOSE_FILE="compose.yml" + +usage() { + echo "Usage: $0 {start|stop|restart|status|logs|open|update}" + exit 1 +} + +cmd="${1:-help}" +shift || true + +case "$cmd" in + start) + docker compose -f "$COMPOSE_FILE" up -d + echo "$SERVICE started on http://localhost:$PORT" + ;; + stop) + docker compose -f "$COMPOSE_FILE" down + ;; + restart) + docker compose -f "$COMPOSE_FILE" down + docker compose -f "$COMPOSE_FILE" up -d + echo "$SERVICE restarted on http://localhost:$PORT" + ;; + status) + docker compose -f "$COMPOSE_FILE" ps + ;; + logs) + docker compose -f "$COMPOSE_FILE" logs -f "${@:-$SERVICE}" + ;; + open) + xdg-open 
"http://localhost:$PORT" 2>/dev/null || open "http://localhost:$PORT" + ;; + update) + docker compose -f "$COMPOSE_FILE" pull + docker compose -f "$COMPOSE_FILE" up -d --build + ;; + *) + usage + ;; +esac +``` + +```bash +chmod +x /Library/Development/CircuitForge/snipe/manage.sh +``` + +- [ ] **Step 7: Create package __init__.py files** + +```bash +mkdir -p /Library/Development/CircuitForge/snipe/app +touch /Library/Development/CircuitForge/snipe/app/__init__.py +mkdir -p /Library/Development/CircuitForge/snipe/tests +touch /Library/Development/CircuitForge/snipe/tests/__init__.py +``` + +- [ ] **Step 8: Install and verify** + +```bash +cd /Library/Development/CircuitForge/snipe +conda run -n job-seeker pip install -e . +conda run -n job-seeker python -c "import app; print('ok')" +``` +Expected: `ok` + +- [ ] **Step 9: Commit** + +```bash +git add pyproject.toml Dockerfile compose.yml compose.override.yml manage.sh .env.example .gitignore app/__init__.py tests/__init__.py +git commit -m "feat: scaffold snipe repo" +``` + +--- + +## Task 2: Data models and DB + +**Files:** `app/db/__init__.py`, `app/db/models.py`, `app/db/migrations/001_init.sql`, `app/db/store.py`, `tests/db/__init__.py`, `tests/db/test_store.py` + +- [ ] **Step 0: Create package directories** + +```bash +mkdir -p /Library/Development/CircuitForge/snipe/app/db/migrations +touch /Library/Development/CircuitForge/snipe/app/db/__init__.py +mkdir -p /Library/Development/CircuitForge/snipe/tests/db +touch /Library/Development/CircuitForge/snipe/tests/db/__init__.py +``` + +- [ ] **Step 1: Write failing tests** + +```python +# tests/db/test_store.py +import pytest +from pathlib import Path +from app.db.store import Store +from app.db.models import Listing, Seller, TrustScore, MarketComp + + +@pytest.fixture +def store(tmp_path): + return Store(tmp_path / "test.db") + + +def test_store_creates_tables(store): + # If no exception on init, tables exist + pass + + +def test_save_and_get_seller(store): + 
seller = Seller( + platform="ebay", + platform_seller_id="user123", + username="techseller", + account_age_days=730, + feedback_count=450, + feedback_ratio=0.991, + category_history_json="{}", + ) + store.save_seller(seller) + result = store.get_seller("ebay", "user123") + assert result is not None + assert result.username == "techseller" + assert result.feedback_count == 450 + + +def test_save_and_get_listing(store): + listing = Listing( + platform="ebay", + platform_listing_id="ebay-123", + title="RTX 4090 FE", + price=950.00, + currency="USD", + condition="used", + seller_platform_id="user123", + url="https://ebay.com/itm/123", + photo_urls=["https://i.ebayimg.com/1.jpg"], + listing_age_days=3, + ) + store.save_listing(listing) + result = store.get_listing("ebay", "ebay-123") + assert result is not None + assert result.title == "RTX 4090 FE" + assert result.price == 950.00 + + +def test_save_and_get_market_comp(store): + comp = MarketComp( + platform="ebay", + query_hash="abc123", + median_price=1050.0, + sample_count=12, + expires_at="2026-03-26T00:00:00", + ) + store.save_market_comp(comp) + result = store.get_market_comp("ebay", "abc123") + assert result is not None + assert result.median_price == 1050.0 + + +def test_get_market_comp_returns_none_for_expired(store): + comp = MarketComp( + platform="ebay", + query_hash="expired", + median_price=900.0, + sample_count=5, + expires_at="2020-01-01T00:00:00", # past + ) + store.save_market_comp(comp) + result = store.get_market_comp("ebay", "expired") + assert result is None +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/db/test_store.py -v +``` +Expected: ImportError + +- [ ] **Step 3: Write app/db/models.py** + +```python +# app/db/models.py +"""Dataclasses for all Snipe domain objects.""" +from __future__ import annotations +from dataclasses import dataclass, field +from typing import Optional + + +@dataclass +class Seller: + platform: str + platform_seller_id: 
str + username: str + account_age_days: int + feedback_count: int + feedback_ratio: float # 0.0–1.0 + category_history_json: str # JSON blob of past category sales + id: Optional[int] = None + fetched_at: Optional[str] = None + + +@dataclass +class Listing: + platform: str + platform_listing_id: str + title: str + price: float + currency: str + condition: str + seller_platform_id: str + url: str + photo_urls: list[str] = field(default_factory=list) + listing_age_days: int = 0 + id: Optional[int] = None + fetched_at: Optional[str] = None + trust_score_id: Optional[int] = None + + +@dataclass +class TrustScore: + listing_id: int + composite_score: int # 0–100 + account_age_score: int # 0–20 + feedback_count_score: int # 0–20 + feedback_ratio_score: int # 0–20 + price_vs_market_score: int # 0–20 + category_history_score: int # 0–20 + photo_hash_duplicate: bool = False + photo_analysis_json: Optional[str] = None + red_flags_json: str = "[]" + score_is_partial: bool = False + id: Optional[int] = None + scored_at: Optional[str] = None + + +@dataclass +class MarketComp: + platform: str + query_hash: str + median_price: float + sample_count: int + expires_at: str # ISO8601 — checked against current time + id: Optional[int] = None + fetched_at: Optional[str] = None + + +@dataclass +class SavedSearch: + """Schema scaffolded in v0.1; background monitoring wired in v0.2.""" + name: str + query: str + platform: str + filters_json: str = "{}" + id: Optional[int] = None + created_at: Optional[str] = None + last_run_at: Optional[str] = None + + +@dataclass +class PhotoHash: + """Perceptual hash store for cross-search dedup (v0.2+). 
Schema scaffolded in v0.1.""" + listing_id: int + photo_url: str + phash: str # hex string from imagehash + id: Optional[int] = None + first_seen_at: Optional[str] = None +``` + +- [ ] **Step 4: Write app/db/migrations/001_init.sql** + +```sql +-- app/db/migrations/001_init.sql +CREATE TABLE IF NOT EXISTS sellers ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + platform TEXT NOT NULL, + platform_seller_id TEXT NOT NULL, + username TEXT NOT NULL, + account_age_days INTEGER NOT NULL, + feedback_count INTEGER NOT NULL, + feedback_ratio REAL NOT NULL, + category_history_json TEXT NOT NULL DEFAULT '{}', + fetched_at TEXT DEFAULT CURRENT_TIMESTAMP, + UNIQUE(platform, platform_seller_id) +); + +CREATE TABLE IF NOT EXISTS listings ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + platform TEXT NOT NULL, + platform_listing_id TEXT NOT NULL, + title TEXT NOT NULL, + price REAL NOT NULL, + currency TEXT NOT NULL DEFAULT 'USD', + condition TEXT, + seller_platform_id TEXT, + url TEXT, + photo_urls TEXT NOT NULL DEFAULT '[]', + listing_age_days INTEGER DEFAULT 0, + fetched_at TEXT DEFAULT CURRENT_TIMESTAMP, + trust_score_id INTEGER REFERENCES trust_scores(id), + UNIQUE(platform, platform_listing_id) +); + +CREATE TABLE IF NOT EXISTS trust_scores ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + listing_id INTEGER NOT NULL REFERENCES listings(id), + composite_score INTEGER NOT NULL, + account_age_score INTEGER NOT NULL DEFAULT 0, + feedback_count_score INTEGER NOT NULL DEFAULT 0, + feedback_ratio_score INTEGER NOT NULL DEFAULT 0, + price_vs_market_score INTEGER NOT NULL DEFAULT 0, + category_history_score INTEGER NOT NULL DEFAULT 0, + photo_hash_duplicate INTEGER NOT NULL DEFAULT 0, + photo_analysis_json TEXT, + red_flags_json TEXT NOT NULL DEFAULT '[]', + score_is_partial INTEGER NOT NULL DEFAULT 0, + scored_at TEXT DEFAULT CURRENT_TIMESTAMP +); + +CREATE TABLE IF NOT EXISTS market_comps ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + platform TEXT NOT NULL, + query_hash TEXT NOT NULL, + 
median_price REAL NOT NULL, + sample_count INTEGER NOT NULL, + fetched_at TEXT DEFAULT CURRENT_TIMESTAMP, + expires_at TEXT NOT NULL, + UNIQUE(platform, query_hash) +); + +CREATE TABLE IF NOT EXISTS saved_searches ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + name TEXT NOT NULL, + query TEXT NOT NULL, + platform TEXT NOT NULL DEFAULT 'ebay', + filters_json TEXT NOT NULL DEFAULT '{}', + created_at TEXT DEFAULT CURRENT_TIMESTAMP, + last_run_at TEXT +); + +-- PhotoHash: perceptual hash store for cross-search dedup (v0.2+). Schema present in v0.1. +CREATE TABLE IF NOT EXISTS photo_hashes ( + id INTEGER PRIMARY KEY AUTOINCREMENT, + listing_id INTEGER NOT NULL REFERENCES listings(id), + photo_url TEXT NOT NULL, + phash TEXT NOT NULL, + first_seen_at TEXT DEFAULT CURRENT_TIMESTAMP, + UNIQUE(listing_id, photo_url) +); +``` + +- [ ] **Step 5: Write app/db/store.py** + +```python +# app/db/store.py +"""Thin SQLite read/write layer for all Snipe models.""" +from __future__ import annotations +import json +from datetime import datetime, timezone +from pathlib import Path +from typing import Optional + +from circuitforge_core.db import get_connection, run_migrations + +from .models import Listing, Seller, TrustScore, MarketComp + +MIGRATIONS_DIR = Path(__file__).parent / "migrations" + + +class Store: + def __init__(self, db_path: Path): + self._conn = get_connection(db_path) + run_migrations(self._conn, MIGRATIONS_DIR) + + # --- Seller --- + + def save_seller(self, seller: Seller) -> None: + self._conn.execute( + "INSERT OR REPLACE INTO sellers " + "(platform, platform_seller_id, username, account_age_days, " + "feedback_count, feedback_ratio, category_history_json) " + "VALUES (?,?,?,?,?,?,?)", + (seller.platform, seller.platform_seller_id, seller.username, + seller.account_age_days, seller.feedback_count, seller.feedback_ratio, + seller.category_history_json), + ) + self._conn.commit() + + def get_seller(self, platform: str, platform_seller_id: str) -> Optional[Seller]: + row 
= self._conn.execute( + "SELECT platform, platform_seller_id, username, account_age_days, " + "feedback_count, feedback_ratio, category_history_json, id, fetched_at " + "FROM sellers WHERE platform=? AND platform_seller_id=?", + (platform, platform_seller_id), + ).fetchone() + if not row: + return None + return Seller(*row[:7], id=row[7], fetched_at=row[8]) + + # --- Listing --- + + def save_listing(self, listing: Listing) -> None: + self._conn.execute( + "INSERT OR REPLACE INTO listings " + "(platform, platform_listing_id, title, price, currency, condition, " + "seller_platform_id, url, photo_urls, listing_age_days) " + "VALUES (?,?,?,?,?,?,?,?,?,?)", + (listing.platform, listing.platform_listing_id, listing.title, + listing.price, listing.currency, listing.condition, + listing.seller_platform_id, listing.url, + json.dumps(listing.photo_urls), listing.listing_age_days), + ) + self._conn.commit() + + def get_listing(self, platform: str, platform_listing_id: str) -> Optional[Listing]: + row = self._conn.execute( + "SELECT platform, platform_listing_id, title, price, currency, condition, " + "seller_platform_id, url, photo_urls, listing_age_days, id, fetched_at " + "FROM listings WHERE platform=? 
AND platform_listing_id=?", + (platform, platform_listing_id), + ).fetchone() + if not row: + return None + return Listing( + *row[:8], + photo_urls=json.loads(row[8]), + listing_age_days=row[9], + id=row[10], + fetched_at=row[11], + ) + + # --- MarketComp --- + + def save_market_comp(self, comp: MarketComp) -> None: + self._conn.execute( + "INSERT OR REPLACE INTO market_comps " + "(platform, query_hash, median_price, sample_count, expires_at) " + "VALUES (?,?,?,?,?)", + (comp.platform, comp.query_hash, comp.median_price, + comp.sample_count, comp.expires_at), + ) + self._conn.commit() + + def get_market_comp(self, platform: str, query_hash: str) -> Optional[MarketComp]: + row = self._conn.execute( + "SELECT platform, query_hash, median_price, sample_count, expires_at, id, fetched_at " + "FROM market_comps WHERE platform=? AND query_hash=? AND expires_at > ?", + (platform, query_hash, datetime.now(timezone.utc).isoformat()), + ).fetchone() + if not row: + return None + return MarketComp(*row[:5], id=row[5], fetched_at=row[6]) +``` + +- [ ] **Step 6: Run tests** + +```bash +conda run -n job-seeker pytest tests/db/test_store.py -v +``` +Expected: 5 PASSED + +- [ ] **Step 7: Commit** + +```bash +git add app/db/ tests/db/ +git commit -m "feat: add data models, migrations, and store" +``` + +--- + +## Task 3: eBay OAuth token manager + +**Files:** `app/platforms/__init__.py`, `app/platforms/ebay/__init__.py`, `app/platforms/ebay/auth.py`, `tests/platforms/__init__.py`, `tests/platforms/test_ebay_auth.py` + +- [ ] **Step 0: Create platform package directories** + +```bash +mkdir -p /Library/Development/CircuitForge/snipe/app/platforms/ebay +touch /Library/Development/CircuitForge/snipe/app/platforms/ebay/__init__.py +mkdir -p /Library/Development/CircuitForge/snipe/tests/platforms +touch /Library/Development/CircuitForge/snipe/tests/platforms/__init__.py +``` + +- [ ] **Step 1: Write failing tests** + +```python +# tests/platforms/test_ebay_auth.py +import time +import 
requests +from unittest.mock import patch, MagicMock +import pytest +from app.platforms.ebay.auth import EbayTokenManager + + +def test_fetches_token_on_first_call(): + manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox") + mock_resp = MagicMock() + mock_resp.json.return_value = {"access_token": "tok123", "expires_in": 7200} + mock_resp.raise_for_status = MagicMock() + with patch("app.platforms.ebay.auth.requests.post", return_value=mock_resp) as mock_post: + token = manager.get_token() + assert token == "tok123" + assert mock_post.called + + +def test_returns_cached_token_before_expiry(): + manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox") + manager._token = "cached" + manager._expires_at = time.time() + 3600 + with patch("app.platforms.ebay.auth.requests.post") as mock_post: + token = manager.get_token() + assert token == "cached" + assert not mock_post.called + + +def test_refreshes_token_after_expiry(): + manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox") + manager._token = "old" + manager._expires_at = time.time() - 1 # expired + mock_resp = MagicMock() + mock_resp.json.return_value = {"access_token": "new_tok", "expires_in": 7200} + mock_resp.raise_for_status = MagicMock() + with patch("app.platforms.ebay.auth.requests.post", return_value=mock_resp): + token = manager.get_token() + assert token == "new_tok" + + +def test_token_fetch_failure_raises(): + """Spec requires: on token fetch failure, raise immediately — no silent fallback.""" + manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox") + with patch("app.platforms.ebay.auth.requests.post", side_effect=requests.RequestException("network error")): + with pytest.raises(requests.RequestException): + manager.get_token() +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/platforms/test_ebay_auth.py -v +``` + +- [ ] **Step 3: Write platform adapter 
base class** + +```python +# app/platforms/__init__.py +"""PlatformAdapter abstract base and shared types.""" +from __future__ import annotations +from abc import ABC, abstractmethod +from dataclasses import dataclass, field +from typing import Optional +from app.db.models import Listing, Seller + + +@dataclass +class SearchFilters: + max_price: Optional[float] = None + min_price: Optional[float] = None + condition: Optional[list[str]] = field(default_factory=list) + location_radius_km: Optional[int] = None + + +class PlatformAdapter(ABC): + @abstractmethod + def search(self, query: str, filters: SearchFilters) -> list[Listing]: ... + + @abstractmethod + def get_seller(self, seller_platform_id: str) -> Optional[Seller]: ... + + @abstractmethod + def get_completed_sales(self, query: str) -> list[Listing]: + """Fetch recently completed/sold listings for price comp data.""" + ... +``` + +- [ ] **Step 4: Write auth.py** + +```python +# app/platforms/ebay/auth.py +"""eBay OAuth2 client credentials token manager.""" +from __future__ import annotations +import base64 +import time +from typing import Optional +import requests + +EBAY_OAUTH_URLS = { + "production": "https://api.ebay.com/identity/v1/oauth2/token", + "sandbox": "https://api.sandbox.ebay.com/identity/v1/oauth2/token", +} + + +class EbayTokenManager: + """Fetches and caches eBay app-level OAuth tokens. 
Thread-safe for single process.""" + + def __init__(self, client_id: str, client_secret: str, env: str = "production"): + self._client_id = client_id + self._client_secret = client_secret + self._token_url = EBAY_OAUTH_URLS[env] + self._token: Optional[str] = None + self._expires_at: float = 0.0 + + def get_token(self) -> str: + """Return a valid access token, fetching or refreshing as needed.""" + if self._token and time.time() < self._expires_at - 60: + return self._token + self._fetch_token() + return self._token # type: ignore[return-value] + + def _fetch_token(self) -> None: + credentials = base64.b64encode( + f"{self._client_id}:{self._client_secret}".encode() + ).decode() + resp = requests.post( + self._token_url, + headers={ + "Authorization": f"Basic {credentials}", + "Content-Type": "application/x-www-form-urlencoded", + }, + data={"grant_type": "client_credentials", "scope": "https://api.ebay.com/oauth/api_scope"}, + ) + resp.raise_for_status() + data = resp.json() + self._token = data["access_token"] + self._expires_at = time.time() + data["expires_in"] +``` + +- [ ] **Step 5: Run tests** + +```bash +conda run -n job-seeker pytest tests/platforms/test_ebay_auth.py -v +``` +Expected: 4 PASSED + +- [ ] **Step 6: Commit** + +```bash +git add app/platforms/ tests/platforms/test_ebay_auth.py +git commit -m "feat: add PlatformAdapter base and eBay token manager" +``` + +--- + +## Task 4: eBay adapter and normaliser + +**Files:** `app/platforms/ebay/normaliser.py`, `app/platforms/ebay/adapter.py`, `tests/platforms/test_ebay_normaliser.py` + +- [ ] **Step 1: Write normaliser tests** + +```python +# tests/platforms/test_ebay_normaliser.py +import pytest +from app.platforms.ebay.normaliser import normalise_listing, normalise_seller + + +def test_normalise_listing_maps_fields(): + raw = { + "itemId": "v1|12345|0", + "title": "RTX 4090 GPU", + "price": {"value": "950.00", "currency": "USD"}, + "condition": "USED", + "seller": {"username": "techguy", 
"feedbackScore": 300, "feedbackPercentage": "99.1"}, + "itemWebUrl": "https://ebay.com/itm/12345", + "image": {"imageUrl": "https://i.ebayimg.com/1.jpg"}, + "additionalImages": [{"imageUrl": "https://i.ebayimg.com/2.jpg"}], + "itemCreationDate": "2026-03-20T00:00:00.000Z", + } + listing = normalise_listing(raw) + assert listing.platform == "ebay" + assert listing.platform_listing_id == "v1|12345|0" + assert listing.title == "RTX 4090 GPU" + assert listing.price == 950.0 + assert listing.condition == "used" + assert listing.seller_platform_id == "techguy" + assert "https://i.ebayimg.com/1.jpg" in listing.photo_urls + assert "https://i.ebayimg.com/2.jpg" in listing.photo_urls + + +def test_normalise_listing_handles_missing_images(): + raw = { + "itemId": "v1|999|0", + "title": "GPU", + "price": {"value": "100.00", "currency": "USD"}, + "condition": "NEW", + "seller": {"username": "u"}, + "itemWebUrl": "https://ebay.com/itm/999", + } + listing = normalise_listing(raw) + assert listing.photo_urls == [] + + +def test_normalise_seller_maps_fields(): + raw = { + "username": "techguy", + "feedbackScore": 300, + "feedbackPercentage": "99.1", + "registrationDate": "2020-03-01T00:00:00.000Z", + "sellerFeedbackSummary": { + "feedbackByCategory": [ + {"transactionPercent": "95.0", "categorySite": "ELECTRONICS", "count": "50"} + ] + } + } + seller = normalise_seller(raw) + assert seller.username == "techguy" + assert seller.feedback_count == 300 + assert seller.feedback_ratio == pytest.approx(0.991, abs=0.001) + assert seller.account_age_days > 0 +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/platforms/test_ebay_normaliser.py -v +``` + +- [ ] **Step 3: Write normaliser.py** + +```python +# app/platforms/ebay/normaliser.py +"""Convert raw eBay API responses into Snipe domain objects.""" +from __future__ import annotations +from datetime import datetime, timezone +from app.db.models import Listing, Seller + + +def 
normalise_listing(raw: dict) -> Listing:
+    price_data = raw.get("price", {})
+    photos = []
+    if "image" in raw:
+        photos.append(raw["image"].get("imageUrl", ""))
+    for img in raw.get("additionalImages", []):
+        url = img.get("imageUrl", "")
+        if url and url not in photos:
+            photos.append(url)
+    photos = [p for p in photos if p]
+
+    listing_age_days = 0
+    created_raw = raw.get("itemCreationDate", "")
+    if created_raw:
+        try:
+            created = datetime.fromisoformat(created_raw.replace("Z", "+00:00"))
+            listing_age_days = (datetime.now(timezone.utc) - created).days
+        except ValueError:
+            pass
+
+    seller = raw.get("seller", {})
+    return Listing(
+        platform="ebay",
+        platform_listing_id=raw["itemId"],
+        title=raw.get("title", ""),
+        price=float(price_data.get("value", 0)),
+        currency=price_data.get("currency", "USD"),
+        condition=raw.get("condition", "").lower(),
+        seller_platform_id=seller.get("username", ""),
+        url=raw.get("itemWebUrl", ""),
+        photo_urls=photos,
+        listing_age_days=listing_age_days,
+    )
+
+
+def normalise_seller(raw: dict) -> Seller:
+    import json  # used for category_history_json below
+
+    # feedbackPercentage arrives as a string like "99.1"; str() guards against
+    # a numeric value sneaking through.
+    feedback_pct = float(str(raw.get("feedbackPercentage", "0")).strip("%")) / 100.0
+
+    account_age_days = 0
+    reg_date_raw = raw.get("registrationDate", "")
+    if reg_date_raw:
+        try:
+            reg_date = datetime.fromisoformat(reg_date_raw.replace("Z", "+00:00"))
+            account_age_days = (datetime.now(timezone.utc) - reg_date).days
+        except ValueError:
+            pass
+
+    category_history = {}
+    summary = raw.get("sellerFeedbackSummary", {})
+    for entry in summary.get("feedbackByCategory", []):
+        category_history[entry.get("categorySite", "")] = int(entry.get("count", 0))
+
+    return Seller(
+        platform="ebay",
+        platform_seller_id=raw["username"],
+        username=raw["username"],
+        account_age_days=account_age_days,
+        feedback_count=int(raw.get("feedbackScore", 0)),
+        feedback_ratio=feedback_pct,
+        category_history_json=json.dumps(category_history),
+    )
+```
+
+- [ ] **Step 4: Write adapter.py**
+
+```python
+# app/platforms/ebay/adapter.py
+"""eBay Browse API + Seller API adapter.""" +from __future__ import annotations +import hashlib +from datetime import datetime, timedelta, timezone +from typing import Optional +import requests + +from app.db.models import Listing, Seller, MarketComp +from app.db.store import Store +from app.platforms import PlatformAdapter, SearchFilters +from app.platforms.ebay.auth import EbayTokenManager +from app.platforms.ebay.normaliser import normalise_listing, normalise_seller + +BROWSE_BASE = { + "production": "https://api.ebay.com/buy/browse/v1", + "sandbox": "https://api.sandbox.ebay.com/buy/browse/v1", +} +# Note: seller lookup uses the Browse API with a seller filter, not a separate Seller API. +# The Commerce Identity /user endpoint returns the calling app's own identity (requires +# user OAuth, not app credentials). Seller metadata is extracted from Browse API inline +# seller fields. registrationDate is available in item detail responses via this path. + + +class EbayAdapter(PlatformAdapter): + def __init__(self, token_manager: EbayTokenManager, store: Store, env: str = "production"): + self._tokens = token_manager + self._store = store + self._browse_base = BROWSE_BASE[env] + + def _headers(self) -> dict: + return {"Authorization": f"Bearer {self._tokens.get_token()}"} + + def search(self, query: str, filters: SearchFilters) -> list[Listing]: + params: dict = {"q": query, "limit": 50} + filter_parts = [] + if filters.max_price: + filter_parts.append(f"price:[..{filters.max_price}],priceCurrency:USD") + if filters.condition: + cond_map = {"new": "NEW", "used": "USED", "open box": "OPEN_BOX", "for parts": "FOR_PARTS_NOT_WORKING"} + ebay_conds = [cond_map[c] for c in filters.condition if c in cond_map] + if ebay_conds: + filter_parts.append(f"conditions:{{{','.join(ebay_conds)}}}") + if filter_parts: + params["filter"] = ",".join(filter_parts) + + resp = requests.get(f"{self._browse_base}/item_summary/search", + headers=self._headers(), params=params) + 
        resp.raise_for_status()
+        items = resp.json().get("itemSummaries", [])
+        return [normalise_listing(item) for item in items]
+
+    def get_seller(self, seller_platform_id: str) -> Optional[Seller]:
+        cached = self._store.get_seller("ebay", seller_platform_id)
+        if cached:
+            return cached
+        try:
+            # Fetch seller data via Browse API: search for one item by this seller.
+            # The Browse API inline seller field includes username, feedbackScore,
+            # feedbackPercentage, and (in item detail responses) registrationDate.
+            # This works with app-level client credentials — no user OAuth required.
+            resp = requests.get(
+                f"{self._browse_base}/item_summary/search",
+                headers={**self._headers(), "X-EBAY-C-MARKETPLACE-ID": "EBAY_US"},
+                params={"seller": seller_platform_id, "limit": 1},
+            )
+            resp.raise_for_status()
+            items = resp.json().get("itemSummaries", [])
+            if not items:
+                return None
+            raw_seller = items[0].get("seller", {})
+            # Summary responses omit registrationDate, which the account-age signal
+            # needs, so upgrade to the item detail (getItem) seller object when we can.
+            item_id = items[0].get("itemId", "")
+            if item_id:
+                from urllib.parse import quote
+                detail = requests.get(
+                    f"{self._browse_base}/item/{quote(item_id, safe='')}",
+                    headers={**self._headers(), "X-EBAY-C-MARKETPLACE-ID": "EBAY_US"},
+                )
+                if detail.ok:
+                    raw_seller = detail.json().get("seller", raw_seller)
+            seller = normalise_seller(raw_seller)
+            self._store.save_seller(seller)
+            return seller
+        except Exception:
+            return None  # Caller handles None gracefully (partial score)
+
+    def get_completed_sales(self, query: str) -> list[Listing]:
+        # Active fixed-price listings stand in for sold comps here; true sold-item
+        # history would need the limited-release Marketplace Insights API.
+        query_hash = hashlib.md5(query.encode()).hexdigest()
+        cached = self._store.get_market_comp("ebay", query_hash)
+        if cached:
+            return []  # Comp data is used directly; return empty to signal cache hit
+
+        params = {"q": query, "limit": 20, "filter": "buyingOptions:{FIXED_PRICE}"}
+        try:
+            resp = requests.get(f"{self._browse_base}/item_summary/search",
+                                headers=self._headers(), params=params)
+            resp.raise_for_status()
+            items = resp.json().get("itemSummaries", [])
+            listings = [normalise_listing(item) for item in items]
+            if listings:
+                prices = sorted(l.price for l in listings)
+                median = prices[len(prices) // 2]
+                comp = MarketComp(
+                    platform="ebay",
+                    query_hash=query_hash,
+                    median_price=median,
+                    sample_count=len(prices),
+                    expires_at=(datetime.now(timezone.utc) + timedelta(hours=6)).isoformat(),
+                )
+                self._store.save_market_comp(comp)
+
            return listings
+        except Exception:
+            return []
+```
+
+- [ ] **Step 5: Run tests**
+
+```bash
+conda run -n job-seeker pytest tests/platforms/ -v
+```
+Expected: All PASSED
+
+- [ ] **Step 6: Commit**
+
+```bash
+git add app/platforms/ tests/platforms/
+git commit -m "feat: add eBay adapter with Browse API search, seller lookup, and market comps"
+```
+
+---
+
+## Task 5: Metadata trust scorer
+
+**Files:** `app/trust/__init__.py`, `app/trust/metadata.py`, `app/trust/photo.py`, `app/trust/aggregator.py`, `tests/trust/__init__.py`, `tests/trust/test_metadata.py`, `tests/trust/test_photo.py`, `tests/trust/test_aggregator.py`
+
+- [ ] **Step 0: Create trust package directories**
+
+```bash
+mkdir -p /Library/Development/CircuitForge/snipe/app/trust
+touch /Library/Development/CircuitForge/snipe/app/trust/__init__.py
+mkdir -p /Library/Development/CircuitForge/snipe/tests/trust
+touch /Library/Development/CircuitForge/snipe/tests/trust/__init__.py
+```
+
+- [ ] **Step 1: Write failing tests**
+
+```python
+# tests/trust/test_metadata.py
+from app.db.models import Seller
+from app.trust.metadata import MetadataScorer
+
+
+def _seller(**kwargs) -> Seller:
+    defaults = dict(
+        platform="ebay", platform_seller_id="u", username="u",
+        account_age_days=730, feedback_count=450,
+        feedback_ratio=0.991, category_history_json='{"ELECTRONICS": 30}',
+    )
+    defaults.update(kwargs)
+    return Seller(**defaults)
+
+
+def test_established_seller_scores_high():
+    scorer = MetadataScorer()
+    scores = scorer.score(_seller(), market_median=1000.0, listing_price=950.0)
+    total = sum(scores.values())
+    assert total >= 80
+
+
+def test_new_account_scores_zero_on_age():
+    scorer = MetadataScorer()
+    scores = scorer.score(_seller(account_age_days=3), market_median=1000.0, listing_price=950.0)
+    assert scores["account_age"] == 0
+
+
+def test_low_feedback_count_scores_low():
+    scorer = MetadataScorer()
+    scores = scorer.score(_seller(feedback_count=2), market_median=1000.0, listing_price=950.0)
+    assert
scores["feedback_count"] < 10 + + +def test_suspicious_price_scores_zero(): + scorer = MetadataScorer() + # 60% below market → zero + scores = scorer.score(_seller(), market_median=1000.0, listing_price=400.0) + assert scores["price_vs_market"] == 0 + + +def test_no_market_data_returns_none(): + scorer = MetadataScorer() + scores = scorer.score(_seller(), market_median=None, listing_price=950.0) + # None signals "data unavailable" — aggregator will set score_is_partial=True + assert scores["price_vs_market"] is None +``` + +```python +# tests/trust/test_photo.py +from app.trust.photo import PhotoScorer + + +def test_no_duplicates_in_single_listing_result(): + scorer = PhotoScorer() + photo_urls_per_listing = [ + ["https://img.com/a.jpg", "https://img.com/b.jpg"], + ["https://img.com/c.jpg"], + ] + # All unique images — no duplicates + results = scorer.check_duplicates(photo_urls_per_listing) + assert all(not r for r in results) + + +def test_duplicate_photo_flagged(): + scorer = PhotoScorer() + # Same URL in two listings = trivially duplicate (hash will match) + photo_urls_per_listing = [ + ["https://img.com/same.jpg"], + ["https://img.com/same.jpg"], + ] + results = scorer.check_duplicates(photo_urls_per_listing) + # Both listings should be flagged + assert results[0] is True or results[1] is True +``` + +```python +# tests/trust/test_aggregator.py +from app.db.models import Seller +from app.trust.aggregator import Aggregator + + +def test_composite_sum_of_five_signals(): + agg = Aggregator() + scores = { + "account_age": 18, "feedback_count": 16, + "feedback_ratio": 20, "price_vs_market": 15, + "category_history": 14, + } + result = agg.aggregate(scores, photo_hash_duplicate=False, seller=None) + assert result.composite_score == 83 + + +def test_hard_filter_new_account(): + from app.db.models import Seller + agg = Aggregator() + scores = {k: 20 for k in ["account_age", "feedback_count", + "feedback_ratio", "price_vs_market", "category_history"]} + young_seller = 
Seller( + platform="ebay", platform_seller_id="u", username="u", + account_age_days=3, feedback_count=0, + feedback_ratio=1.0, category_history_json="{}", + ) + result = agg.aggregate(scores, photo_hash_duplicate=False, seller=young_seller) + assert "new_account" in result.red_flags_json + + +def test_hard_filter_bad_actor_established_account(): + """Established account (count > 20) with very bad ratio → hard filter.""" + from app.db.models import Seller + agg = Aggregator() + scores = {k: 10 for k in ["account_age", "feedback_count", + "feedback_ratio", "price_vs_market", "category_history"]} + bad_seller = Seller( + platform="ebay", platform_seller_id="u", username="u", + account_age_days=730, feedback_count=25, # count > 20 + feedback_ratio=0.70, # ratio < 80% → hard filter + category_history_json="{}", + ) + result = agg.aggregate(scores, photo_hash_duplicate=False, seller=bad_seller) + assert "established_bad_actor" in result.red_flags_json + + +def test_partial_score_flagged_when_signals_missing(): + agg = Aggregator() + scores = { + "account_age": 18, "feedback_count": None, # None = unavailable + "feedback_ratio": 20, "price_vs_market": 15, + "category_history": 14, + } + result = agg.aggregate(scores, photo_hash_duplicate=False, seller=None) + assert result.score_is_partial is True +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/trust/ -v +``` + +- [ ] **Step 3: Write metadata.py** + +```python +# app/trust/metadata.py +"""Five metadata trust signals, each scored 0–20.""" +from __future__ import annotations +import json +from typing import Optional +from app.db.models import Seller + +ELECTRONICS_CATEGORIES = {"ELECTRONICS", "COMPUTERS_TABLETS", "VIDEO_GAMES", "CELL_PHONES"} + + +class MetadataScorer: + def score( + self, + seller: Seller, + market_median: Optional[float], + listing_price: float, + ) -> dict[str, Optional[int]]: + return { + "account_age": self._account_age(seller.account_age_days), + 
"feedback_count": self._feedback_count(seller.feedback_count), + "feedback_ratio": self._feedback_ratio(seller.feedback_ratio, seller.feedback_count), + "price_vs_market": self._price_vs_market(listing_price, market_median), + "category_history": self._category_history(seller.category_history_json), + } + + def _account_age(self, days: int) -> int: + if days < 7: return 0 + if days < 30: return 5 + if days < 90: return 10 + if days < 365: return 15 + return 20 + + def _feedback_count(self, count: int) -> int: + if count < 3: return 0 + if count < 10: return 5 + if count < 50: return 10 + if count < 200: return 15 + return 20 + + def _feedback_ratio(self, ratio: float, count: int) -> int: + if ratio < 0.80 and count > 20: return 0 + if ratio < 0.90: return 5 + if ratio < 0.95: return 10 + if ratio < 0.98: return 15 + return 20 + + def _price_vs_market(self, price: float, median: Optional[float]) -> Optional[int]: + if median is None: return None # data unavailable → aggregator sets score_is_partial + if median <= 0: return None + ratio = price / median + if ratio < 0.50: return 0 # >50% below = scam + if ratio < 0.70: return 5 # >30% below = suspicious + if ratio < 0.85: return 10 + if ratio <= 1.20: return 20 + return 15 # above market = still ok, just expensive + + def _category_history(self, category_history_json: str) -> int: + try: + history = json.loads(category_history_json) + except (ValueError, TypeError): + return 0 + electronics_sales = sum( + v for k, v in history.items() if k in ELECTRONICS_CATEGORIES + ) + if electronics_sales == 0: return 0 + if electronics_sales < 5: return 8 + if electronics_sales < 20: return 14 + return 20 +``` + +- [ ] **Step 4: Write photo.py** + +```python +# app/trust/photo.py +"""Perceptual hash deduplication within a result set (free tier, v0.1).""" +from __future__ import annotations +from typing import Optional +import io +import requests + +try: + import imagehash + from PIL import Image + _IMAGEHASH_AVAILABLE = True 
+except ImportError: + _IMAGEHASH_AVAILABLE = False + + +class PhotoScorer: + """ + check_duplicates: compare images within a single result set. + Cross-session dedup (PhotoHash table) is v0.2. + Vision analysis (real/marketing/EM bag) is v0.2 paid tier. + """ + + def check_duplicates(self, photo_urls_per_listing: list[list[str]]) -> list[bool]: + """ + Returns a list of booleans parallel to photo_urls_per_listing. + True = this listing's primary photo is a duplicate of another listing in the set. + Falls back to URL-equality check if imagehash is unavailable or fetch fails. + """ + if not _IMAGEHASH_AVAILABLE: + return self._url_dedup(photo_urls_per_listing) + + primary_urls = [urls[0] if urls else "" for urls in photo_urls_per_listing] + hashes: list[Optional[str]] = [] + for url in primary_urls: + hashes.append(self._fetch_hash(url)) + + results = [False] * len(photo_urls_per_listing) + seen: dict[str, int] = {} + for i, h in enumerate(hashes): + if h is None: + continue + if h in seen: + results[i] = True + results[seen[h]] = True + else: + seen[h] = i + return results + + def _fetch_hash(self, url: str) -> Optional[str]: + if not url: + return None + try: + resp = requests.get(url, timeout=5, stream=True) + resp.raise_for_status() + img = Image.open(io.BytesIO(resp.content)) + return str(imagehash.phash(img)) + except Exception: + return None + + def _url_dedup(self, photo_urls_per_listing: list[list[str]]) -> list[bool]: + seen: set[str] = set() + results = [] + for urls in photo_urls_per_listing: + primary = urls[0] if urls else "" + if primary and primary in seen: + results.append(True) + else: + if primary: + seen.add(primary) + results.append(False) + return results +``` + +- [ ] **Step 5: Write aggregator.py** + +```python +# app/trust/aggregator.py +"""Composite score and red flag extraction.""" +from __future__ import annotations +import json +from typing import Optional +from app.db.models import Seller, TrustScore + +HARD_FILTER_AGE_DAYS = 7 
+HARD_FILTER_BAD_RATIO_MIN_COUNT = 20
+HARD_FILTER_BAD_RATIO_THRESHOLD = 0.80
+
+
+class Aggregator:
+    def aggregate(
+        self,
+        signal_scores: dict[str, Optional[int]],
+        photo_hash_duplicate: bool,
+        seller: Optional[Seller],
+        listing_id: int = 0,
+    ) -> TrustScore:
+        is_partial = any(v is None for v in signal_scores.values())
+        clean = {k: (v if v is not None else 0) for k, v in signal_scores.items()}
+        composite = sum(clean.values())
+
+        red_flags: list[str] = []
+
+        # Hard filters
+        if seller and seller.account_age_days < HARD_FILTER_AGE_DAYS:
+            red_flags.append("new_account")
+        if seller and (
+            seller.feedback_ratio < HARD_FILTER_BAD_RATIO_THRESHOLD
+            and seller.feedback_count > HARD_FILTER_BAD_RATIO_MIN_COUNT
+        ):
+            red_flags.append("established_bad_actor")
+
+        # Soft flags
+        if seller and seller.account_age_days < 30:
+            red_flags.append("account_under_30_days")
+        if seller and seller.feedback_count < 10:
+            red_flags.append("low_feedback_count")
+        # Check the raw signal, not the cleaned value: None means "no market data",
+        # which is not the same as a zero-scored suspicious price.
+        if signal_scores.get("price_vs_market") == 0:
+            red_flags.append("suspicious_price")
+        if photo_hash_duplicate:
+            red_flags.append("duplicate_photo")
+
+        return TrustScore(
+            listing_id=listing_id,
+            composite_score=composite,
+            account_age_score=clean["account_age"],
+            feedback_count_score=clean["feedback_count"],
+            feedback_ratio_score=clean["feedback_ratio"],
+            price_vs_market_score=clean["price_vs_market"],
+            category_history_score=clean["category_history"],
+            photo_hash_duplicate=photo_hash_duplicate,
+            red_flags_json=json.dumps(red_flags),
+            score_is_partial=is_partial,
+        )
+```
+
+- [ ] **Step 6: Write trust/__init__.py**
+
+```python
+# app/trust/__init__.py
+from .metadata import MetadataScorer
+from .photo import PhotoScorer
+from .aggregator import Aggregator
+from app.db.models import Seller, Listing, TrustScore
+from app.db.store import Store
+import hashlib
+
+
+class TrustScorer:
+    """Orchestrates metadata + photo scoring for a batch of listings."""
+
+    def __init__(self, store: Store):
+        self._store = store
+
self._meta = MetadataScorer() + self._photo = PhotoScorer() + self._agg = Aggregator() + + def score_batch( + self, + listings: list[Listing], + query: str, + ) -> list[TrustScore]: + query_hash = hashlib.md5(query.encode()).hexdigest() + comp = self._store.get_market_comp("ebay", query_hash) + market_median = comp.median_price if comp else None + + photo_url_sets = [l.photo_urls for l in listings] + duplicates = self._photo.check_duplicates(photo_url_sets) + + scores = [] + for listing, is_dup in zip(listings, duplicates): + seller = self._store.get_seller("ebay", listing.seller_platform_id) + if seller: + signal_scores = self._meta.score(seller, market_median, listing.price) + else: + signal_scores = {k: None for k in + ["account_age", "feedback_count", "feedback_ratio", + "price_vs_market", "category_history"]} + trust = self._agg.aggregate(signal_scores, is_dup, seller, listing.id or 0) + scores.append(trust) + return scores +``` + +- [ ] **Step 7: Run all trust tests** + +```bash +conda run -n job-seeker pytest tests/trust/ -v +``` +Expected: All PASSED + +- [ ] **Step 8: Commit** + +```bash +git add app/trust/ tests/trust/ +git commit -m "feat: add metadata scorer, photo hash dedup, and trust aggregator" +``` + +--- + +## Task 6: Tier gating + +**Files:** `app/tiers.py`, `tests/test_tiers.py` + +- [ ] **Step 1: Write failing tests** + +```python +# tests/test_tiers.py +from app.tiers import can_use, FEATURES, LOCAL_VISION_UNLOCKABLE + + +def test_metadata_scoring_is_free(): + assert can_use("metadata_trust_scoring", tier="free") is True + + +def test_photo_analysis_is_paid(): + assert can_use("photo_analysis", tier="free") is False + assert can_use("photo_analysis", tier="paid") is True + + +def test_local_vision_unlocks_photo_analysis(): + assert can_use("photo_analysis", tier="free", has_local_vision=True) is True + + +def test_byok_does_not_unlock_photo_analysis(): + assert can_use("photo_analysis", tier="free", has_byok=True) is False + + +def 
test_saved_searches_require_paid(): + assert can_use("saved_searches", tier="free") is False + assert can_use("saved_searches", tier="paid") is True +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/test_tiers.py -v +``` + +- [ ] **Step 3: Write app/tiers.py** + +```python +# app/tiers.py +"""Snipe feature gates. Delegates to circuitforge_core.tiers.""" +from __future__ import annotations +from circuitforge_core.tiers import can_use as _core_can_use, TIERS # noqa: F401 + +# Feature key → minimum tier required. +FEATURES: dict[str, str] = { + # Free tier + "metadata_trust_scoring": "free", + "hash_dedup": "free", + # Paid tier + "photo_analysis": "paid", + "serial_number_check": "paid", + "ai_image_detection": "paid", + "reverse_image_search": "paid", + "saved_searches": "paid", + "background_monitoring": "paid", +} + +# Photo analysis features unlock if user has local vision model (moondream2). +LOCAL_VISION_UNLOCKABLE: frozenset[str] = frozenset({ + "photo_analysis", + "serial_number_check", +}) + + +def can_use( + feature: str, + tier: str = "free", + has_byok: bool = False, + has_local_vision: bool = False, +) -> bool: + if has_local_vision and feature in LOCAL_VISION_UNLOCKABLE: + return True + return _core_can_use(feature, tier, has_byok=has_byok, _features=FEATURES) +``` + +- [ ] **Step 4: Run tests** + +```bash +conda run -n job-seeker pytest tests/test_tiers.py -v +``` +Expected: 5 PASSED + +- [ ] **Step 5: Commit** + +```bash +git add app/tiers.py tests/test_tiers.py +git commit -m "feat: add snipe tier gates with LOCAL_VISION_UNLOCKABLE" +``` + +--- + +## Task 7: Results UI + +**Files:** `app/ui/__init__.py`, `app/ui/components/__init__.py`, `app/ui/components/filters.py`, `app/ui/components/listing_row.py`, `app/ui/Search.py`, `app/app.py`, `tests/ui/__init__.py`, `tests/ui/test_filters.py` + +- [ ] **Step 0: Create UI package directories** + +```bash +mkdir -p 
/Library/Development/CircuitForge/snipe/app/ui/components +touch /Library/Development/CircuitForge/snipe/app/ui/__init__.py +touch /Library/Development/CircuitForge/snipe/app/ui/components/__init__.py +mkdir -p /Library/Development/CircuitForge/snipe/tests/ui +touch /Library/Development/CircuitForge/snipe/tests/ui/__init__.py +``` + +- [ ] **Step 1: Write failing filter tests** + +```python +# tests/ui/test_filters.py +from app.db.models import Listing, TrustScore +from app.ui.components.filters import build_filter_options + + +def _listing(price, condition, score): + return ( + Listing("ebay", "1", "GPU", price, "USD", condition, "u", "https://ebay.com", [], 1), + TrustScore(0, score, 10, 10, 10, 10, 10), + ) + + +def test_price_range_from_results(): + pairs = [_listing(500, "used", 80), _listing(1200, "new", 60)] + opts = build_filter_options(pairs) + assert opts["price_min"] == 500 + assert opts["price_max"] == 1200 + + +def test_conditions_from_results(): + pairs = [_listing(500, "used", 80), _listing(1200, "new", 60), _listing(800, "used", 70)] + opts = build_filter_options(pairs) + assert "used" in opts["conditions"] + assert opts["conditions"]["used"] == 2 + assert opts["conditions"]["new"] == 1 + + +def test_missing_condition_not_included(): + pairs = [_listing(500, "used", 80)] + opts = build_filter_options(pairs) + assert "new" not in opts["conditions"] + + +def test_trust_score_bands(): + pairs = [_listing(500, "used", 85), _listing(700, "new", 60), _listing(400, "used", 20)] + opts = build_filter_options(pairs) + assert opts["score_bands"]["safe"] == 1 # 80+ + assert opts["score_bands"]["review"] == 1 # 50–79 + assert opts["score_bands"]["skip"] == 1 # <50 +``` + +- [ ] **Step 2: Run to verify failure** + +```bash +conda run -n job-seeker pytest tests/ui/ -v +``` + +- [ ] **Step 3: Write filters.py** + +```python +# app/ui/components/filters.py +"""Build dynamic filter options from a result set and render the Streamlit sidebar.""" +from __future__ 
import annotations +from dataclasses import dataclass, field +from typing import Optional +import streamlit as st +from app.db.models import Listing, TrustScore + + +@dataclass +class FilterOptions: + price_min: float + price_max: float + conditions: dict[str, int] # condition → count + score_bands: dict[str, int] # safe/review/skip → count + has_real_photo: int = 0 + has_em_bag: int = 0 + duplicate_count: int = 0 + new_account_count: int = 0 + free_shipping_count: int = 0 + + +@dataclass +class FilterState: + min_trust_score: int = 0 + min_price: Optional[float] = None + max_price: Optional[float] = None + min_account_age_days: int = 0 + min_feedback_count: int = 0 + min_feedback_ratio: float = 0.0 + conditions: list[str] = field(default_factory=list) + hide_new_accounts: bool = False + hide_marketing_photos: bool = False + hide_suspicious_price: bool = False + hide_duplicate_photos: bool = False + + +def build_filter_options( + pairs: list[tuple[Listing, TrustScore]], +) -> FilterOptions: + prices = [l.price for l, _ in pairs if l.price > 0] + conditions: dict[str, int] = {} + safe = review = skip = 0 + dup_count = new_acct = 0 + + for listing, ts in pairs: + cond = listing.condition or "unknown" + conditions[cond] = conditions.get(cond, 0) + 1 + if ts.composite_score >= 80: + safe += 1 + elif ts.composite_score >= 50: + review += 1 + else: + skip += 1 + if ts.photo_hash_duplicate: + dup_count += 1 + import json + flags = json.loads(ts.red_flags_json or "[]") + if "new_account" in flags or "account_under_30_days" in flags: + new_acct += 1 + + return FilterOptions( + price_min=min(prices) if prices else 0, + price_max=max(prices) if prices else 0, + conditions=conditions, + score_bands={"safe": safe, "review": review, "skip": skip}, + duplicate_count=dup_count, + new_account_count=new_acct, + ) + + +def render_filter_sidebar( + pairs: list[tuple[Listing, TrustScore]], + opts: FilterOptions, +) -> FilterState: + """Render filter sidebar and return current 
FilterState.""" + state = FilterState() + + st.sidebar.markdown("### Filters") + st.sidebar.caption(f"{len(pairs)} results") + + state.min_trust_score = st.sidebar.slider("Min trust score", 0, 100, 0, key="min_trust") + st.sidebar.caption( + f"🟢 Safe (80+): {opts.score_bands['safe']} " + f"🟡 Review (50–79): {opts.score_bands['review']} " + f"🔴 Skip (<50): {opts.score_bands['skip']}" + ) + + st.sidebar.markdown("**Price**") + col1, col2 = st.sidebar.columns(2) + state.min_price = col1.number_input("Min $", value=opts.price_min, step=50.0, key="min_p") + state.max_price = col2.number_input("Max $", value=opts.price_max, step=50.0, key="max_p") + + state.min_account_age_days = st.sidebar.slider( + "Account age (min days)", 0, 365, 0, key="age") + state.min_feedback_count = st.sidebar.slider( + "Feedback count (min)", 0, 500, 0, key="fb_count") + state.min_feedback_ratio = st.sidebar.slider( + "Positive feedback % (min)", 0, 100, 0, key="fb_ratio") / 100.0 + + if opts.conditions: + st.sidebar.markdown("**Condition**") + selected = [] + for cond, count in sorted(opts.conditions.items()): + if st.sidebar.checkbox(f"{cond} ({count})", value=True, key=f"cond_{cond}"): + selected.append(cond) + state.conditions = selected + + st.sidebar.markdown("**Hide if flagged**") + state.hide_new_accounts = st.sidebar.checkbox( + f"New account (<30d) ({opts.new_account_count})", key="hide_new") + state.hide_suspicious_price = st.sidebar.checkbox("Suspicious price", key="hide_price") + state.hide_duplicate_photos = st.sidebar.checkbox( + f"Duplicate photo ({opts.duplicate_count})", key="hide_dup") + + if st.sidebar.button("Reset filters", key="reset"): + st.rerun() + + return state +``` + +- [ ] **Step 4: Run filter tests** + +```bash +conda run -n job-seeker pytest tests/ui/test_filters.py -v +``` +Expected: 4 PASSED + +- [ ] **Step 5: Write listing_row.py** + +```python +# app/ui/components/listing_row.py +"""Render a single listing row with trust score, badges, and error states.""" 
+from __future__ import annotations
+import json
+import streamlit as st
+from app.db.models import Listing, TrustScore, Seller
+from typing import Optional
+
+
+def _score_colour(score: int) -> str:
+    if score >= 80: return "🟢"
+    if score >= 50: return "🟡"
+    return "🔴"
+
+
+def _flag_label(flag: str) -> str:
+    labels = {
+        "new_account": "✗ New account",
+        "account_under_30_days": "⚠ Account <30d",
+        "low_feedback_count": "⚠ Low feedback",
+        "suspicious_price": "✗ Suspicious price",
+        "duplicate_photo": "✗ Duplicate photo",
+        "established_bad_actor": "✗ Bad actor",
+        "marketing_photo": "✗ Marketing photo",
+    }
+    return labels.get(flag, f"⚠ {flag}")
+
+
+def render_listing_row(
+    listing: Listing,
+    trust: Optional[TrustScore],
+    seller: Optional[Seller] = None,
+) -> None:
+    col_img, col_info, col_score = st.columns([1, 5, 2])
+
+    with col_img:
+        if listing.photo_urls:
+            # Spec requires graceful 404 handling: show placeholder on failure
+            try:
+                import requests as _req
+                r = _req.head(listing.photo_urls[0], timeout=3, allow_redirects=True)
+                if r.status_code == 200:
+                    st.image(listing.photo_urls[0], width=80)
+                else:
+                    st.markdown("📷 *Photo unavailable*")
+            except Exception:
+                st.markdown("📷 *Photo unavailable*")
+        else:
+            st.markdown("📷 *No photo*")
+
+    with col_info:
+        st.markdown(f"**{listing.title}**")
+        if seller:
+            age_str = f"{seller.account_age_days // 365}yr" if seller.account_age_days >= 365 \
+                else f"{seller.account_age_days}d"
+            st.caption(
+                f"{seller.username} · {seller.feedback_count} fb · "
+                f"{seller.feedback_ratio*100:.1f}% · member {age_str}"
+            )
+        else:
+            st.caption(f"{listing.seller_platform_id} · *Seller data unavailable*")
+
+        if trust:
+            flags = json.loads(trust.red_flags_json or "[]")
+            if flags:
+                # Pill-style badges; the inline style is a placeholder, tune to taste.
+                badge_html = " ".join(
+                    f'<span style="border:1px solid #d33; border-radius:4px; '
+                    f'padding:1px 6px; font-size:0.8em;">{_flag_label(f)}</span>'
+                    for f in flags
+                )
+                st.markdown(badge_html, unsafe_allow_html=True)
+            if trust.score_is_partial:
+                st.caption("⚠ Partial score — some data unavailable")
+        else:
+            st.caption("⚠ Could not score
this listing") + + with col_score: + if trust: + icon = _score_colour(trust.composite_score) + st.metric(label="Trust", value=f"{icon} {trust.composite_score}") + else: + st.metric(label="Trust", value="?") + st.markdown(f"**${listing.price:,.0f}**") + st.markdown(f"[Open eBay ↗]({listing.url})") + + st.divider() +``` + +- [ ] **Step 6: Write Search.py** + +```python +# app/ui/Search.py +"""Main search + results page.""" +from __future__ import annotations +import os +from pathlib import Path +import streamlit as st +from circuitforge_core.config import load_env +from app.db.store import Store +from app.platforms import SearchFilters +from app.platforms.ebay.auth import EbayTokenManager +from app.platforms.ebay.adapter import EbayAdapter +from app.trust import TrustScorer +from app.ui.components.filters import build_filter_options, render_filter_sidebar, FilterState +from app.ui.components.listing_row import render_listing_row + +load_env(Path(".env")) +_DB_PATH = Path(os.environ.get("SNIPE_DB", "data/snipe.db")) +_DB_PATH.parent.mkdir(exist_ok=True) + + +def _get_adapter() -> EbayAdapter: + store = Store(_DB_PATH) + tokens = EbayTokenManager( + client_id=os.environ.get("EBAY_CLIENT_ID", ""), + client_secret=os.environ.get("EBAY_CLIENT_SECRET", ""), + env=os.environ.get("EBAY_ENV", "production"), + ) + return EbayAdapter(tokens, store, env=os.environ.get("EBAY_ENV", "production")) + + +def _passes_filter(listing, trust, seller, state: FilterState) -> bool: + import json + if trust and trust.composite_score < state.min_trust_score: + return False + if state.min_price and listing.price < state.min_price: + return False + if state.max_price and listing.price > state.max_price: + return False + if state.conditions and listing.condition not in state.conditions: + return False + if seller: + if seller.account_age_days < state.min_account_age_days: + return False + if seller.feedback_count < state.min_feedback_count: + return False + if seller.feedback_ratio < 
state.min_feedback_ratio: + return False + if trust: + flags = json.loads(trust.red_flags_json or "[]") + if state.hide_new_accounts and "account_under_30_days" in flags: + return False + if state.hide_suspicious_price and "suspicious_price" in flags: + return False + if state.hide_duplicate_photos and "duplicate_photo" in flags: + return False + return True + + +def render() -> None: + st.title("🔍 Snipe — eBay Listing Search") + + col_q, col_price, col_btn = st.columns([4, 2, 1]) + query = col_q.text_input("Search", placeholder="RTX 4090 GPU", label_visibility="collapsed") + max_price = col_price.number_input("Max price $", min_value=0.0, value=0.0, + step=50.0, label_visibility="collapsed") + search_clicked = col_btn.button("Search", use_container_width=True) + + if not search_clicked or not query: + st.info("Enter a search term and click Search.") + return + + with st.spinner("Fetching listings..."): + try: + adapter = _get_adapter() + filters = SearchFilters(max_price=max_price if max_price > 0 else None) + listings = adapter.search(query, filters) + adapter.get_completed_sales(query) # warm the comps cache + except Exception as e: + st.error(f"eBay search failed: {e}") + return + + if not listings: + st.warning("No listings found.") + return + + store = Store(_DB_PATH) + for listing in listings: + store.save_listing(listing) + if listing.seller_platform_id: + seller = adapter.get_seller(listing.seller_platform_id) + if seller: + store.save_seller(seller) + + scorer = TrustScorer(store) + trust_scores = scorer.score_batch(listings, query) + pairs = list(zip(listings, trust_scores)) + + opts = build_filter_options(pairs) + filter_state = render_filter_sidebar(pairs, opts) + + sort_col = st.selectbox("Sort by", ["Trust score", "Price ↑", "Price ↓", "Newest"], + label_visibility="collapsed") + + def sort_key(pair): + l, t = pair + if sort_col == "Trust score": return -(t.composite_score if t else 0) + if sort_col == "Price ↑": return l.price + if sort_col == 
"Price ↓": return -l.price + return l.listing_age_days + + sorted_pairs = sorted(pairs, key=sort_key) + visible = [(l, t) for l, t in sorted_pairs + if _passes_filter(l, t, store.get_seller("ebay", l.seller_platform_id), filter_state)] + hidden_count = len(sorted_pairs) - len(visible) + + st.caption(f"{len(visible)} results · {hidden_count} hidden by filters") + + for listing, trust in visible: + seller = store.get_seller("ebay", listing.seller_platform_id) + render_listing_row(listing, trust, seller) + + if hidden_count: + if st.button(f"Show {hidden_count} hidden results"): + # Track visible by (platform, platform_listing_id) to avoid object-identity comparison + visible_ids = {(l.platform, l.platform_listing_id) for l, _ in visible} + for listing, trust in sorted_pairs: + if (listing.platform, listing.platform_listing_id) not in visible_ids: + seller = store.get_seller("ebay", listing.seller_platform_id) + render_listing_row(listing, trust, seller) +``` + +- [ ] **Step 7: Write app/app.py** + +```python +# app/app.py +"""Streamlit entrypoint.""" +import streamlit as st + +st.set_page_config( + page_title="Snipe", + page_icon="🎯", + layout="wide", + initial_sidebar_state="expanded", +) + +from app.ui.Search import render +render() +``` + +- [ ] **Step 8: Run all tests** + +```bash +conda run -n job-seeker pytest tests/ -v --tb=short +``` +Expected: All PASSED + +- [ ] **Step 9: Smoke-test the UI** + +```bash +cd /Library/Development/CircuitForge/snipe +cp .env.example .env +# Fill in real EBAY_CLIENT_ID and EBAY_CLIENT_SECRET in .env +conda run -n job-seeker streamlit run app/app.py --server.port 8506 +``` +Open http://localhost:8506, search for "RTX 4090", verify results appear with trust scores. 
+ +- [ ] **Step 10: Commit** + +```bash +git add app/ui/ app/app.py tests/ui/ +git commit -m "feat: add search UI with dynamic filter sidebar and listing rows" +``` + +--- + +## Task 7b: First-run wizard stub + +**Files:** `app/wizard/__init__.py`, `app/wizard/setup.py` + +The spec (section 3.4) includes `app/wizard/` in the directory structure. This task creates a stub that collects eBay credentials on first run and writes `.env`. Full wizard UX (multi-step onboarding flow) is wired in a later pass; this stub ensures the import path exists and the first-run gate works. + +- [ ] **Step 1: Write wizard/setup.py** + +```python +# app/wizard/setup.py +"""First-run wizard: collect eBay credentials and write .env.""" +from __future__ import annotations +from pathlib import Path +import streamlit as st +from circuitforge_core.wizard import BaseWizard + + +class SnipeSetupWizard(BaseWizard): + """ + Guides the user through first-run setup: + 1. Enter eBay Client ID + Secret + 2. Choose sandbox vs production + 3. Verify connection (token fetch) + 4. Write .env file + """ + + def __init__(self, env_path: Path = Path(".env")): + self._env_path = env_path + + def run(self) -> bool: + """Run the setup wizard. Returns True if setup completed successfully.""" + st.title("🎯 Snipe — First Run Setup") + st.info( + "To use Snipe, you need eBay developer credentials. " + "Register at https://developer.ebay.com and create an app to get your Client ID and Secret." 
+ ) + + client_id = st.text_input("eBay Client ID", type="password") + client_secret = st.text_input("eBay Client Secret", type="password") + env = st.selectbox("eBay Environment", ["production", "sandbox"]) + + if st.button("Save and verify"): + if not client_id or not client_secret: + st.error("Both Client ID and Secret are required.") + return False + # Write .env + self._env_path.write_text( + f"EBAY_CLIENT_ID={client_id}\n" + f"EBAY_CLIENT_SECRET={client_secret}\n" + f"EBAY_ENV={env}\n" + f"SNIPE_DB=data/snipe.db\n" + ) + st.success(f".env written to {self._env_path}. Reload the app to begin searching.") + return True + return False + + def is_configured(self) -> bool: + """Return True if .env exists and has eBay credentials.""" + if not self._env_path.exists(): + return False + text = self._env_path.read_text() + return "EBAY_CLIENT_ID=" in text and "EBAY_CLIENT_SECRET=" in text +``` + +```python +# app/wizard/__init__.py +from .setup import SnipeSetupWizard + +__all__ = ["SnipeSetupWizard"] +``` + +- [ ] **Step 2: Wire wizard gate into app.py** + +Update `app/app.py` to show the wizard on first run: + +```python +# app/app.py +"""Streamlit entrypoint.""" +from pathlib import Path +import streamlit as st +from app.wizard import SnipeSetupWizard + +st.set_page_config( + page_title="Snipe", + page_icon="🎯", + layout="wide", + initial_sidebar_state="expanded", +) + +wizard = SnipeSetupWizard(env_path=Path(".env")) +if not wizard.is_configured(): + wizard.run() + st.stop() + +from app.ui.Search import render +render() +``` + +- [ ] **Step 3: Run all tests** + +```bash +conda run -n job-seeker pytest tests/ -v --tb=short +``` +Expected: All PASSED (no new tests needed — wizard is UI-only code) + +- [ ] **Step 4: Commit** + +```bash +git add app/wizard/ app/app.py +git commit -m "feat: add first-run setup wizard stub" +``` + +--- + +## Task 8: Docker build and manage.sh + +- [ ] **Step 1: Test Docker build** + +```bash +cd /Library/Development/CircuitForge +docker 
compose -f snipe/compose.yml build +``` +Expected: Build succeeds + +- [ ] **Step 2: Test Docker run** + +```bash +cd /Library/Development/CircuitForge/snipe +docker compose up -d +``` +Open http://localhost:8506, verify UI loads. + +- [ ] **Step 3: Test manage.sh** + +```bash +./manage.sh status +./manage.sh logs +./manage.sh stop +./manage.sh start +./manage.sh open # should open http://localhost:8506 +``` + +- [ ] **Step 4: Final commit and push** + +```bash +git add . +git commit -m "feat: Snipe MVP v0.1 — eBay trust scorer with faceted filter UI" +git remote add origin https://git.opensourcesolarpunk.com/Circuit-Forge/snipe.git +git push -u origin main +``` diff --git a/docs/superpowers/specs/2026-03-25-snipe-circuitforge-core-design.md b/docs/superpowers/specs/2026-03-25-snipe-circuitforge-core-design.md new file mode 100644 index 0000000..31a26d5 --- /dev/null +++ b/docs/superpowers/specs/2026-03-25-snipe-circuitforge-core-design.md @@ -0,0 +1,322 @@ +# Snipe MVP + circuitforge-core Extraction — Design Spec +**Date:** 2026-03-25 +**Status:** Approved +**Products:** `snipe` (new), `circuitforge-core` (new), `peregrine` (updated) + +--- + +## 1. Overview + +This spec covers two parallel workstreams: + +1. **circuitforge-core extraction** — hoist the shared scaffold from Peregrine into a private, locally-installable Python package. Peregrine becomes the first downstream consumer. All future CF products depend on it. +2. **Snipe MVP** — eBay listing monitor + seller trust scorer, built on top of circuitforge-core. Solves the immediate problem: filtering scam accounts when searching for used GPU listings on eBay. + +Design principle: *cry once*. Pay the extraction cost now while there are only two products; every product after this benefits for free. + +--- + +## 2. 
circuitforge-core + +### 2.1 Repository + +- **Repo:** `git.opensourcesolarpunk.com/Circuit-Forge/circuitforge-core` (private) +- **Local path:** `/Library/Development/CircuitForge/circuitforge-core/` +- **Install method:** `pip install -e ../circuitforge-core` (editable local package; graduate to Forgejo Packages private PyPI at product #3) +- **License:** BSL 1.1 for AI features, MIT for pipeline/utility layers + +### 2.2 Package Structure + +``` +circuitforge-core/ + circuitforge_core/ + pipeline/ # SQLite staging DB, status machine, background task runner + llm/ # LLM router: fallback chain, BYOK support, vision-aware routing + vision/ # Vision model wrapper — moondream2 (local) + Claude vision (cloud) [NET-NEW] + wizard/ # First-run onboarding framework, tier gating, crash recovery + tiers/ # Tier system (Free/Paid/Premium/Ultra) + Heimdall license client + db/ # SQLite base class, migration runner + config/ # Settings loader, env validation, secrets management + pyproject.toml + README.md +``` + +### 2.3 Extraction from Peregrine + +The following Peregrine modules are **extracted** (migrated from Peregrine, not net-new): + +| Peregrine source | → Core module | Notes | +|---|---|---| +| `app/wizard/` | `circuitforge_core/wizard/` | | +| `scripts/llm_router.py` | `circuitforge_core/llm/router.py` | Path is `scripts/`, not `app/` | +| `app/wizard/tiers.py` | `circuitforge_core/tiers/` | | +| SQLite pipeline base | `circuitforge_core/pipeline/` | | + +**`circuitforge_core/vision/`** is **net-new** — no vision module exists in Peregrine to extract. It is built fresh in core. + +**Peregrine dependency management:** Peregrine uses `requirements.txt`, not `pyproject.toml`. The migration adds `circuitforge-core` to `requirements.txt` as a local path entry: `-e ../circuitforge-core`. Snipe is greenfield and uses `pyproject.toml` from the start. There is no requirement to migrate Peregrine to `pyproject.toml` as part of this work. 
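The `db/` module listed in §2.2 bundles a SQLite base class and a migration runner. A minimal self-contained sketch of what that runner might look like — the function name, `schema_version` bookkeeping table, and `dict[int, str]` migration format are assumptions; the real API is defined during extraction:

```python
import sqlite3


def run_migrations(conn: sqlite3.Connection, migrations: dict[int, str]) -> list[int]:
    """Apply numbered SQL migrations not yet recorded, in ascending order.

    Hypothetical sketch of circuitforge_core/db — illustrates the pattern only.
    Returns the list of versions applied on this call.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    newly_applied = []
    for version in sorted(migrations):
        if version not in applied:
            conn.executescript(migrations[version])  # run the migration SQL
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
            newly_applied.append(version)
    conn.commit()
    return newly_applied
```

Running it twice is a no-op for already-applied versions — the idempotence that downstream `migrations/` directories (Snipe's included) would rely on at startup.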
+ +### 2.4 Docker Build Strategy + +Docker build contexts cannot reference paths outside the context directory (`COPY ../` is forbidden). Both Peregrine and Snipe resolve this by setting the compose build context to the parent directory: + +```yaml +# compose.yml (snipe or peregrine) +services: + app: + build: + context: .. # /Library/Development/CircuitForge/ + dockerfile: snipe/Dockerfile +``` + +```dockerfile +# snipe/Dockerfile +COPY circuitforge-core/ ./circuitforge-core/ +RUN pip install -e ./circuitforge-core +COPY snipe/ ./snipe/ +RUN pip install -e ./snipe +``` + +In development, `compose.override.yml` bind-mounts `../circuitforge-core` so local edits to core are immediately live without rebuild. + +--- + +## 3. Snipe MVP + +### 3.1 Scope + +**In (v0.1 MVP):** +- eBay listing search (Browse API + Seller API) +- Metadata trust scoring (free tier) +- Perceptual hash duplicate photo detection within a search result set (free tier) +- Faceted filter UI with dynamic, data-driven filter options and sliders +- On-demand search only +- `SavedSearch` DB schema scaffolded but monitoring not wired up + +**Out (future versions):** +- Background polling / saved search alerts (v0.2) +- Photo analysis via vision model — real vs marketing shot, EM bag detection (v0.2, paid) +- Serial number consistency check (v0.2, paid) +- AI-generated image detection (v0.3, paid) +- Reverse image search (v0.4, paid) +- Additional platforms: HiBid, CT Bids, AuctionZip (v0.3+) +- Bid scheduling / snipe execution (v0.4+) + +### 3.2 Repository + +- **Repo:** `git.opensourcesolarpunk.com/Circuit-Forge/snipe` (public discovery layer) +- **Local path:** `/Library/Development/CircuitForge/snipe/` +- **License:** MIT (discovery/pipeline), BSL 1.1 (AI features) +- **Product code:** `CFG-SNPE` +- **Port:** 8506 + +### 3.3 Tech Stack + +Follows Peregrine as the reference implementation: + +- **UI:** Streamlit (Python) +- **DB:** SQLite via `circuitforge_core.db` +- **LLM/Vision:** 
`circuitforge_core.llm` / `circuitforge_core.vision` +- **Tiers:** `circuitforge_core.tiers` +- **Containerisation:** Docker + `compose.yml`, managed via `manage.sh` +- **Python env:** `conda run -n job-seeker` (shared CF env) + +### 3.4 Application Structure + +``` +snipe/ + app/ + platforms/ + __init__.py # PlatformAdapter abstract base class + ebay/ + adapter.py # eBay Browse API + Seller API client + auth.py # OAuth2 client credentials token manager + normaliser.py # Raw API response → Listing / Seller schema + trust/ + __init__.py # TrustScorer orchestrator + metadata.py # Account age, feedback, price vs market, category history + photo.py # Perceptual hash dedup (free); vision analysis (paid, v0.2+) + aggregator.py # Weighted composite score + red flag extraction + ui/ + Search.py # Main search + results page + components/ + filters.py # Dynamic faceted filter sidebar + listing_row.py # Listing card with trust badge + red flags + error state + db/ + models.py # Listing, Seller, Search, TrustScore, SavedSearch schemas + migrations/ + wizard/ # First-run onboarding (thin wrapper on core wizard) + snipe/ # Bid engine placeholder (v0.4) + manage.sh + compose.yml + compose.override.yml + Dockerfile + pyproject.toml +``` + +### 3.5 eBay API Credentials + +eBay Browse API and Seller API require OAuth 2.0 app-level tokens (client credentials flow — no user auth needed, but a registered eBay developer account and app credentials are required). + +**Token lifecycle:** +- App token fetched at startup and cached in memory with expiry +- `auth.py` handles refresh automatically on expiry (tokens last 2 hours) +- On token fetch failure: search fails with a user-visible error; no silent fallback + +**Credentials storage:** `.env` file (gitignored), never hardcoded. +``` +EBAY_CLIENT_ID=... +EBAY_CLIENT_SECRET=... +EBAY_ENV=production # or sandbox +``` + +**Rate limits:** eBay Browse API — 5,000 calls/day (sandbox), higher on production. 
Completed sales comps results are cached in SQLite with a 6-hour TTL to avoid redundant calls and stay within limits. Cache miss triggers a fresh fetch; fetch failure degrades gracefully (price vs market signal skipped, score noted as partial).

**API split:** `get_seller()` uses the eBay Seller API (different endpoint, same app token). Rate limits are tracked separately. The `PlatformAdapter` interface does not expose this distinction; it is an internal concern of the eBay adapter.

### 3.6 Data Model

**`Listing`**
```
id, platform, platform_listing_id, title, price, currency,
condition, seller_platform_id, url, photo_urls (JSON), listing_age_days,
fetched_at, trust_score_id
```

**`Seller`**
```
id, platform, platform_seller_id, username,
account_age_days, feedback_count, feedback_ratio,
category_history_json, fetched_at
```

**`TrustScore`**
```
id, listing_id, composite_score,
account_age_score, feedback_count_score, feedback_ratio_score,
price_vs_market_score, category_history_score,
photo_hash_duplicate (bool),
photo_analysis_json (paid, nullable),
red_flags_json, scored_at, score_is_partial (bool)
```

**`MarketComp`** *(price comps cache)*
```
id, platform, query_hash, median_price, sample_count, fetched_at, expires_at
```

**`SavedSearch`** *(schema scaffolded in v0.1; monitoring not wired until v0.2)*
```
id, name, query, platform, filters_json, created_at, last_run_at
```

**`PhotoHash`** *(perceptual hash store for cross-search dedup, v0.2+)*
```
id, listing_id, photo_url, phash, first_seen_at
```

### 3.7 Platform Adapter Interface

```python
class PlatformAdapter:
    def search(self, query: str, filters: SearchFilters) -> list[Listing]: ...
    def get_seller(self, seller_id: str) -> Seller | None: ...
    def get_completed_sales(self, query: str) -> list[Listing]: ...
```

`get_seller` returns `None` when seller data cannot be fetched; the UI shows the "Seller data unavailable" state and the trust score is marked partial (sections 3.8, 3.10). Adding HiBid or CT Bids later = new adapter, zero changes to trust scorer or UI.
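The 6-hour comps cache described in section 3.5 maps directly onto the `MarketComp` table above. A minimal sketch of the lookup/store pair, assuming that schema — the function names are illustrative, not the adapter's real API:

```python
import hashlib
import sqlite3
import time

TTL_SECONDS = 6 * 3600  # 6-hour TTL per section 3.5


def _query_hash(query: str) -> str:
    # Normalise so "RTX 4090 " and "rtx 4090" share a cache entry.
    return hashlib.sha256(query.strip().lower().encode()).hexdigest()


def get_cached_comps(conn: sqlite3.Connection, platform: str, query: str):
    """Return (median_price, sample_count) on a fresh cache hit, else None."""
    return conn.execute(
        "SELECT median_price, sample_count FROM market_comp "
        "WHERE platform = ? AND query_hash = ? AND expires_at > ?",
        (platform, _query_hash(query), time.time()),
    ).fetchone()


def store_comps(conn, platform, query, median_price, sample_count):
    now = time.time()
    conn.execute(
        "INSERT INTO market_comp "
        "(platform, query_hash, median_price, sample_count, fetched_at, expires_at) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        (platform, _query_hash(query), median_price, sample_count, now, now + TTL_SECONDS),
    )
    conn.commit()
```

On a miss (`None`) the caller fetches fresh comps; if that fetch also fails, the price-vs-market signal is skipped and the score is marked partial, as specified in 3.5.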
+ +### 3.8 Trust Scorer + +#### Metadata Signals (Free) + +Five signals, each scored 0–20, equal weight. Composite = sum (0–100). + +| Signal | Source | Red flag threshold | Score 0 condition | +|---|---|---|---| +| Account age | eBay Seller API | < 30 days | < 7 days (also hard-filter) | +| Feedback count | eBay Seller API | < 10 | < 3 | +| Feedback ratio | eBay Seller API | < 95% | < 80% with count > 20 | +| Price vs market | Completed sales comps | > 30% below median | > 50% below median | +| Category history | Seller past sales | No prior electronics sales | No prior sales at all | + +**Hard filters** (auto-hide regardless of composite score): +- Account age < 7 days +- Feedback ratio < 80% with feedback count > 20 + +**Partial scores:** If any signal's data source is unavailable (API failure, rate limit), that signal contributes 0 and `score_is_partial = True` is set on the `TrustScore` record. The UI surfaces a "⚠ Partial score" indicator on affected listings. + +#### Photo Signals — Anti-Gotcha Layer + +| Signal | Tier | Version | Method | +|---|---|---|---| +| Perceptual hash dedup within result set | **Free** | v0.1 MVP | Compare phashes across all listings in the current search response; flag duplicates | +| Real photo vs marketing shot | **Paid / Local vision** | v0.2 | Vision model classification | +| Open box + EM antistatic bag (proof of possession) | **Paid / Local vision** | v0.2 | Vision model classification | +| Serial number consistency across photos | **Paid / Local vision** | v0.2 | Vision model OCR + comparison | +| AI-generated image detection | **Paid** | v0.3 | Classifier model | +| Reverse image search | **Paid** | v0.4 | Google Lens / TinEye API | + +**v0.1 dedup scope:** Perceptual hash comparison is within the current search result set only (not across historical searches). Cross-session dedup uses the `PhotoHash` table and is a v0.2 feature. 
Photos are not downloaded to disk in v0.1 — hashes are computed from the image bytes in memory during the search request. + +### 3.9 Tier Gating + +Photo analysis features use `LOCAL_VISION_UNLOCKABLE` (analogous to `BYOK_UNLOCKABLE` in Peregrine's `tiers.py`) — they unlock for free-tier users who have a local vision model (moondream2) configured. This is distinct from BYOK (text LLM key), which does not unlock vision features. + +| Feature | Free | Paid | Local vision unlock | +|---|---|---|---| +| Metadata trust scoring | ✓ | ✓ | — | +| Perceptual hash dedup (within result set) | ✓ | ✓ | — | +| Photo analysis (real/marketing/EM bag) | — | ✓ | ✓ | +| Serial number consistency | — | ✓ | ✓ | +| AI generation detection | — | ✓ | — | +| Reverse image search | — | ✓ | — | +| Saved searches + background monitoring | — | ✓ | — | + +Locked features are shown (disabled) in the filter sidebar so free users see what's available. Clicking a locked filter shows a tier upgrade prompt. + +### 3.10 UI — Results Page + +**Search bar:** keywords, max price, condition selector, search button. Sort: trust score (default), price ↑/↓, listing age. + +**Filter sidebar** — all options and counts generated dynamically from the result set. 
Options with 0 results are hidden (not greyed): +- Trust score — range slider (min/max from results); colour-band summary (safe/review/skip + counts) +- Price — min/max text inputs + market avg/median annotation +- Seller account age — min slider +- Feedback count — min slider +- Positive feedback % — min slider +- Condition — checkboxes (options from data: New, Open Box, Used, For Parts) +- Photo signals — checkboxes: Real photo, EM bag visible, Open box, No AI-generated (locked, paid) +- Hide if flagged — checkboxes: New account (<30d), Marketing photo, >30% below market, Duplicate photo +- Shipping — Free shipping, Local pickup +- Reset filters button + +**Listing row (happy path):** thumbnail · title · seller summary (username, feedback count, ratio, tenure) · red flag badges · trust score badge (colour-coded: green 80+, amber 50–79, red <50) · `score_is_partial` indicator if applicable · price · "Open eBay ↗" link. Left border colour matches score band. + +**Listing row (error states):** +- Seller data unavailable: seller summary shows "Seller data unavailable" in muted text; affected signals show "–" and partial score indicator is set +- Photo URL 404: thumbnail shows placeholder icon; hash dedup skipped for that photo +- Trust scoring failed entirely: listing shown with score "?" badge in neutral grey; error logged; "Could not score this listing" tooltip + +**Hidden results:** count shown at bottom ("N results hidden by filters · show anyway"). Clicking reveals them in-place at reduced opacity. + +--- + +## 4. Build Order + +1. **circuitforge-core** — scaffold repo, extract wizard/llm/tiers/pipeline from Peregrine, build vision module net-new, update Peregrine `requirements.txt` +2. **Snipe scaffold** — repo init, Dockerfile, compose.yml (parent context), manage.sh, DB migrations, wizard first-run, `.env` template +3. **eBay adapter** — OAuth2 token manager, Browse API search, Seller API, completed sales comps with cache +4. 
**Metadata trust scorer** — all five signals, aggregator, hard filters, partial score handling +5. **Perceptual hash dedup** — in-memory within-result-set comparison +6. **Results UI** — search page, listing rows (happy + error states), dynamic filter sidebar +7. **Tier gating** — lock photo signals, `LOCAL_VISION_UNLOCKABLE` gate, upsell prompts in UI + +--- + +## 5. Documentation Locations + +- Product spec: `snipe/docs/superpowers/specs/2026-03-25-snipe-circuitforge-core-design.md` *(this file)* +- Internal copy: `circuitforge-plans/snipe/2026-03-25-snipe-circuitforge-core-design.md` +- Roadmap: `Circuit-Forge/roadmap` issues #14 (snipe) and #21 (circuitforge-core) +- Org-level context: `/Library/Development/CircuitForge/CLAUDE.md`