# Snipe MVP Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Build the Snipe MVP — an eBay listing monitor with seller trust scoring and a faceted-filter Streamlit UI — on top of `circuitforge-core`.

**Architecture:** Streamlit app following Peregrine's patterns. eBay Browse + Seller APIs behind a `PlatformAdapter` interface. Trust scorer runs metadata signals (account age, feedback, price vs market, category history) and perceptual hash dedup within the result set. Dynamic filter sidebar generated from live result data. Tier gating uses `circuitforge_core.tiers` with `LOCAL_VISION_UNLOCKABLE` for future photo analysis.

**Prerequisite:** The `circuitforge-core` plan must be complete and `circuitforge-core` installed in the `job-seeker` conda env before starting this plan.

**Tech Stack:** Python 3.11+, Streamlit, SQLite, eBay Browse API, eBay Seller API, imagehash (perceptual hashing), Pillow, pytest, Docker

---
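The trust scorer combines five per-signal scores (each 0–20) into a 0–100 composite. A minimal sketch of that aggregation, assuming equal weights — the real `app/trust/aggregator.py` (Task TBD) may weight signals differently:

```python
# Sketch only: equal weights assumed; aggregator.py defines the real logic.
def composite_score(signals: dict[str, int]) -> int:
    """Clamp each signal to 0-20, then sum into a 0-100 composite."""
    return sum(max(0, min(20, v)) for v in signals.values())

score = composite_score({
    "account_age": 18,
    "feedback_count": 15,
    "feedback_ratio": 20,
    "price_vs_market": 10,
    "category_history": 12,
})  # 75
```

Clamping keeps a single runaway signal from pushing the composite out of range even if an upstream scorer misbehaves.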
## File Map

| File | Responsibility |
|---|---|
| `app/platforms/__init__.py` | `PlatformAdapter` abstract base class + `SearchFilters` dataclass |
| `app/platforms/ebay/__init__.py` | Package init |
| `app/platforms/ebay/auth.py` | OAuth2 client credentials token manager (fetch, cache, auto-refresh) |
| `app/platforms/ebay/adapter.py` | `EbayAdapter(PlatformAdapter)` — `search()`, `get_seller()`, `get_completed_sales()` |
| `app/platforms/ebay/normaliser.py` | Raw eBay API JSON → `Listing` / `Seller` dataclasses |
| `app/trust/__init__.py` | `TrustScorer` orchestrator — calls metadata + photo scorers, returns `TrustScore` |
| `app/trust/metadata.py` | Five metadata signals → per-signal 0–20 scores |
| `app/trust/photo.py` | Perceptual hash dedup within result set (free); vision analysis stub (paid) |
| `app/trust/aggregator.py` | Weighted sum → composite 0–100, red flag extraction, hard filter logic |
| `app/db/models.py` | `Listing`, `Seller`, `TrustScore`, `MarketComp`, `SavedSearch` dataclasses + SQLite schema strings |
| `app/db/migrations/001_init.sql` | Initial schema: all tables |
| `app/db/store.py` | `Store` — thin SQLite read/write layer for all models |
| `app/ui/Search.py` | Streamlit main page: search bar, results, listing rows |
| `app/ui/components/filters.py` | `render_filter_sidebar(results)` → `FilterState` |
| `app/ui/components/listing_row.py` | `render_listing_row(listing, trust_score)` |
| `app/tiers.py` | Snipe-specific `FEATURES` dict + `LOCAL_VISION_UNLOCKABLE`; delegates `can_use` to core |
| `app/app.py` | Streamlit entrypoint — page config, routing |
| `app/wizard/setup.py` | First-run: collect eBay credentials, verify connection, write `.env` |
| `tests/platforms/test_ebay_auth.py` | Token fetch, cache, expiry, refresh |
| `tests/platforms/test_ebay_normaliser.py` | API JSON → dataclass conversion |
| `tests/trust/test_metadata.py` | All five metadata signal scorers |
| `tests/trust/test_photo.py` | Perceptual hash dedup |
| `tests/trust/test_aggregator.py` | Composite score, hard filters, partial score flag |
| `tests/db/test_store.py` | Store read/write round-trips |
| `tests/ui/test_filters.py` | Dynamic filter generation from result set |
| `Dockerfile` | Parent-context build (`context: ..`) |
| `compose.yml` | App service, port 8506 |
| `compose.override.yml` | Dev: bind-mount circuitforge-core, hot reload |
| `manage.sh` | start/stop/restart/status/logs/open |
| `pyproject.toml` | Package deps including `circuitforge-core` |
| `.env.example` | Template with `EBAY_CLIENT_ID`, `EBAY_CLIENT_SECRET`, `EBAY_ENV` |

---
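The within-result-set dedup in `app/trust/photo.py` reduces to Hamming distance over perceptual-hash hex strings (the format `imagehash.phash` stringifies to). A sketch on precomputed hashes, where the 5-bit threshold and the `find_duplicates` helper are assumptions for illustration:

```python
# Sketch: dedup over precomputed pHash hex strings. Threshold is assumed.
def hamming(h1: str, h2: str) -> int:
    """Bit distance between two equal-length hex hash strings."""
    return bin(int(h1, 16) ^ int(h2, 16)).count("1")

def find_duplicates(hashes: dict[str, str], threshold: int = 5) -> set[tuple[str, str]]:
    """Return listing-id pairs whose photo hashes are near-identical."""
    ids = sorted(hashes)
    return {
        (a, b)
        for i, a in enumerate(ids)
        for b in ids[i + 1:]
        if hamming(hashes[a], hashes[b]) <= threshold
    }

dupes = find_duplicates({
    "L1": "9f172786e71f1e00",   # near-identical to L2 (1 bit apart)
    "L2": "9f172786e71f1e01",
    "L3": "0000000000000000",   # unrelated image
})
```

A low threshold catches re-encoded or lightly cropped copies of the same photo, which is the scam signal this check targets.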
## Task 1: Scaffold repo

**Files:** `pyproject.toml`, `manage.sh`, `compose.yml`, `compose.override.yml`, `Dockerfile`, `.env.example`, `.gitignore`, `app/__init__.py`

- [ ] **Step 0: Initialize git repo**

```bash
cd /Library/Development/CircuitForge/snipe
git init
```
- [ ] **Step 1: Write pyproject.toml**

```toml
# /Library/Development/CircuitForge/snipe/pyproject.toml
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

[project]
name = "snipe"
version = "0.1.0"
description = "Auction listing monitor and trust scorer"
requires-python = ">=3.11"
dependencies = [
    "circuitforge-core",
    "streamlit>=1.32",
    "requests>=2.31",
    "imagehash>=4.3",
    "Pillow>=10.0",
    "python-dotenv>=1.0",
]

[tool.setuptools.packages.find]
where = ["."]
include = ["app*"]

[tool.pytest.ini_options]
testpaths = ["tests"]
```
- [ ] **Step 2: Write .env.example**

```bash
# /Library/Development/CircuitForge/snipe/.env.example
EBAY_CLIENT_ID=your-client-id-here
EBAY_CLIENT_SECRET=your-client-secret-here
EBAY_ENV=production # or: sandbox
SNIPE_DB=data/snipe.db
```
- [ ] **Step 3: Write Dockerfile**

```dockerfile
# /Library/Development/CircuitForge/snipe/Dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install circuitforge-core from sibling directory (compose sets context: ..)
COPY circuitforge-core/ ./circuitforge-core/
RUN pip install --no-cache-dir -e ./circuitforge-core

# Install snipe
COPY snipe/ ./snipe/
WORKDIR /app/snipe
RUN pip install --no-cache-dir -e .

EXPOSE 8506
CMD ["streamlit", "run", "app/app.py", "--server.port=8506", "--server.address=0.0.0.0"]
```
- [ ] **Step 4: Write compose.yml**

```yaml
# /Library/Development/CircuitForge/snipe/compose.yml
services:
  snipe:
    build:
      context: ..
      dockerfile: snipe/Dockerfile
    ports:
      - "8506:8506"
    env_file: .env
    volumes:
      - ./data:/app/snipe/data
```
- [ ] **Step 5: Write compose.override.yml**

```yaml
# /Library/Development/CircuitForge/snipe/compose.override.yml
services:
  snipe:
    volumes:
      - ../circuitforge-core:/app/circuitforge-core
      - ./app:/app/snipe/app
      - ./data:/app/snipe/data
    environment:
      - STREAMLIT_SERVER_RUN_ON_SAVE=true
```
- [ ] **Step 6: Write .gitignore**

```
__pycache__/
*.pyc
*.pyo
.env
*.egg-info/
dist/
.pytest_cache/
data/
.superpowers/
```
- [ ] **Step 6b: Write manage.sh**

```bash
#!/usr/bin/env bash
# /Library/Development/CircuitForge/snipe/manage.sh
set -euo pipefail

SERVICE=snipe
PORT=8506
COMPOSE_FILE="compose.yml"

usage() {
  echo "Usage: $0 {start|stop|restart|status|logs|open|update}"
  exit 1
}

cmd="${1:-help}"
shift || true

case "$cmd" in
  start)
    docker compose -f "$COMPOSE_FILE" up -d
    echo "$SERVICE started on http://localhost:$PORT"
    ;;
  stop)
    docker compose -f "$COMPOSE_FILE" down
    ;;
  restart)
    docker compose -f "$COMPOSE_FILE" down
    docker compose -f "$COMPOSE_FILE" up -d
    echo "$SERVICE restarted on http://localhost:$PORT"
    ;;
  status)
    docker compose -f "$COMPOSE_FILE" ps
    ;;
  logs)
    docker compose -f "$COMPOSE_FILE" logs -f "${@:-$SERVICE}"
    ;;
  open)
    xdg-open "http://localhost:$PORT" 2>/dev/null || open "http://localhost:$PORT"
    ;;
  update)
    docker compose -f "$COMPOSE_FILE" pull
    docker compose -f "$COMPOSE_FILE" up -d --build
    ;;
  *)
    usage
    ;;
esac
```

```bash
chmod +x /Library/Development/CircuitForge/snipe/manage.sh
```
- [ ] **Step 7: Create package __init__.py files**

```bash
mkdir -p /Library/Development/CircuitForge/snipe/app
touch /Library/Development/CircuitForge/snipe/app/__init__.py
mkdir -p /Library/Development/CircuitForge/snipe/tests
touch /Library/Development/CircuitForge/snipe/tests/__init__.py
```
- [ ] **Step 8: Install and verify**

```bash
cd /Library/Development/CircuitForge/snipe
conda run -n job-seeker pip install -e .
conda run -n job-seeker python -c "import app; print('ok')"
```

Expected: `ok`

- [ ] **Step 9: Commit**

```bash
git add pyproject.toml Dockerfile compose.yml compose.override.yml manage.sh .env.example .gitignore app/__init__.py tests/__init__.py
git commit -m "feat: scaffold snipe repo"
```
---

## Task 2: Data models and DB

**Files:** `app/db/__init__.py`, `app/db/models.py`, `app/db/migrations/001_init.sql`, `app/db/store.py`, `tests/db/__init__.py`, `tests/db/test_store.py`

- [ ] **Step 0: Create package directories**

```bash
mkdir -p /Library/Development/CircuitForge/snipe/app/db/migrations
touch /Library/Development/CircuitForge/snipe/app/db/__init__.py
mkdir -p /Library/Development/CircuitForge/snipe/tests/db
touch /Library/Development/CircuitForge/snipe/tests/db/__init__.py
```
- [ ] **Step 1: Write failing tests**

```python
# tests/db/test_store.py
import pytest
from pathlib import Path
from app.db.store import Store
from app.db.models import Listing, Seller, TrustScore, MarketComp


@pytest.fixture
def store(tmp_path):
    return Store(tmp_path / "test.db")


def test_store_creates_tables(store):
    # If no exception on init, tables exist
    pass


def test_save_and_get_seller(store):
    seller = Seller(
        platform="ebay",
        platform_seller_id="user123",
        username="techseller",
        account_age_days=730,
        feedback_count=450,
        feedback_ratio=0.991,
        category_history_json="{}",
    )
    store.save_seller(seller)
    result = store.get_seller("ebay", "user123")
    assert result is not None
    assert result.username == "techseller"
    assert result.feedback_count == 450


def test_save_and_get_listing(store):
    listing = Listing(
        platform="ebay",
        platform_listing_id="ebay-123",
        title="RTX 4090 FE",
        price=950.00,
        currency="USD",
        condition="used",
        seller_platform_id="user123",
        url="https://ebay.com/itm/123",
        photo_urls=["https://i.ebayimg.com/1.jpg"],
        listing_age_days=3,
    )
    store.save_listing(listing)
    result = store.get_listing("ebay", "ebay-123")
    assert result is not None
    assert result.title == "RTX 4090 FE"
    assert result.price == 950.00


def test_save_and_get_market_comp(store):
    comp = MarketComp(
        platform="ebay",
        query_hash="abc123",
        median_price=1050.0,
        sample_count=12,
        expires_at="2026-03-26T00:00:00",
    )
    store.save_market_comp(comp)
    result = store.get_market_comp("ebay", "abc123")
    assert result is not None
    assert result.median_price == 1050.0


def test_get_market_comp_returns_none_for_expired(store):
    comp = MarketComp(
        platform="ebay",
        query_hash="expired",
        median_price=900.0,
        sample_count=5,
        expires_at="2020-01-01T00:00:00",  # past
    )
    store.save_market_comp(comp)
    result = store.get_market_comp("ebay", "expired")
    assert result is None
```

- [ ] **Step 2: Run to verify failure**

```bash
conda run -n job-seeker pytest tests/db/test_store.py -v
```

Expected: ImportError
- [ ] **Step 3: Write app/db/models.py**

```python
# app/db/models.py
"""Dataclasses for all Snipe domain objects."""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Seller:
    platform: str
    platform_seller_id: str
    username: str
    account_age_days: int
    feedback_count: int
    feedback_ratio: float  # 0.0–1.0
    category_history_json: str  # JSON blob of past category sales
    id: Optional[int] = None
    fetched_at: Optional[str] = None


@dataclass
class Listing:
    platform: str
    platform_listing_id: str
    title: str
    price: float
    currency: str
    condition: str
    seller_platform_id: str
    url: str
    photo_urls: list[str] = field(default_factory=list)
    listing_age_days: int = 0
    id: Optional[int] = None
    fetched_at: Optional[str] = None
    trust_score_id: Optional[int] = None


@dataclass
class TrustScore:
    listing_id: int
    composite_score: int  # 0–100
    account_age_score: int  # 0–20
    feedback_count_score: int  # 0–20
    feedback_ratio_score: int  # 0–20
    price_vs_market_score: int  # 0–20
    category_history_score: int  # 0–20
    photo_hash_duplicate: bool = False
    photo_analysis_json: Optional[str] = None
    red_flags_json: str = "[]"
    score_is_partial: bool = False
    id: Optional[int] = None
    scored_at: Optional[str] = None


@dataclass
class MarketComp:
    platform: str
    query_hash: str
    median_price: float
    sample_count: int
    expires_at: str  # ISO8601 — checked against current time
    id: Optional[int] = None
    fetched_at: Optional[str] = None


@dataclass
class SavedSearch:
    """Schema scaffolded in v0.1; background monitoring wired in v0.2."""
    name: str
    query: str
    platform: str
    filters_json: str = "{}"
    id: Optional[int] = None
    created_at: Optional[str] = None
    last_run_at: Optional[str] = None


@dataclass
class PhotoHash:
    """Perceptual hash store for cross-search dedup (v0.2+). Schema scaffolded in v0.1."""
    listing_id: int
    photo_url: str
    phash: str  # hex string from imagehash
    id: Optional[int] = None
    first_seen_at: Optional[str] = None
```
- [ ] **Step 4: Write app/db/migrations/001_init.sql**

```sql
-- app/db/migrations/001_init.sql
CREATE TABLE IF NOT EXISTS sellers (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform TEXT NOT NULL,
    platform_seller_id TEXT NOT NULL,
    username TEXT NOT NULL,
    account_age_days INTEGER NOT NULL,
    feedback_count INTEGER NOT NULL,
    feedback_ratio REAL NOT NULL,
    category_history_json TEXT NOT NULL DEFAULT '{}',
    fetched_at TEXT DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(platform, platform_seller_id)
);

CREATE TABLE IF NOT EXISTS listings (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform TEXT NOT NULL,
    platform_listing_id TEXT NOT NULL,
    title TEXT NOT NULL,
    price REAL NOT NULL,
    currency TEXT NOT NULL DEFAULT 'USD',
    condition TEXT,
    seller_platform_id TEXT,
    url TEXT,
    photo_urls TEXT NOT NULL DEFAULT '[]',
    listing_age_days INTEGER DEFAULT 0,
    fetched_at TEXT DEFAULT CURRENT_TIMESTAMP,
    trust_score_id INTEGER REFERENCES trust_scores(id),
    UNIQUE(platform, platform_listing_id)
);

CREATE TABLE IF NOT EXISTS trust_scores (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    listing_id INTEGER NOT NULL REFERENCES listings(id),
    composite_score INTEGER NOT NULL,
    account_age_score INTEGER NOT NULL DEFAULT 0,
    feedback_count_score INTEGER NOT NULL DEFAULT 0,
    feedback_ratio_score INTEGER NOT NULL DEFAULT 0,
    price_vs_market_score INTEGER NOT NULL DEFAULT 0,
    category_history_score INTEGER NOT NULL DEFAULT 0,
    photo_hash_duplicate INTEGER NOT NULL DEFAULT 0,
    photo_analysis_json TEXT,
    red_flags_json TEXT NOT NULL DEFAULT '[]',
    score_is_partial INTEGER NOT NULL DEFAULT 0,
    scored_at TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS market_comps (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform TEXT NOT NULL,
    query_hash TEXT NOT NULL,
    median_price REAL NOT NULL,
    sample_count INTEGER NOT NULL,
    fetched_at TEXT DEFAULT CURRENT_TIMESTAMP,
    expires_at TEXT NOT NULL,
    UNIQUE(platform, query_hash)
);

CREATE TABLE IF NOT EXISTS saved_searches (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    query TEXT NOT NULL,
    platform TEXT NOT NULL DEFAULT 'ebay',
    filters_json TEXT NOT NULL DEFAULT '{}',
    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
    last_run_at TEXT
);

-- PhotoHash: perceptual hash store for cross-search dedup (v0.2+). Schema present in v0.1.
CREATE TABLE IF NOT EXISTS photo_hashes (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    listing_id INTEGER NOT NULL REFERENCES listings(id),
    photo_url TEXT NOT NULL,
    phash TEXT NOT NULL,
    first_seen_at TEXT DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(listing_id, photo_url)
);
```
- [ ] **Step 5: Write app/db/store.py**

```python
# app/db/store.py
"""Thin SQLite read/write layer for all Snipe models."""
from __future__ import annotations
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

from circuitforge_core.db import get_connection, run_migrations

from .models import Listing, Seller, TrustScore, MarketComp

MIGRATIONS_DIR = Path(__file__).parent / "migrations"


class Store:
    def __init__(self, db_path: Path):
        self._conn = get_connection(db_path)
        run_migrations(self._conn, MIGRATIONS_DIR)

    # --- Seller ---

    def save_seller(self, seller: Seller) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO sellers "
            "(platform, platform_seller_id, username, account_age_days, "
            "feedback_count, feedback_ratio, category_history_json) "
            "VALUES (?,?,?,?,?,?,?)",
            (seller.platform, seller.platform_seller_id, seller.username,
             seller.account_age_days, seller.feedback_count, seller.feedback_ratio,
             seller.category_history_json),
        )
        self._conn.commit()

    def get_seller(self, platform: str, platform_seller_id: str) -> Optional[Seller]:
        row = self._conn.execute(
            "SELECT platform, platform_seller_id, username, account_age_days, "
            "feedback_count, feedback_ratio, category_history_json, id, fetched_at "
            "FROM sellers WHERE platform=? AND platform_seller_id=?",
            (platform, platform_seller_id),
        ).fetchone()
        if not row:
            return None
        return Seller(*row[:7], id=row[7], fetched_at=row[8])

    # --- Listing ---

    def save_listing(self, listing: Listing) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO listings "
            "(platform, platform_listing_id, title, price, currency, condition, "
            "seller_platform_id, url, photo_urls, listing_age_days) "
            "VALUES (?,?,?,?,?,?,?,?,?,?)",
            (listing.platform, listing.platform_listing_id, listing.title,
             listing.price, listing.currency, listing.condition,
             listing.seller_platform_id, listing.url,
             json.dumps(listing.photo_urls), listing.listing_age_days),
        )
        self._conn.commit()

    def get_listing(self, platform: str, platform_listing_id: str) -> Optional[Listing]:
        row = self._conn.execute(
            "SELECT platform, platform_listing_id, title, price, currency, condition, "
            "seller_platform_id, url, photo_urls, listing_age_days, id, fetched_at "
            "FROM listings WHERE platform=? AND platform_listing_id=?",
            (platform, platform_listing_id),
        ).fetchone()
        if not row:
            return None
        return Listing(
            *row[:8],
            photo_urls=json.loads(row[8]),
            listing_age_days=row[9],
            id=row[10],
            fetched_at=row[11],
        )

    # --- MarketComp ---

    def save_market_comp(self, comp: MarketComp) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO market_comps "
            "(platform, query_hash, median_price, sample_count, expires_at) "
            "VALUES (?,?,?,?,?)",
            (comp.platform, comp.query_hash, comp.median_price,
             comp.sample_count, comp.expires_at),
        )
        self._conn.commit()

    def get_market_comp(self, platform: str, query_hash: str) -> Optional[MarketComp]:
        row = self._conn.execute(
            "SELECT platform, query_hash, median_price, sample_count, expires_at, id, fetched_at "
            "FROM market_comps WHERE platform=? AND query_hash=? AND expires_at > ?",
            (platform, query_hash, datetime.now(timezone.utc).isoformat()),
        ).fetchone()
        if not row:
            return None
        return MarketComp(*row[:5], id=row[5], fetched_at=row[6])
```
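The `expires_at > ?` comparison in `get_market_comp` runs on TEXT columns, which works because ISO-8601 timestamps in a shared layout sort lexicographically in chronological order. A quick check of the property the expiry test relies on:

```python
# Lexicographic order of ISO-8601 strings matches chronological order here.
# Note: the bound parameter carries a "+00:00" suffix while stored values may
# not; the suffix only matters when timestamps agree to the second.
from datetime import datetime, timezone

now = datetime.now(timezone.utc).isoformat()
assert "2020-01-01T00:00:00" < now   # expired comp is filtered out
assert "2999-01-01T00:00:00" > now   # far-future comp survives
```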
- [ ] **Step 6: Run tests**

```bash
conda run -n job-seeker pytest tests/db/test_store.py -v
```

Expected: 5 PASSED

- [ ] **Step 7: Commit**

```bash
git add app/db/ tests/db/
git commit -m "feat: add data models, migrations, and store"
```

---
## Task 3: eBay OAuth token manager

**Files:** `app/platforms/__init__.py`, `app/platforms/ebay/__init__.py`, `app/platforms/ebay/auth.py`, `tests/platforms/__init__.py`, `tests/platforms/test_ebay_auth.py`

- [ ] **Step 0: Create platform package directories**

```bash
mkdir -p /Library/Development/CircuitForge/snipe/app/platforms/ebay
touch /Library/Development/CircuitForge/snipe/app/platforms/ebay/__init__.py
mkdir -p /Library/Development/CircuitForge/snipe/tests/platforms
touch /Library/Development/CircuitForge/snipe/tests/platforms/__init__.py
```
- [ ] **Step 1: Write failing tests**

```python
# tests/platforms/test_ebay_auth.py
import time
import requests
from unittest.mock import patch, MagicMock
import pytest
from app.platforms.ebay.auth import EbayTokenManager


def test_fetches_token_on_first_call():
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    mock_resp = MagicMock()
    mock_resp.json.return_value = {"access_token": "tok123", "expires_in": 7200}
    mock_resp.raise_for_status = MagicMock()
    with patch("app.platforms.ebay.auth.requests.post", return_value=mock_resp) as mock_post:
        token = manager.get_token()
    assert token == "tok123"
    assert mock_post.called


def test_returns_cached_token_before_expiry():
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    manager._token = "cached"
    manager._expires_at = time.time() + 3600
    with patch("app.platforms.ebay.auth.requests.post") as mock_post:
        token = manager.get_token()
    assert token == "cached"
    assert not mock_post.called


def test_refreshes_token_after_expiry():
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    manager._token = "old"
    manager._expires_at = time.time() - 1  # expired
    mock_resp = MagicMock()
    mock_resp.json.return_value = {"access_token": "new_tok", "expires_in": 7200}
    mock_resp.raise_for_status = MagicMock()
    with patch("app.platforms.ebay.auth.requests.post", return_value=mock_resp):
        token = manager.get_token()
    assert token == "new_tok"


def test_token_fetch_failure_raises():
    """Spec requires: on token fetch failure, raise immediately — no silent fallback."""
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    with patch("app.platforms.ebay.auth.requests.post", side_effect=requests.RequestException("network error")):
        with pytest.raises(requests.RequestException):
            manager.get_token()
```

- [ ] **Step 2: Run to verify failure**

```bash
conda run -n job-seeker pytest tests/platforms/test_ebay_auth.py -v
```
- [ ] **Step 3: Write platform adapter base class**

```python
# app/platforms/__init__.py
"""PlatformAdapter abstract base and shared types."""
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Optional
from app.db.models import Listing, Seller


@dataclass
class SearchFilters:
    max_price: Optional[float] = None
    min_price: Optional[float] = None
    condition: list[str] = field(default_factory=list)
    location_radius_km: Optional[int] = None


class PlatformAdapter(ABC):
    @abstractmethod
    def search(self, query: str, filters: SearchFilters) -> list[Listing]: ...

    @abstractmethod
    def get_seller(self, seller_platform_id: str) -> Optional[Seller]: ...

    @abstractmethod
    def get_completed_sales(self, query: str) -> list[Listing]:
        """Fetch recently completed/sold listings for price comp data."""
        ...
```
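UI and scorer tests can run against an in-memory adapter instead of the network. A sketch of that pattern; `FakeAdapter`, `FakeListing`, and the simplified `search()` signature are illustrative stand-ins, not part of the plan's file map:

```python
# Sketch: in-memory stand-in for PlatformAdapter, for offline tests.
# The real ABC takes a SearchFilters argument; it is elided here.
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class FakeListing:
    platform: str
    platform_listing_id: str
    title: str
    price: float


class PlatformAdapter(ABC):
    @abstractmethod
    def search(self, query: str) -> list[FakeListing]: ...


class FakeAdapter(PlatformAdapter):
    def __init__(self, listings: list[FakeListing]):
        self._listings = listings

    def search(self, query: str) -> list[FakeListing]:
        # naive substring match stands in for a real API call
        return [l for l in self._listings if query.lower() in l.title.lower()]


results = FakeAdapter([
    FakeListing("ebay", "1", "RTX 4090 FE", 950.0),
    FakeListing("ebay", "2", "GTX 1080", 120.0),
]).search("4090")
```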
- [ ] **Step 4: Write auth.py**

```python
# app/platforms/ebay/auth.py
"""eBay OAuth2 client credentials token manager."""
from __future__ import annotations
import base64
import time
from typing import Optional
import requests

EBAY_OAUTH_URLS = {
    "production": "https://api.ebay.com/identity/v1/oauth2/token",
    "sandbox": "https://api.sandbox.ebay.com/identity/v1/oauth2/token",
}


class EbayTokenManager:
    """Fetches and caches eBay app-level OAuth tokens. Thread-safe for single process."""

    def __init__(self, client_id: str, client_secret: str, env: str = "production"):
        self._client_id = client_id
        self._client_secret = client_secret
        self._token_url = EBAY_OAUTH_URLS[env]
        self._token: Optional[str] = None
        self._expires_at: float = 0.0

    def get_token(self) -> str:
        """Return a valid access token, fetching or refreshing as needed."""
        if self._token and time.time() < self._expires_at - 60:
            return self._token
        self._fetch_token()
        return self._token  # type: ignore[return-value]

    def _fetch_token(self) -> None:
        credentials = base64.b64encode(
            f"{self._client_id}:{self._client_secret}".encode()
        ).decode()
        resp = requests.post(
            self._token_url,
            headers={
                "Authorization": f"Basic {credentials}",
                "Content-Type": "application/x-www-form-urlencoded",
            },
            data={"grant_type": "client_credentials", "scope": "https://api.ebay.com/oauth/api_scope"},
        )
        resp.raise_for_status()
        data = resp.json()
        self._token = data["access_token"]
        self._expires_at = time.time() + data["expires_in"]
```
- [ ] **Step 5: Run tests**

```bash
conda run -n job-seeker pytest tests/platforms/test_ebay_auth.py -v
```

Expected: 4 PASSED

- [ ] **Step 6: Commit**

```bash
git add app/platforms/ tests/platforms/test_ebay_auth.py
git commit -m "feat: add PlatformAdapter base and eBay token manager"
```

---
## Task 4: eBay adapter and normaliser

**Files:** `app/platforms/ebay/normaliser.py`, `app/platforms/ebay/adapter.py`, `tests/platforms/test_ebay_normaliser.py`
- [ ] **Step 1: Write normaliser tests**

```python
# tests/platforms/test_ebay_normaliser.py
import pytest
from app.platforms.ebay.normaliser import normalise_listing, normalise_seller


def test_normalise_listing_maps_fields():
    raw = {
        "itemId": "v1|12345|0",
        "title": "RTX 4090 GPU",
        "price": {"value": "950.00", "currency": "USD"},
        "condition": "USED",
        "seller": {"username": "techguy", "feedbackScore": 300, "feedbackPercentage": "99.1"},
        "itemWebUrl": "https://ebay.com/itm/12345",
        "image": {"imageUrl": "https://i.ebayimg.com/1.jpg"},
        "additionalImages": [{"imageUrl": "https://i.ebayimg.com/2.jpg"}],
        "itemCreationDate": "2026-03-20T00:00:00.000Z",
    }
    listing = normalise_listing(raw)
    assert listing.platform == "ebay"
    assert listing.platform_listing_id == "v1|12345|0"
    assert listing.title == "RTX 4090 GPU"
    assert listing.price == 950.0
    assert listing.condition == "used"
    assert listing.seller_platform_id == "techguy"
    assert "https://i.ebayimg.com/1.jpg" in listing.photo_urls
    assert "https://i.ebayimg.com/2.jpg" in listing.photo_urls


def test_normalise_listing_handles_missing_images():
    raw = {
        "itemId": "v1|999|0",
        "title": "GPU",
        "price": {"value": "100.00", "currency": "USD"},
        "condition": "NEW",
        "seller": {"username": "u"},
        "itemWebUrl": "https://ebay.com/itm/999",
    }
    listing = normalise_listing(raw)
    assert listing.photo_urls == []


def test_normalise_seller_maps_fields():
    raw = {
        "username": "techguy",
        "feedbackScore": 300,
        "feedbackPercentage": "99.1",
        "registrationDate": "2020-03-01T00:00:00.000Z",
        "sellerFeedbackSummary": {
            "feedbackByCategory": [
                {"transactionPercent": "95.0", "categorySite": "ELECTRONICS", "count": "50"}
            ]
        }
    }
    seller = normalise_seller(raw)
    assert seller.username == "techguy"
    assert seller.feedback_count == 300
    assert seller.feedback_ratio == pytest.approx(0.991, abs=0.001)
    assert seller.account_age_days > 0
```

- [ ] **Step 2: Run to verify failure**

```bash
conda run -n job-seeker pytest tests/platforms/test_ebay_normaliser.py -v
```
- [ ] **Step 3: Write normaliser.py**
|
||
|
||
```python
|
||
# app/platforms/ebay/normaliser.py
"""Convert raw eBay API responses into Snipe domain objects."""
from __future__ import annotations

import json
from datetime import datetime, timezone

from app.db.models import Listing, Seller


def normalise_listing(raw: dict) -> Listing:
    price_data = raw.get("price", {})
    photos = []
    if "image" in raw:
        photos.append(raw["image"].get("imageUrl", ""))
    for img in raw.get("additionalImages", []):
        url = img.get("imageUrl", "")
        if url and url not in photos:
            photos.append(url)
    photos = [p for p in photos if p]

    listing_age_days = 0
    created_raw = raw.get("itemCreationDate", "")
    if created_raw:
        try:
            created = datetime.fromisoformat(created_raw.replace("Z", "+00:00"))
            listing_age_days = (datetime.now(timezone.utc) - created).days
        except ValueError:
            pass

    seller = raw.get("seller", {})
    return Listing(
        platform="ebay",
        platform_listing_id=raw["itemId"],
        title=raw.get("title", ""),
        price=float(price_data.get("value", 0)),
        currency=price_data.get("currency", "USD"),
        condition=raw.get("condition", "").lower(),
        seller_platform_id=seller.get("username", ""),
        url=raw.get("itemWebUrl", ""),
        photo_urls=photos,
        listing_age_days=listing_age_days,
    )


def normalise_seller(raw: dict) -> Seller:
    feedback_pct = float(raw.get("feedbackPercentage", "0").strip("%")) / 100.0

    account_age_days = 0
    reg_date_raw = raw.get("registrationDate", "")
    if reg_date_raw:
        try:
            reg_date = datetime.fromisoformat(reg_date_raw.replace("Z", "+00:00"))
            account_age_days = (datetime.now(timezone.utc) - reg_date).days
        except ValueError:
            pass

    category_history = {}
    summary = raw.get("sellerFeedbackSummary", {})
    for entry in summary.get("feedbackByCategory", []):
        category_history[entry.get("categorySite", "")] = int(entry.get("count", 0))

    return Seller(
        platform="ebay",
        platform_seller_id=raw["username"],
        username=raw["username"],
        account_age_days=account_age_days,
        feedback_count=int(raw.get("feedbackScore", 0)),
        feedback_ratio=feedback_pct,
        category_history_json=json.dumps(category_history),
    )
```

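Both normalisers lean on the same timestamp trick: eBay returns ISO-8601 with a trailing `Z`, which `datetime.fromisoformat` only accepts from Python 3.11 onward, so the `Z` is swapped for `+00:00` first. A self-contained sketch of the age computation:

```python
from datetime import datetime, timezone

def age_days(iso_ts: str) -> int:
    """Days since an eBay-style UTC timestamp such as '2020-06-01T12:00:00.000Z'."""
    parsed = datetime.fromisoformat(iso_ts.replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - parsed).days

assert age_days("2020-06-01T12:00:00.000Z") > 1000  # listed years ago
```

Swallowing `ValueError` in the normalisers (rather than letting it propagate) keeps one malformed timestamp from discarding an otherwise usable listing.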
- [ ] **Step 4: Write adapter.py**

```python
# app/platforms/ebay/adapter.py
"""eBay Browse API adapter: search, seller lookup, and market comps."""
from __future__ import annotations

import hashlib
from datetime import datetime, timedelta, timezone
from typing import Optional

import requests

from app.db.models import Listing, Seller, MarketComp
from app.db.store import Store
from app.platforms import PlatformAdapter, SearchFilters
from app.platforms.ebay.auth import EbayTokenManager
from app.platforms.ebay.normaliser import normalise_listing, normalise_seller

BROWSE_BASE = {
    "production": "https://api.ebay.com/buy/browse/v1",
    "sandbox": "https://api.sandbox.ebay.com/buy/browse/v1",
}
# Note: seller lookup uses the Browse API with a seller filter, not a separate Seller API.
# The Commerce Identity /user endpoint returns the calling app's own identity (requires
# user OAuth, not app credentials). Seller metadata is extracted from Browse API inline
# seller fields. registrationDate is available in item detail responses via this path.

REQUEST_TIMEOUT = 10  # seconds; never let a hung eBay call block the UI thread


class EbayAdapter(PlatformAdapter):
    def __init__(self, token_manager: EbayTokenManager, store: Store, env: str = "production"):
        self._tokens = token_manager
        self._store = store
        self._browse_base = BROWSE_BASE[env]

    def _headers(self) -> dict:
        return {"Authorization": f"Bearer {self._tokens.get_token()}"}

    def search(self, query: str, filters: SearchFilters) -> list[Listing]:
        params: dict = {"q": query, "limit": 50}
        filter_parts = []
        if filters.max_price:
            filter_parts.append(f"price:[..{filters.max_price}],priceCurrency:USD")
        if filters.condition:
            cond_map = {"new": "NEW", "used": "USED", "open box": "OPEN_BOX", "for parts": "FOR_PARTS_NOT_WORKING"}
            ebay_conds = [cond_map[c] for c in filters.condition if c in cond_map]
            if ebay_conds:
                filter_parts.append(f"conditions:{{{','.join(ebay_conds)}}}")
        if filter_parts:
            params["filter"] = ",".join(filter_parts)

        resp = requests.get(f"{self._browse_base}/item_summary/search",
                            headers=self._headers(), params=params, timeout=REQUEST_TIMEOUT)
        resp.raise_for_status()
        items = resp.json().get("itemSummaries", [])
        return [normalise_listing(item) for item in items]

    def get_seller(self, seller_platform_id: str) -> Optional[Seller]:
        cached = self._store.get_seller("ebay", seller_platform_id)
        if cached:
            return cached
        try:
            # Fetch seller data via Browse API: search for one item by this seller.
            # The Browse API inline seller field includes username, feedbackScore,
            # feedbackPercentage, and (in item detail responses) registrationDate.
            # This works with app-level client credentials — no user OAuth required.
            resp = requests.get(
                f"{self._browse_base}/item_summary/search",
                headers={**self._headers(), "X-EBAY-C-MARKETPLACE-ID": "EBAY_US"},
                params={"seller": seller_platform_id, "limit": 1},
                timeout=REQUEST_TIMEOUT,
            )
            resp.raise_for_status()
            items = resp.json().get("itemSummaries", [])
            if not items:
                return None
            seller = normalise_seller(items[0].get("seller", {}))
            self._store.save_seller(seller)
            return seller
        except Exception:
            return None  # Caller handles None gracefully (partial score)

    def get_completed_sales(self, query: str) -> list[Listing]:
        query_hash = hashlib.md5(query.encode()).hexdigest()
        cached = self._store.get_market_comp("ebay", query_hash)
        if cached:
            return []  # Comp data is used directly; return empty to signal cache hit

        params = {"q": query, "limit": 20, "filter": "buyingOptions:{FIXED_PRICE}"}
        try:
            resp = requests.get(f"{self._browse_base}/item_summary/search",
                                headers=self._headers(), params=params, timeout=REQUEST_TIMEOUT)
            resp.raise_for_status()
            items = resp.json().get("itemSummaries", [])
            listings = [normalise_listing(item) for item in items]
            if listings:
                prices = sorted(l.price for l in listings)
                median = prices[len(prices) // 2]
                comp = MarketComp(
                    platform="ebay",
                    query_hash=query_hash,
                    median_price=median,
                    sample_count=len(prices),
                    expires_at=(datetime.now(timezone.utc) + timedelta(hours=6)).isoformat(),
                )
                self._store.save_market_comp(comp)
            return listings
        except Exception:
            return []
```

- [ ] **Step 5: Run tests**

```bash
conda run -n job-seeker pytest tests/platforms/ -v
```

Expected: All PASSED

- [ ] **Step 6: Commit**

```bash
git add app/platforms/ tests/platforms/
git commit -m "feat: add eBay adapter with Browse API search, seller lookup, and market comps"
```

---

## Task 5: Metadata trust scorer

**Files:** `app/trust/__init__.py`, `app/trust/metadata.py`, `app/trust/photo.py`, `app/trust/aggregator.py`, `tests/trust/__init__.py`, `tests/trust/test_metadata.py`, `tests/trust/test_photo.py`, `tests/trust/test_aggregator.py`

- [ ] **Step 0: Create trust package directories**

```bash
mkdir -p /Library/Development/CircuitForge/snipe/app/trust
touch /Library/Development/CircuitForge/snipe/app/trust/__init__.py
mkdir -p /Library/Development/CircuitForge/snipe/tests/trust
touch /Library/Development/CircuitForge/snipe/tests/trust/__init__.py
```

- [ ] **Step 1: Write failing tests**

```python
# tests/trust/test_metadata.py
from app.db.models import Seller
from app.trust.metadata import MetadataScorer


def _seller(**kwargs) -> Seller:
    defaults = dict(
        platform="ebay", platform_seller_id="u", username="u",
        account_age_days=730, feedback_count=450,
        feedback_ratio=0.991, category_history_json='{"ELECTRONICS": 30}',
    )
    defaults.update(kwargs)
    return Seller(**defaults)


def test_established_seller_scores_high():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(), market_median=1000.0, listing_price=950.0)
    total = sum(scores.values())
    assert total >= 80


def test_new_account_scores_zero_on_age():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(account_age_days=3), market_median=1000.0, listing_price=950.0)
    assert scores["account_age"] == 0


def test_low_feedback_count_scores_low():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(feedback_count=2), market_median=1000.0, listing_price=950.0)
    assert scores["feedback_count"] < 10


def test_suspicious_price_scores_zero():
    scorer = MetadataScorer()
    # 60% below market → zero
    scores = scorer.score(_seller(), market_median=1000.0, listing_price=400.0)
    assert scores["price_vs_market"] == 0


def test_no_market_data_returns_none():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(), market_median=None, listing_price=950.0)
    # None signals "data unavailable" — aggregator will set score_is_partial=True
    assert scores["price_vs_market"] is None
```

```python
# tests/trust/test_photo.py
from app.trust.photo import PhotoScorer


def test_no_duplicates_in_single_listing_result():
    scorer = PhotoScorer()
    photo_urls_per_listing = [
        ["https://img.com/a.jpg", "https://img.com/b.jpg"],
        ["https://img.com/c.jpg"],
    ]
    # All unique images — no duplicates
    results = scorer.check_duplicates(photo_urls_per_listing)
    assert all(not r for r in results)


def test_duplicate_photo_flagged():
    scorer = PhotoScorer()
    # Same URL in two listings = trivially duplicate (hash will match)
    photo_urls_per_listing = [
        ["https://img.com/same.jpg"],
        ["https://img.com/same.jpg"],
    ]
    results = scorer.check_duplicates(photo_urls_per_listing)
    # Both listings should be flagged
    assert results[0] is True or results[1] is True
```

```python
# tests/trust/test_aggregator.py
from app.db.models import Seller
from app.trust.aggregator import Aggregator


def test_composite_sum_of_five_signals():
    agg = Aggregator()
    scores = {
        "account_age": 18, "feedback_count": 16,
        "feedback_ratio": 20, "price_vs_market": 15,
        "category_history": 14,
    }
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=None)
    assert result.composite_score == 83


def test_hard_filter_new_account():
    agg = Aggregator()
    scores = {k: 20 for k in ["account_age", "feedback_count",
                              "feedback_ratio", "price_vs_market", "category_history"]}
    young_seller = Seller(
        platform="ebay", platform_seller_id="u", username="u",
        account_age_days=3, feedback_count=0,
        feedback_ratio=1.0, category_history_json="{}",
    )
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=young_seller)
    assert "new_account" in result.red_flags_json


def test_hard_filter_bad_actor_established_account():
    """Established account (count > 20) with very bad ratio → hard filter."""
    agg = Aggregator()
    scores = {k: 10 for k in ["account_age", "feedback_count",
                              "feedback_ratio", "price_vs_market", "category_history"]}
    bad_seller = Seller(
        platform="ebay", platform_seller_id="u", username="u",
        account_age_days=730, feedback_count=25,  # count > 20
        feedback_ratio=0.70,  # ratio < 80% → hard filter
        category_history_json="{}",
    )
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=bad_seller)
    assert "established_bad_actor" in result.red_flags_json


def test_partial_score_flagged_when_signals_missing():
    agg = Aggregator()
    scores = {
        "account_age": 18, "feedback_count": None,  # None = unavailable
        "feedback_ratio": 20, "price_vs_market": 15,
        "category_history": 14,
    }
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=None)
    assert result.score_is_partial is True
```

- [ ] **Step 2: Run to verify failure**

```bash
conda run -n job-seeker pytest tests/trust/ -v
```

- [ ] **Step 3: Write metadata.py**

```python
# app/trust/metadata.py
"""Five metadata trust signals, each scored 0–20."""
from __future__ import annotations
import json
from typing import Optional
from app.db.models import Seller

ELECTRONICS_CATEGORIES = {"ELECTRONICS", "COMPUTERS_TABLETS", "VIDEO_GAMES", "CELL_PHONES"}


class MetadataScorer:
    def score(
        self,
        seller: Seller,
        market_median: Optional[float],
        listing_price: float,
    ) -> dict[str, Optional[int]]:
        return {
            "account_age": self._account_age(seller.account_age_days),
            "feedback_count": self._feedback_count(seller.feedback_count),
            "feedback_ratio": self._feedback_ratio(seller.feedback_ratio, seller.feedback_count),
            "price_vs_market": self._price_vs_market(listing_price, market_median),
            "category_history": self._category_history(seller.category_history_json),
        }

    def _account_age(self, days: int) -> int:
        if days < 7: return 0
        if days < 30: return 5
        if days < 90: return 10
        if days < 365: return 15
        return 20

    def _feedback_count(self, count: int) -> int:
        if count < 3: return 0
        if count < 10: return 5
        if count < 50: return 10
        if count < 200: return 15
        return 20

    def _feedback_ratio(self, ratio: float, count: int) -> int:
        if ratio < 0.80 and count > 20: return 0
        if ratio < 0.90: return 5
        if ratio < 0.95: return 10
        if ratio < 0.98: return 15
        return 20

    def _price_vs_market(self, price: float, median: Optional[float]) -> Optional[int]:
        if median is None: return None  # data unavailable → aggregator sets score_is_partial
        if median <= 0: return None
        ratio = price / median
        if ratio < 0.50: return 0   # >50% below = scam
        if ratio < 0.70: return 5   # >30% below = suspicious
        if ratio < 0.85: return 10
        if ratio <= 1.20: return 20
        return 15  # above market = still ok, just expensive

    def _category_history(self, category_history_json: str) -> int:
        try:
            history = json.loads(category_history_json)
        except (ValueError, TypeError):
            return 0
        electronics_sales = sum(
            v for k, v in history.items() if k in ELECTRONICS_CATEGORIES
        )
        if electronics_sales == 0: return 0
        if electronics_sales < 5: return 8
        if electronics_sales < 20: return 14
        return 20
```

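The price signal's banding is worth sanity-checking by hand, since it is deliberately asymmetric: deep discounts score far worse than overpricing. A standalone transcription of `_price_vs_market` for a known median:

```python
def price_band(price: float, median: float) -> int:
    """Transcription of MetadataScorer._price_vs_market, median known and positive."""
    ratio = price / median
    if ratio < 0.50: return 0    # more than 50% below market: scam territory
    if ratio < 0.70: return 5
    if ratio < 0.85: return 10
    if ratio <= 1.20: return 20  # near market price: best score
    return 15                    # overpriced but not suspicious

assert price_band(400, 1000) == 0    # 60% below market
assert price_band(950, 1000) == 20   # fair price
assert price_band(1500, 1000) == 15  # expensive, still plausible
```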
- [ ] **Step 4: Write photo.py**

```python
# app/trust/photo.py
"""Perceptual hash deduplication within a result set (free tier, v0.1)."""
from __future__ import annotations
import io
from typing import Optional

import requests

try:
    import imagehash
    from PIL import Image
    _IMAGEHASH_AVAILABLE = True
except ImportError:
    _IMAGEHASH_AVAILABLE = False


class PhotoScorer:
    """
    check_duplicates: compare images within a single result set.
    Cross-session dedup (PhotoHash table) is v0.2.
    Vision analysis (real/marketing/EM bag) is v0.2 paid tier.
    """

    def check_duplicates(self, photo_urls_per_listing: list[list[str]]) -> list[bool]:
        """
        Returns a list of booleans parallel to photo_urls_per_listing.
        True = this listing's primary photo is a duplicate of another listing in the set.
        Falls back to URL-equality check if imagehash is unavailable or fetch fails.
        """
        if not _IMAGEHASH_AVAILABLE:
            return self._url_dedup(photo_urls_per_listing)

        primary_urls = [urls[0] if urls else "" for urls in photo_urls_per_listing]
        keys: list[Optional[str]] = []
        for url in primary_urls:
            # If the image can't be fetched or hashed, fall back to the URL itself
            # as the dedup key so identical URLs are still caught.
            keys.append(self._fetch_hash(url) or url or None)

        results = [False] * len(photo_urls_per_listing)
        seen: dict[str, int] = {}
        for i, key in enumerate(keys):
            if key is None:
                continue
            if key in seen:
                results[i] = True
                results[seen[key]] = True
            else:
                seen[key] = i
        return results

    def _fetch_hash(self, url: str) -> Optional[str]:
        if not url:
            return None
        try:
            resp = requests.get(url, timeout=5)
            resp.raise_for_status()
            img = Image.open(io.BytesIO(resp.content))
            return str(imagehash.phash(img))
        except Exception:
            return None

    def _url_dedup(self, photo_urls_per_listing: list[list[str]]) -> list[bool]:
        seen: set[str] = set()
        results = []
        for urls in photo_urls_per_listing:
            primary = urls[0] if urls else ""
            if primary and primary in seen:
                results.append(True)
            else:
                if primary:
                    seen.add(primary)
                results.append(False)
        return results
```

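Why hash instead of comparing bytes? A perceptual hash maps visually similar images to nearby bit strings, so a re-encoded or slightly brightened copy of a stolen photo still collides. A toy stand-in for `imagehash.phash` (an average-hash over a 2×2 "image", far smaller than the real 8×8 DCT-based hash) shows the idea:

```python
def average_hash(pixels: list[list[int]]) -> int:
    """Toy average hash: 1 bit per pixel, set when the pixel beats the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

img = [[10, 200], [220, 30]]
slightly_brighter = [[12, 205], [224, 33]]  # same structure, small pixel shift
different = [[200, 10], [30, 220]]          # inverted layout

assert hamming(average_hash(img), average_hash(slightly_brighter)) == 0
assert hamming(average_hash(img), average_hash(different)) == 4
```

The plan's `check_duplicates` uses exact hash equality; real dedup at scale would compare Hamming distance against a small threshold instead.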
- [ ] **Step 5: Write aggregator.py**

```python
# app/trust/aggregator.py
"""Composite score and red flag extraction."""
from __future__ import annotations
import json
from typing import Optional
from app.db.models import Seller, TrustScore

HARD_FILTER_AGE_DAYS = 7
HARD_FILTER_BAD_RATIO_MIN_COUNT = 20
HARD_FILTER_BAD_RATIO_THRESHOLD = 0.80


class Aggregator:
    def aggregate(
        self,
        signal_scores: dict[str, Optional[int]],
        photo_hash_duplicate: bool,
        seller: Optional[Seller],
        listing_id: int = 0,
    ) -> TrustScore:
        is_partial = any(v is None for v in signal_scores.values())
        clean = {k: (v if v is not None else 0) for k, v in signal_scores.items()}
        composite = sum(clean.values())

        red_flags: list[str] = []

        # Hard filters
        if seller and seller.account_age_days < HARD_FILTER_AGE_DAYS:
            red_flags.append("new_account")
        if seller and (
            seller.feedback_ratio < HARD_FILTER_BAD_RATIO_THRESHOLD
            and seller.feedback_count > HARD_FILTER_BAD_RATIO_MIN_COUNT
        ):
            red_flags.append("established_bad_actor")

        # Soft flags
        if seller and seller.account_age_days < 30:
            red_flags.append("account_under_30_days")
        if seller and seller.feedback_count < 10:
            red_flags.append("low_feedback_count")
        # Check the raw signal, not `clean`: a missing (None) price signal means
        # "data unavailable", not "suspicious".
        if signal_scores.get("price_vs_market") == 0:
            red_flags.append("suspicious_price")
        if photo_hash_duplicate:
            red_flags.append("duplicate_photo")

        return TrustScore(
            listing_id=listing_id,
            composite_score=composite,
            account_age_score=clean["account_age"],
            feedback_count_score=clean["feedback_count"],
            feedback_ratio_score=clean["feedback_ratio"],
            price_vs_market_score=clean["price_vs_market"],
            category_history_score=clean["category_history"],
            photo_hash_duplicate=photo_hash_duplicate,
            red_flags_json=json.dumps(red_flags),
            score_is_partial=is_partial,
        )
```

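The None-versus-zero distinction is the load-bearing part of the aggregator: a missing signal must lower the composite without being mistaken for a genuinely bad signal. The pattern in isolation (names hypothetical, not the plan's classes):

```python
from typing import Optional

def aggregate(signals: dict[str, Optional[int]]) -> tuple[int, bool]:
    """Sum signal scores, treating None as 0 but remembering data was missing."""
    partial = any(v is None for v in signals.values())
    total = sum(v or 0 for v in signals.values())
    return total, partial

assert aggregate({"age": 18, "feedback": 16}) == (34, False)
assert aggregate({"age": 18, "feedback": None}) == (18, True)
```

The UI then renders the `partial` flag as "⚠ Partial score" instead of silently showing a deflated number.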
- [ ] **Step 6: Write trust/__init__.py**

```python
# app/trust/__init__.py
import hashlib

from app.db.models import Seller, Listing, TrustScore
from app.db.store import Store

from .metadata import MetadataScorer
from .photo import PhotoScorer
from .aggregator import Aggregator


class TrustScorer:
    """Orchestrates metadata + photo scoring for a batch of listings."""

    def __init__(self, store: Store):
        self._store = store
        self._meta = MetadataScorer()
        self._photo = PhotoScorer()
        self._agg = Aggregator()

    def score_batch(
        self,
        listings: list[Listing],
        query: str,
    ) -> list[TrustScore]:
        query_hash = hashlib.md5(query.encode()).hexdigest()
        comp = self._store.get_market_comp("ebay", query_hash)
        market_median = comp.median_price if comp else None

        photo_url_sets = [l.photo_urls for l in listings]
        duplicates = self._photo.check_duplicates(photo_url_sets)

        scores = []
        for listing, is_dup in zip(listings, duplicates):
            seller = self._store.get_seller("ebay", listing.seller_platform_id)
            if seller:
                signal_scores = self._meta.score(seller, market_median, listing.price)
            else:
                signal_scores = {k: None for k in
                                 ["account_age", "feedback_count", "feedback_ratio",
                                  "price_vs_market", "category_history"]}
            trust = self._agg.aggregate(signal_scores, is_dup, seller, listing.id or 0)
            scores.append(trust)
        return scores
```

- [ ] **Step 7: Run all trust tests**

```bash
conda run -n job-seeker pytest tests/trust/ -v
```

Expected: All PASSED

- [ ] **Step 8: Commit**

```bash
git add app/trust/ tests/trust/
git commit -m "feat: add metadata scorer, photo hash dedup, and trust aggregator"
```

---

## Task 6: Tier gating

**Files:** `app/tiers.py`, `tests/test_tiers.py`

- [ ] **Step 1: Write failing tests**

```python
# tests/test_tiers.py
from app.tiers import can_use


def test_metadata_scoring_is_free():
    assert can_use("metadata_trust_scoring", tier="free") is True


def test_photo_analysis_is_paid():
    assert can_use("photo_analysis", tier="free") is False
    assert can_use("photo_analysis", tier="paid") is True


def test_local_vision_unlocks_photo_analysis():
    assert can_use("photo_analysis", tier="free", has_local_vision=True) is True


def test_byok_does_not_unlock_photo_analysis():
    assert can_use("photo_analysis", tier="free", has_byok=True) is False


def test_saved_searches_require_paid():
    assert can_use("saved_searches", tier="free") is False
    assert can_use("saved_searches", tier="paid") is True
```

- [ ] **Step 2: Run to verify failure**

```bash
conda run -n job-seeker pytest tests/test_tiers.py -v
```

- [ ] **Step 3: Write app/tiers.py**

```python
# app/tiers.py
"""Snipe feature gates. Delegates to circuitforge_core.tiers."""
from __future__ import annotations
from circuitforge_core.tiers import can_use as _core_can_use, TIERS  # noqa: F401

# Feature key → minimum tier required.
FEATURES: dict[str, str] = {
    # Free tier
    "metadata_trust_scoring": "free",
    "hash_dedup": "free",
    # Paid tier
    "photo_analysis": "paid",
    "serial_number_check": "paid",
    "ai_image_detection": "paid",
    "reverse_image_search": "paid",
    "saved_searches": "paid",
    "background_monitoring": "paid",
}

# Photo analysis features unlock if user has local vision model (moondream2).
LOCAL_VISION_UNLOCKABLE: frozenset[str] = frozenset({
    "photo_analysis",
    "serial_number_check",
})


def can_use(
    feature: str,
    tier: str = "free",
    has_byok: bool = False,
    has_local_vision: bool = False,
) -> bool:
    if has_local_vision and feature in LOCAL_VISION_UNLOCKABLE:
        return True
    return _core_can_use(feature, tier, has_byok=has_byok, _features=FEATURES)
```

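Stripped of the `circuitforge_core` delegation, the gate semantics reduce to a few lines. A standalone sketch, under the assumption that the core check simply grants `free` features to everyone and `paid` features only to paid users (the BYOK path is omitted here):

```python
FEATURES = {"metadata_trust_scoring": "free", "photo_analysis": "paid"}
LOCAL_VISION_UNLOCKABLE = frozenset({"photo_analysis"})

def can_use(feature: str, tier: str = "free", has_local_vision: bool = False) -> bool:
    # Local vision hardware unlocks a specific subset regardless of tier.
    if has_local_vision and feature in LOCAL_VISION_UNLOCKABLE:
        return True
    required = FEATURES.get(feature)
    if required is None:
        return False  # unknown feature: fail closed
    return required == "free" or tier == "paid"

assert can_use("metadata_trust_scoring")
assert not can_use("photo_analysis")
assert can_use("photo_analysis", tier="paid")
assert can_use("photo_analysis", has_local_vision=True)
```

The local-vision check deliberately runs before the tier check, so a free user with moondream2 installed is never billed for analysis their own hardware performs.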
- [ ] **Step 4: Run tests**

```bash
conda run -n job-seeker pytest tests/test_tiers.py -v
```

Expected: 5 PASSED

- [ ] **Step 5: Commit**

```bash
git add app/tiers.py tests/test_tiers.py
git commit -m "feat: add snipe tier gates with LOCAL_VISION_UNLOCKABLE"
```

---

## Task 7: Results UI

**Files:** `app/ui/__init__.py`, `app/ui/components/__init__.py`, `app/ui/components/filters.py`, `app/ui/components/listing_row.py`, `app/ui/Search.py`, `app/app.py`, `tests/ui/__init__.py`, `tests/ui/test_filters.py`

- [ ] **Step 0: Create UI package directories**

```bash
mkdir -p /Library/Development/CircuitForge/snipe/app/ui/components
touch /Library/Development/CircuitForge/snipe/app/ui/__init__.py
touch /Library/Development/CircuitForge/snipe/app/ui/components/__init__.py
mkdir -p /Library/Development/CircuitForge/snipe/tests/ui
touch /Library/Development/CircuitForge/snipe/tests/ui/__init__.py
```

- [ ] **Step 1: Write failing filter tests**

```python
# tests/ui/test_filters.py
from app.db.models import Listing, TrustScore
from app.ui.components.filters import build_filter_options


def _listing(price, condition, score):
    return (
        Listing("ebay", "1", "GPU", price, "USD", condition, "u", "https://ebay.com", [], 1),
        TrustScore(0, score, 10, 10, 10, 10, 10),
    )


def test_price_range_from_results():
    pairs = [_listing(500, "used", 80), _listing(1200, "new", 60)]
    opts = build_filter_options(pairs)
    assert opts.price_min == 500
    assert opts.price_max == 1200


def test_conditions_from_results():
    pairs = [_listing(500, "used", 80), _listing(1200, "new", 60), _listing(800, "used", 70)]
    opts = build_filter_options(pairs)
    assert "used" in opts.conditions
    assert opts.conditions["used"] == 2
    assert opts.conditions["new"] == 1


def test_missing_condition_not_included():
    pairs = [_listing(500, "used", 80)]
    opts = build_filter_options(pairs)
    assert "new" not in opts.conditions


def test_trust_score_bands():
    pairs = [_listing(500, "used", 85), _listing(700, "new", 60), _listing(400, "used", 20)]
    opts = build_filter_options(pairs)
    assert opts.score_bands["safe"] == 1    # 80+
    assert opts.score_bands["review"] == 1  # 50–79
    assert opts.score_bands["skip"] == 1    # <50
```

- [ ] **Step 2: Run to verify failure**

```bash
conda run -n job-seeker pytest tests/ui/ -v
```

- [ ] **Step 3: Write filters.py**

```python
# app/ui/components/filters.py
"""Build dynamic filter options from a result set and render the Streamlit sidebar."""
from __future__ import annotations
import json
from dataclasses import dataclass, field
from typing import Optional

import streamlit as st

from app.db.models import Listing, TrustScore


@dataclass
class FilterOptions:
    price_min: float
    price_max: float
    conditions: dict[str, int]   # condition → count
    score_bands: dict[str, int]  # safe/review/skip → count
    has_real_photo: int = 0
    has_em_bag: int = 0
    duplicate_count: int = 0
    new_account_count: int = 0
    free_shipping_count: int = 0


@dataclass
class FilterState:
    min_trust_score: int = 0
    min_price: Optional[float] = None
    max_price: Optional[float] = None
    min_account_age_days: int = 0
    min_feedback_count: int = 0
    min_feedback_ratio: float = 0.0
    conditions: list[str] = field(default_factory=list)
    hide_new_accounts: bool = False
    hide_marketing_photos: bool = False
    hide_suspicious_price: bool = False
    hide_duplicate_photos: bool = False


def build_filter_options(
    pairs: list[tuple[Listing, TrustScore]],
) -> FilterOptions:
    prices = [l.price for l, _ in pairs if l.price > 0]
    conditions: dict[str, int] = {}
    safe = review = skip = 0
    dup_count = new_acct = 0

    for listing, ts in pairs:
        cond = listing.condition or "unknown"
        conditions[cond] = conditions.get(cond, 0) + 1
        if ts.composite_score >= 80:
            safe += 1
        elif ts.composite_score >= 50:
            review += 1
        else:
            skip += 1
        if ts.photo_hash_duplicate:
            dup_count += 1
        flags = json.loads(ts.red_flags_json or "[]")
        if "new_account" in flags or "account_under_30_days" in flags:
            new_acct += 1

    return FilterOptions(
        price_min=min(prices) if prices else 0,
        price_max=max(prices) if prices else 0,
        conditions=conditions,
        score_bands={"safe": safe, "review": review, "skip": skip},
        duplicate_count=dup_count,
        new_account_count=new_acct,
    )


def render_filter_sidebar(
    pairs: list[tuple[Listing, TrustScore]],
    opts: FilterOptions,
) -> FilterState:
    """Render filter sidebar and return current FilterState."""
    state = FilterState()

    st.sidebar.markdown("### Filters")
    st.sidebar.caption(f"{len(pairs)} results")

    state.min_trust_score = st.sidebar.slider("Min trust score", 0, 100, 0, key="min_trust")
    st.sidebar.caption(
        f"🟢 Safe (80+): {opts.score_bands['safe']} "
        f"🟡 Review (50–79): {opts.score_bands['review']} "
        f"🔴 Skip (<50): {opts.score_bands['skip']}"
    )

    st.sidebar.markdown("**Price**")
    col1, col2 = st.sidebar.columns(2)
    state.min_price = col1.number_input("Min $", value=opts.price_min, step=50.0, key="min_p")
    state.max_price = col2.number_input("Max $", value=opts.price_max, step=50.0, key="max_p")

    state.min_account_age_days = st.sidebar.slider(
        "Account age (min days)", 0, 365, 0, key="age")
    state.min_feedback_count = st.sidebar.slider(
        "Feedback count (min)", 0, 500, 0, key="fb_count")
    state.min_feedback_ratio = st.sidebar.slider(
        "Positive feedback % (min)", 0, 100, 0, key="fb_ratio") / 100.0

    if opts.conditions:
        st.sidebar.markdown("**Condition**")
        selected = []
        for cond, count in sorted(opts.conditions.items()):
            if st.sidebar.checkbox(f"{cond} ({count})", value=True, key=f"cond_{cond}"):
                selected.append(cond)
        state.conditions = selected

    st.sidebar.markdown("**Hide if flagged**")
    state.hide_new_accounts = st.sidebar.checkbox(
        f"New account (<30d) ({opts.new_account_count})", key="hide_new")
    state.hide_suspicious_price = st.sidebar.checkbox("Suspicious price", key="hide_price")
    state.hide_duplicate_photos = st.sidebar.checkbox(
        f"Duplicate photo ({opts.duplicate_count})", key="hide_dup")

    if st.sidebar.button("Reset filters", key="reset"):
        # st.rerun() alone would redraw with the same widget state; drop the
        # widget keys first so every control falls back to its default.
        widget_keys = ["min_trust", "min_p", "max_p", "age", "fb_count", "fb_ratio",
                       "hide_new", "hide_price", "hide_dup"]
        widget_keys += [f"cond_{c}" for c in opts.conditions]
        for k in widget_keys:
            st.session_state.pop(k, None)
        st.rerun()

    return state
```

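At its core `build_filter_options` is a couple of counting passes over the result set; `collections.Counter` expresses the same facet-building compactly (an equivalent sketch over bare tuples, not the plan's dataclasses):

```python
from collections import Counter

# (condition, composite trust score) per listing
listings = [("used", 80), ("new", 60), ("used", 70)]

# Facet 1: condition → count, drives the sidebar checkboxes
conditions = Counter(cond for cond, _ in listings)

# Facet 2: score band → count, drives the safe/review/skip caption
bands = Counter(
    "safe" if s >= 80 else "review" if s >= 50 else "skip"
    for _, s in listings
)

assert conditions == {"used": 2, "new": 1}
assert bands == {"safe": 1, "review": 2}
```

Because the facets are derived from live results rather than a fixed schema, a condition that never appears in the result set simply never gets a checkbox, which is exactly what `test_missing_condition_not_included` pins down.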
- [ ] **Step 4: Run filter tests**

```bash
conda run -n job-seeker pytest tests/ui/test_filters.py -v
```

Expected: 4 PASSED

- [ ] **Step 5: Write listing_row.py**

```python
# app/ui/components/listing_row.py
"""Render a single listing row with trust score, badges, and error states."""
from __future__ import annotations
import json
from typing import Optional

import requests
import streamlit as st

from app.db.models import Listing, TrustScore, Seller


def _score_colour(score: int) -> str:
    if score >= 80: return "🟢"
    if score >= 50: return "🟡"
    return "🔴"


def _flag_label(flag: str) -> str:
    labels = {
        "new_account": "✗ New account",
        "account_under_30_days": "⚠ Account <30d",
        "low_feedback_count": "⚠ Low feedback",
        "suspicious_price": "✗ Suspicious price",
        "duplicate_photo": "✗ Duplicate photo",
        "established_bad_actor": "✗ Bad actor",
        "marketing_photo": "✗ Marketing photo",
    }
    return labels.get(flag, f"⚠ {flag}")


def render_listing_row(
    listing: Listing,
    trust: Optional[TrustScore],
    seller: Optional[Seller] = None,
) -> None:
    col_img, col_info, col_score = st.columns([1, 5, 2])

    with col_img:
        if listing.photo_urls:
            # Spec requires graceful 404 handling: show placeholder on failure
            try:
                r = requests.head(listing.photo_urls[0], timeout=3, allow_redirects=True)
                if r.status_code == 200:
                    st.image(listing.photo_urls[0], width=80)
                else:
                    st.markdown("📷 *Photo unavailable*")
            except Exception:
                st.markdown("📷 *Photo unavailable*")
        else:
            st.markdown("📷 *No photo*")

    with col_info:
        st.markdown(f"**{listing.title}**")
        if seller:
            age_str = f"{seller.account_age_days // 365}yr" if seller.account_age_days >= 365 \
                else f"{seller.account_age_days}d"
            st.caption(
                f"{seller.username} · {seller.feedback_count} fb · "
                f"{seller.feedback_ratio*100:.1f}% · member {age_str}"
            )
        else:
            st.caption(f"{listing.seller_platform_id} · *Seller data unavailable*")

        if trust:
            flags = json.loads(trust.red_flags_json or "[]")
            if flags:
                badge_html = " ".join(
                    f'<span style="background:#c33;color:#fff;padding:1px 5px;'
                    f'border-radius:3px;font-size:11px">{_flag_label(f)}</span>'
                    for f in flags
                )
                st.markdown(badge_html, unsafe_allow_html=True)
            if trust.score_is_partial:
                st.caption("⚠ Partial score — some data unavailable")
        else:
            st.caption("⚠ Could not score this listing")

    with col_score:
        if trust:
            icon = _score_colour(trust.composite_score)
            st.metric(label="Trust", value=f"{icon} {trust.composite_score}")
        else:
            st.metric(label="Trust", value="?")
        st.markdown(f"**${listing.price:,.0f}**")
        st.markdown(f"[Open eBay ↗]({listing.url})")

    st.divider()
```

- [ ] **Step 6: Write Search.py**

```python
# app/ui/Search.py
"""Main search + results page."""
from __future__ import annotations
import json
import os
from pathlib import Path

import streamlit as st

from circuitforge_core.config import load_env
from app.db.store import Store
from app.platforms import SearchFilters
from app.platforms.ebay.auth import EbayTokenManager
from app.platforms.ebay.adapter import EbayAdapter
from app.trust import TrustScorer
from app.ui.components.filters import build_filter_options, render_filter_sidebar, FilterState
from app.ui.components.listing_row import render_listing_row

load_env(Path(".env"))
_DB_PATH = Path(os.environ.get("SNIPE_DB", "data/snipe.db"))
_DB_PATH.parent.mkdir(exist_ok=True)


def _get_adapter() -> EbayAdapter:
    store = Store(_DB_PATH)
    tokens = EbayTokenManager(
        client_id=os.environ.get("EBAY_CLIENT_ID", ""),
        client_secret=os.environ.get("EBAY_CLIENT_SECRET", ""),
        env=os.environ.get("EBAY_ENV", "production"),
    )
    return EbayAdapter(tokens, store, env=os.environ.get("EBAY_ENV", "production"))


def _passes_filter(listing, trust, seller, state: FilterState) -> bool:
    if trust and trust.composite_score < state.min_trust_score:
        return False
    if state.min_price and listing.price < state.min_price:
        return False
    if state.max_price and listing.price > state.max_price:
        return False
    if state.conditions and listing.condition not in state.conditions:
        return False
    if seller:
        if seller.account_age_days < state.min_account_age_days:
            return False
        if seller.feedback_count < state.min_feedback_count:
            return False
        if seller.feedback_ratio < state.min_feedback_ratio:
            return False
    if trust:
        flags = json.loads(trust.red_flags_json or "[]")
        if state.hide_new_accounts and "account_under_30_days" in flags:
            return False
        if state.hide_suspicious_price and "suspicious_price" in flags:
            return False
        if state.hide_duplicate_photos and "duplicate_photo" in flags:
            return False
    return True


def render() -> None:
    st.title("🔍 Snipe — eBay Listing Search")
|
||
|
||
col_q, col_price, col_btn = st.columns([4, 2, 1])
|
||
query = col_q.text_input("Search", placeholder="RTX 4090 GPU", label_visibility="collapsed")
|
||
max_price = col_price.number_input("Max price $", min_value=0.0, value=0.0,
|
||
step=50.0, label_visibility="collapsed")
|
||
search_clicked = col_btn.button("Search", use_container_width=True)
|
||
|
||
if not search_clicked or not query:
|
||
st.info("Enter a search term and click Search.")
|
||
return
|
||
|
||
with st.spinner("Fetching listings..."):
|
||
try:
|
||
adapter = _get_adapter()
|
||
filters = SearchFilters(max_price=max_price if max_price > 0 else None)
|
||
listings = adapter.search(query, filters)
|
||
adapter.get_completed_sales(query) # warm the comps cache
|
||
except Exception as e:
|
||
st.error(f"eBay search failed: {e}")
|
||
return
|
||
|
||
if not listings:
|
||
st.warning("No listings found.")
|
||
return
|
||
|
||
store = Store(_DB_PATH)
|
||
for listing in listings:
|
||
store.save_listing(listing)
|
||
if listing.seller_platform_id:
|
||
seller = adapter.get_seller(listing.seller_platform_id)
|
||
if seller:
|
||
store.save_seller(seller)
|
||
|
||
scorer = TrustScorer(store)
|
||
trust_scores = scorer.score_batch(listings, query)
|
||
pairs = list(zip(listings, trust_scores))
|
||
|
||
opts = build_filter_options(pairs)
|
||
filter_state = render_filter_sidebar(pairs, opts)
|
||
|
||
sort_col = st.selectbox("Sort by", ["Trust score", "Price ↑", "Price ↓", "Newest"],
|
||
label_visibility="collapsed")
|
||
|
||
def sort_key(pair):
|
||
l, t = pair
|
||
if sort_col == "Trust score": return -(t.composite_score if t else 0)
|
||
if sort_col == "Price ↑": return l.price
|
||
if sort_col == "Price ↓": return -l.price
|
||
return l.listing_age_days
|
||
|
||
sorted_pairs = sorted(pairs, key=sort_key)
|
||
visible = [(l, t) for l, t in sorted_pairs
|
||
if _passes_filter(l, t, store.get_seller("ebay", l.seller_platform_id), filter_state)]
|
||
hidden_count = len(sorted_pairs) - len(visible)
|
||
|
||
st.caption(f"{len(visible)} results · {hidden_count} hidden by filters")
|
||
|
||
for listing, trust in visible:
|
||
seller = store.get_seller("ebay", listing.seller_platform_id)
|
||
render_listing_row(listing, trust, seller)
|
||
|
||
if hidden_count:
|
||
if st.button(f"Show {hidden_count} hidden results"):
|
||
# Track visible by (platform, platform_listing_id) to avoid object-identity comparison
|
||
visible_ids = {(l.platform, l.platform_listing_id) for l, _ in visible}
|
||
for listing, trust in sorted_pairs:
|
||
if (listing.platform, listing.platform_listing_id) not in visible_ids:
|
||
seller = store.get_seller("ebay", listing.seller_platform_id)
|
||
render_listing_row(listing, trust, seller)
|
||
```
|
||
|
||
- [ ] **Step 7: Write app/app.py**

```python
# app/app.py
"""Streamlit entrypoint."""
import streamlit as st

st.set_page_config(
    page_title="Snipe",
    page_icon="🎯",
    layout="wide",
    initial_sidebar_state="expanded",
)

# Imported after set_page_config, which must be the first Streamlit call.
from app.ui.Search import render

render()
```

- [ ] **Step 8: Run all tests**

```bash
conda run -n job-seeker pytest tests/ -v --tb=short
```

Expected: All PASSED

- [ ] **Step 9: Smoke-test the UI**

```bash
cd /Library/Development/CircuitForge/snipe
cp .env.example .env
# Fill in real EBAY_CLIENT_ID and EBAY_CLIENT_SECRET in .env
conda run -n job-seeker streamlit run app/app.py --server.port 8506
```

Open http://localhost:8506, search for "RTX 4090", verify results appear with trust scores.

- [ ] **Step 10: Commit**

```bash
git add app/ui/ app/app.py tests/ui/
git commit -m "feat: add search UI with dynamic filter sidebar and listing rows"
```

---

## Task 7b: First-run wizard stub

**Files:** `app/wizard/__init__.py`, `app/wizard/setup.py`

The spec (section 3.4) includes `app/wizard/` in the directory structure. This task creates a stub that collects eBay credentials on first run and writes `.env`. The full wizard UX (multi-step onboarding flow) is wired in a later pass; this stub ensures the import path exists and the first-run gate works.

- [ ] **Step 1: Write wizard/setup.py**

```python
# app/wizard/setup.py
"""First-run wizard: collect eBay credentials and write .env."""
from __future__ import annotations

from pathlib import Path

import streamlit as st

from circuitforge_core.wizard import BaseWizard


class SnipeSetupWizard(BaseWizard):
    """
    Guides the user through first-run setup:
    1. Enter eBay Client ID + Secret
    2. Choose sandbox vs production
    3. Verify connection (token fetch)
    4. Write .env file
    """

    def __init__(self, env_path: Path = Path(".env")):
        self._env_path = env_path

    def run(self) -> bool:
        """Run the setup wizard. Returns True if setup completed successfully."""
        st.title("🎯 Snipe — First Run Setup")
        st.info(
            "To use Snipe, you need eBay developer credentials. "
            "Register at https://developer.ebay.com and create an app to get your Client ID and Secret."
        )

        client_id = st.text_input("eBay Client ID", type="password")
        client_secret = st.text_input("eBay Client Secret", type="password")
        env = st.selectbox("eBay Environment", ["production", "sandbox"])

        if st.button("Save and verify"):
            if not client_id or not client_secret:
                st.error("Both Client ID and Secret are required.")
                return False
            # TODO: actually verify credentials with a token fetch (docstring
            # step 3) in the full wizard pass; the stub only writes .env.
            self._env_path.write_text(
                f"EBAY_CLIENT_ID={client_id}\n"
                f"EBAY_CLIENT_SECRET={client_secret}\n"
                f"EBAY_ENV={env}\n"
                f"SNIPE_DB=data/snipe.db\n"
            )
            st.success(f".env written to {self._env_path}. Reload the app to begin searching.")
            return True
        return False

    def is_configured(self) -> bool:
        """Return True if .env exists and has eBay credentials."""
        if not self._env_path.exists():
            return False
        text = self._env_path.read_text()
        return "EBAY_CLIENT_ID=" in text and "EBAY_CLIENT_SECRET=" in text
```

```python
# app/wizard/__init__.py
from .setup import SnipeSetupWizard

__all__ = ["SnipeSetupWizard"]
```

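The `is_configured()` check can be verified without Streamlit. A standalone sketch of the same `.env` round-trip logic, with the method body copied out of the class so the snippet needs neither `streamlit` nor `circuitforge_core`, and a temp directory so nothing touches the real project `.env`:

```python
# Standalone sketch of the .env round-trip behind SnipeSetupWizard.is_configured.
# The function body below is a copy of the method, lifted out of the class so
# this snippet has no Streamlit dependency.
import tempfile
from pathlib import Path


def is_configured(env_path: Path) -> bool:
    # mirrors SnipeSetupWizard.is_configured
    if not env_path.exists():
        return False
    text = env_path.read_text()
    return "EBAY_CLIENT_ID=" in text and "EBAY_CLIENT_SECRET=" in text


with tempfile.TemporaryDirectory() as d:
    env = Path(d) / ".env"
    assert not is_configured(env)  # no file yet -> wizard gate fires
    env.write_text("EBAY_CLIENT_ID=abc\nEBAY_CLIENT_SECRET=xyz\nEBAY_ENV=sandbox\n")
    assert is_configured(env)      # both keys present -> gate passes
```

Note the check is intentionally shallow (substring presence, not credential validity); real verification belongs in the token-fetch step of the full wizard.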
- [ ] **Step 2: Wire wizard gate into app.py**

Update `app/app.py` to show the wizard on first run:

```python
# app/app.py
"""Streamlit entrypoint."""
from pathlib import Path

import streamlit as st

from app.wizard import SnipeSetupWizard

st.set_page_config(
    page_title="Snipe",
    page_icon="🎯",
    layout="wide",
    initial_sidebar_state="expanded",
)

wizard = SnipeSetupWizard(env_path=Path(".env"))
if not wizard.is_configured():
    wizard.run()
    st.stop()

from app.ui.Search import render

render()
```

- [ ] **Step 3: Run all tests**

```bash
conda run -n job-seeker pytest tests/ -v --tb=short
```

Expected: All PASSED (no new tests needed — wizard is UI-only code)

- [ ] **Step 4: Commit**

```bash
git add app/wizard/ app/app.py
git commit -m "feat: add first-run setup wizard stub"
```

---

## Task 8: Docker build and manage.sh

- [ ] **Step 1: Test Docker build**

```bash
cd /Library/Development/CircuitForge
docker compose -f snipe/compose.yml build
```

Expected: Build succeeds

- [ ] **Step 2: Test Docker run**

```bash
cd /Library/Development/CircuitForge/snipe
docker compose up -d
```

Open http://localhost:8506, verify UI loads.

- [ ] **Step 3: Test manage.sh**

```bash
./manage.sh status
./manage.sh logs
./manage.sh stop
./manage.sh start
./manage.sh open   # should open http://localhost:8506
```

- [ ] **Step 4: Final commit and push**

```bash
git add .
git commit -m "feat: Snipe MVP v0.1 — eBay trust scorer with faceted filter UI"
git remote add origin https://git.opensourcesolarpunk.com/Circuit-Forge/snipe.git
git push -u origin main
```