Compare commits

8 commits: 3053285ba5 ... 997eb6143e

Commits (newest first): 997eb6143e, 59791fd163, 95ccd8f1b3, ee3c85bfb0, 1672e215b2, a8eb11dc46, 675146ff1a, ac114da5e7

48 changed files with 5324 additions and 2 deletions
.env.example (new file, +4)

```text
EBAY_CLIENT_ID=your-client-id-here
EBAY_CLIENT_SECRET=your-client-secret-here
EBAY_ENV=production  # or: sandbox
SNIPE_DB=data/snipe.db
```
.gitignore (new file, +9)

```text
__pycache__/
*.pyc
*.pyo
.env
*.egg-info/
dist/
.pytest_cache/
data/
.superpowers/
```
Dockerfile (new file, +15)

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install circuitforge-core from sibling directory (compose sets context: ..)
COPY circuitforge-core/ ./circuitforge-core/
RUN pip install --no-cache-dir -e ./circuitforge-core

# Install snipe
COPY snipe/ ./snipe/
WORKDIR /app/snipe
RUN pip install --no-cache-dir -e .

EXPOSE 8506
CMD ["streamlit", "run", "app/app.py", "--server.port=8506", "--server.address=0.0.0.0"]
```
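The Dockerfile's comment about copying `circuitforge-core/` from a sibling directory implies a compose file one level above both repos. A hedged sketch of what that service entry might look like (the file location and service name are assumptions, not confirmed by this diff):

```yaml
# docker-compose.yml in the parent directory holding both repos
services:
  snipe:
    build:
      context: ..                  # so circuitforge-core/ and snipe/ are both in scope
      dockerfile: snipe/Dockerfile
    ports:
      - "8506:8506"
    env_file:
      - snipe/.env
```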
PRIVACY.md (new file, +7)

```markdown
# Privacy Policy

CircuitForge LLC's privacy policy applies to this product and is published at:

**<https://circuitforge.tech/privacy>**

Last reviewed: March 2026.
```
README.md (modified)

````diff
@@ -1,3 +1,74 @@
-# snipe
+# Snipe — Auction Sniping & Bid Management
 
-snipe by Circuit Forge LLC — Auction sniping — CT Bids, antiques, estate auctions, eBay
+> *Part of the Circuit Forge LLC "AI for the tasks you hate most" suite.*
+
+**Status:** Backlog — not yet started. Peregrine must prove the model first.
+
+## What it does
+
+Snipe manages online auction participation: monitoring listings across platforms, scheduling last-second bids, tracking price history to avoid overpaying, and managing the post-win logistics (payment, shipping coordination, provenance documentation for antiques).
+
+The name recalls the origin of the word "sniping": common snipes are notoriously elusive birds, secretive and camouflaged, that flush suddenly from cover. Shooting one required extreme patience, stillness, and a precise last-second shot. That's the auction strategy.
+
+## Primary platforms
+
+- **CT Bids** — Connecticut state surplus and municipal auctions
+- **GovPlanet / IronPlanet** — government surplus equipment
+- **AuctionZip** — antique auction house aggregator (1,000+ houses)
+- **Invaluable / LiveAuctioneers** — fine art and antiques
+- **Bidsquare** — antiques and collectibles
+- **eBay** — general + collectibles
+- **HiBid** — estate auctions
+- **Proxibid** — industrial and collector auctions
+
+## Why it's hard
+
+Online auctions are frustrating because:
+
+- Winning requires being present at the exact closing moment — sometimes 2 AM
+- Platforms vary wildly: some allow proxy bids, some don't; closing times extend on activity
+- Price history is hidden — you don't know if an item is underpriced or a trap
+- Shipping logistics for large / fragile antiques require coordination with the auction house
+- Provenance documentation is inconsistent across auction houses
+
+## Core pipeline
+
+```
+Configure search (categories, keywords, platforms, max price, location)
+→ Monitor listings → Alert on matching items
+→ Human review: approve or skip
+→ Price research: comparable sales history, condition assessment via photos
+→ Schedule snipe bid (configurable: X seconds before close, Y% above current)
+→ Execute bid → Monitor for counter-bid (soft-close extension handling)
+→ Win notification → Payment + shipping coordination workflow
+→ Provenance documentation for antiques
+```
+
+## Bidding strategy engine
+
+- **Hard snipe**: submit bid N seconds before close (default: 8s)
+- **Soft-close handling**: detect if the platform extends on last-minute bids; adjust strategy
+- **Proxy ladder**: set a max and let the engine bid in increments, reserving the snipe for the final window
+- **Reserve detection**: identify the likely reserve price from bid history patterns
+- **Comparable sales**: pull recent auction results for same/similar items across platforms
+
+## Post-win workflow
+
+1. Payment method routing (platform-specific: CC, wire, check)
+2. Shipping quote requests to approved carriers (for freight / large items)
+3. Condition report request from auction house
+4. Provenance packet generation (for antiques / fine art resale or insurance)
+5. Add to inventory (for dealers / collectors tracking portfolio value)
+
+## Product code (license key)
+
+`CFG-SNPE-XXXX-XXXX-XXXX`
+
+## Tech notes
+
+- Shared `circuitforge-core` scaffold
+- Platform adapters: AuctionZip, Invaluable, HiBid, eBay, CT Bids (Playwright + API where available)
+- Bid execution: Playwright automation with precise timing (NTP-synchronized)
+- Soft-close detection: platform-specific rules engine
+- Comparable sales: scrape completed auctions, normalize by condition/provenance
+- Vision module: condition assessment from listing photos (moondream2 / Claude vision)
+- Shipping quote integration: uShip API for freight, FedEx / UPS for parcel
````
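The hard-snipe default in the README (submit a bid N seconds before close, default 8s) comes down to a small scheduling calculation. A hedged sketch, since none of this is implemented yet and `snipe_fire_time` is a hypothetical name:

```python
from datetime import datetime, timedelta, timezone

def snipe_fire_time(close_at: datetime, lead_seconds: int = 8) -> datetime:
    """When to submit the bid: lead_seconds before the listed close."""
    return close_at - timedelta(seconds=lead_seconds)

# A 2 AM close, the README's worst case, fires the bid at 01:59:52:
close = datetime(2026, 3, 1, 2, 0, 0, tzinfo=timezone.utc)
print(snipe_fire_time(close).isoformat())  # 2026-03-01T01:59:52+00:00
```

On a soft-close platform, the same calculation would be re-run against each extended close time rather than the original one.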
app/__init__.py (new file, empty)

app/app.py (new file, +19)

```python
"""Streamlit entrypoint."""
from pathlib import Path
import streamlit as st
from app.wizard import SnipeSetupWizard

st.set_page_config(
    page_title="Snipe",
    page_icon="🎯",
    layout="wide",
    initial_sidebar_state="expanded",
)

wizard = SnipeSetupWizard(env_path=Path(".env"))
if not wizard.is_configured():
    wizard.run()
    st.stop()

from app.ui.Search import render
render()
```
app/db/__init__.py (new file, empty)

app/db/migrations/001_init.sql (new file, +76)

```sql
CREATE TABLE IF NOT EXISTS sellers (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform TEXT NOT NULL,
    platform_seller_id TEXT NOT NULL,
    username TEXT NOT NULL,
    account_age_days INTEGER NOT NULL,
    feedback_count INTEGER NOT NULL,
    feedback_ratio REAL NOT NULL,
    category_history_json TEXT NOT NULL DEFAULT '{}',
    fetched_at TEXT DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(platform, platform_seller_id)
);

CREATE TABLE IF NOT EXISTS listings (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform TEXT NOT NULL,
    platform_listing_id TEXT NOT NULL,
    title TEXT NOT NULL,
    price REAL NOT NULL,
    currency TEXT NOT NULL DEFAULT 'USD',
    condition TEXT,
    seller_platform_id TEXT,
    url TEXT,
    photo_urls TEXT NOT NULL DEFAULT '[]',
    listing_age_days INTEGER DEFAULT 0,
    fetched_at TEXT DEFAULT CURRENT_TIMESTAMP,
    trust_score_id INTEGER REFERENCES trust_scores(id),
    UNIQUE(platform, platform_listing_id)
);

CREATE TABLE IF NOT EXISTS trust_scores (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    listing_id INTEGER NOT NULL REFERENCES listings(id),
    composite_score INTEGER NOT NULL,
    account_age_score INTEGER NOT NULL DEFAULT 0,
    feedback_count_score INTEGER NOT NULL DEFAULT 0,
    feedback_ratio_score INTEGER NOT NULL DEFAULT 0,
    price_vs_market_score INTEGER NOT NULL DEFAULT 0,
    category_history_score INTEGER NOT NULL DEFAULT 0,
    photo_hash_duplicate INTEGER NOT NULL DEFAULT 0,
    photo_analysis_json TEXT,
    red_flags_json TEXT NOT NULL DEFAULT '[]',
    score_is_partial INTEGER NOT NULL DEFAULT 0,
    scored_at TEXT DEFAULT CURRENT_TIMESTAMP
);

CREATE TABLE IF NOT EXISTS market_comps (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    platform TEXT NOT NULL,
    query_hash TEXT NOT NULL,
    median_price REAL NOT NULL,
    sample_count INTEGER NOT NULL,
    fetched_at TEXT DEFAULT CURRENT_TIMESTAMP,
    expires_at TEXT NOT NULL,
    UNIQUE(platform, query_hash)
);

CREATE TABLE IF NOT EXISTS saved_searches (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    query TEXT NOT NULL,
    platform TEXT NOT NULL DEFAULT 'ebay',
    filters_json TEXT NOT NULL DEFAULT '{}',
    created_at TEXT DEFAULT CURRENT_TIMESTAMP,
    last_run_at TEXT
);

-- PhotoHash: perceptual hash store for cross-search dedup (v0.2+). Schema present in v0.1.
CREATE TABLE IF NOT EXISTS photo_hashes (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    listing_id INTEGER NOT NULL REFERENCES listings(id),
    photo_url TEXT NOT NULL,
    phash TEXT NOT NULL,
    first_seen_at TEXT DEFAULT CURRENT_TIMESTAMP,
    UNIQUE(listing_id, photo_url)
);
```
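The `UNIQUE(platform, platform_listing_id)` constraint is what lets the store layer use `INSERT OR REPLACE` as an upsert. A quick in-memory check against a trimmed copy of the `listings` table (trimmed schema, not the full migration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE listings (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        platform TEXT NOT NULL,
        platform_listing_id TEXT NOT NULL,
        title TEXT NOT NULL,
        price REAL NOT NULL,
        UNIQUE(platform, platform_listing_id)
    )
""")
# Two inserts with the same (platform, platform_listing_id): the second
# hits the UNIQUE constraint and replaces the first row.
sql = ("INSERT OR REPLACE INTO listings (platform, platform_listing_id, title, price) "
       "VALUES (?,?,?,?)")
conn.execute(sql, ("ebay", "123", "Old title", 10.0))
conn.execute(sql, ("ebay", "123", "New title", 12.5))
rows = conn.execute("SELECT title, price FROM listings").fetchall()
print(rows)  # [('New title', 12.5)]
```

One consequence worth knowing: `OR REPLACE` deletes the old row, so the replaced listing gets a fresh `id`; anything holding the old rowid (like `trust_scores.listing_id`) would dangle.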
app/db/models.py (new file, +84)

```python
"""Dataclasses for all Snipe domain objects."""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Seller:
    platform: str
    platform_seller_id: str
    username: str
    account_age_days: int
    feedback_count: int
    feedback_ratio: float  # 0.0–1.0
    category_history_json: str  # JSON blob of past category sales
    id: Optional[int] = None
    fetched_at: Optional[str] = None


@dataclass
class Listing:
    platform: str
    platform_listing_id: str
    title: str
    price: float
    currency: str
    condition: str
    seller_platform_id: str
    url: str
    photo_urls: list[str] = field(default_factory=list)
    listing_age_days: int = 0
    id: Optional[int] = None
    fetched_at: Optional[str] = None
    trust_score_id: Optional[int] = None


@dataclass
class TrustScore:
    listing_id: int
    composite_score: int  # 0–100
    account_age_score: int  # 0–20
    feedback_count_score: int  # 0–20
    feedback_ratio_score: int  # 0–20
    price_vs_market_score: int  # 0–20
    category_history_score: int  # 0–20
    photo_hash_duplicate: bool = False
    photo_analysis_json: Optional[str] = None
    red_flags_json: str = "[]"
    score_is_partial: bool = False
    id: Optional[int] = None
    scored_at: Optional[str] = None


@dataclass
class MarketComp:
    platform: str
    query_hash: str
    median_price: float
    sample_count: int
    expires_at: str  # ISO8601 — checked against current time
    id: Optional[int] = None
    fetched_at: Optional[str] = None


@dataclass
class SavedSearch:
    """Schema scaffolded in v0.1; background monitoring wired in v0.2."""
    name: str
    query: str
    platform: str
    filters_json: str = "{}"
    id: Optional[int] = None
    created_at: Optional[str] = None
    last_run_at: Optional[str] = None


@dataclass
class PhotoHash:
    """Perceptual hash store for cross-search dedup (v0.2+). Schema scaffolded in v0.1."""
    listing_id: int
    photo_url: str
    phash: str  # hex string from imagehash
    id: Optional[int] = None
    first_seen_at: Optional[str] = None
```
app/db/store.py (new file, +97)

```python
"""Thin SQLite read/write layer for all Snipe models."""
from __future__ import annotations
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

from circuitforge_core.db import get_connection, run_migrations

from .models import Listing, Seller, TrustScore, MarketComp

MIGRATIONS_DIR = Path(__file__).parent / "migrations"


class Store:
    def __init__(self, db_path: Path):
        self._conn = get_connection(db_path)
        run_migrations(self._conn, MIGRATIONS_DIR)

    # --- Seller ---

    def save_seller(self, seller: Seller) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO sellers "
            "(platform, platform_seller_id, username, account_age_days, "
            "feedback_count, feedback_ratio, category_history_json) "
            "VALUES (?,?,?,?,?,?,?)",
            (seller.platform, seller.platform_seller_id, seller.username,
             seller.account_age_days, seller.feedback_count, seller.feedback_ratio,
             seller.category_history_json),
        )
        self._conn.commit()

    def get_seller(self, platform: str, platform_seller_id: str) -> Optional[Seller]:
        row = self._conn.execute(
            "SELECT platform, platform_seller_id, username, account_age_days, "
            "feedback_count, feedback_ratio, category_history_json, id, fetched_at "
            "FROM sellers WHERE platform=? AND platform_seller_id=?",
            (platform, platform_seller_id),
        ).fetchone()
        if not row:
            return None
        return Seller(*row[:7], id=row[7], fetched_at=row[8])

    # --- Listing ---

    def save_listing(self, listing: Listing) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO listings "
            "(platform, platform_listing_id, title, price, currency, condition, "
            "seller_platform_id, url, photo_urls, listing_age_days) "
            "VALUES (?,?,?,?,?,?,?,?,?,?)",
            (listing.platform, listing.platform_listing_id, listing.title,
             listing.price, listing.currency, listing.condition,
             listing.seller_platform_id, listing.url,
             json.dumps(listing.photo_urls), listing.listing_age_days),
        )
        self._conn.commit()

    def get_listing(self, platform: str, platform_listing_id: str) -> Optional[Listing]:
        row = self._conn.execute(
            "SELECT platform, platform_listing_id, title, price, currency, condition, "
            "seller_platform_id, url, photo_urls, listing_age_days, id, fetched_at "
            "FROM listings WHERE platform=? AND platform_listing_id=?",
            (platform, platform_listing_id),
        ).fetchone()
        if not row:
            return None
        return Listing(
            *row[:8],
            photo_urls=json.loads(row[8]),
            listing_age_days=row[9],
            id=row[10],
            fetched_at=row[11],
        )

    # --- MarketComp ---

    def save_market_comp(self, comp: MarketComp) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO market_comps "
            "(platform, query_hash, median_price, sample_count, expires_at) "
            "VALUES (?,?,?,?,?)",
            (comp.platform, comp.query_hash, comp.median_price,
             comp.sample_count, comp.expires_at),
        )
        self._conn.commit()

    def get_market_comp(self, platform: str, query_hash: str) -> Optional[MarketComp]:
        row = self._conn.execute(
            "SELECT platform, query_hash, median_price, sample_count, expires_at, id, fetched_at "
            "FROM market_comps WHERE platform=? AND query_hash=? AND expires_at > ?",
            (platform, query_hash, datetime.now(timezone.utc).isoformat()),
        ).fetchone()
        if not row:
            return None
        return MarketComp(*row[:5], id=row[5], fetched_at=row[6])
```
app/platforms/__init__.py (new file, +27)

```python
"""PlatformAdapter abstract base and shared types."""
from __future__ import annotations
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Optional
from app.db.models import Listing, Seller


@dataclass
class SearchFilters:
    max_price: Optional[float] = None
    min_price: Optional[float] = None
    condition: Optional[list[str]] = field(default_factory=list)
    location_radius_km: Optional[int] = None


class PlatformAdapter(ABC):
    @abstractmethod
    def search(self, query: str, filters: SearchFilters) -> list[Listing]: ...

    @abstractmethod
    def get_seller(self, seller_platform_id: str) -> Optional[Seller]: ...

    @abstractmethod
    def get_completed_sales(self, query: str) -> list[Listing]:
        """Fetch recently completed/sold listings for price comp data."""
        ...
```
app/platforms/ebay/__init__.py (new file, empty)

app/platforms/ebay/adapter.py (new file, +98)

```python
"""eBay Browse API adapter."""
from __future__ import annotations
import hashlib
from datetime import datetime, timedelta, timezone
from typing import Optional
import requests

from app.db.models import Listing, Seller, MarketComp
from app.db.store import Store
from app.platforms import PlatformAdapter, SearchFilters
from app.platforms.ebay.auth import EbayTokenManager
from app.platforms.ebay.normaliser import normalise_listing, normalise_seller

BROWSE_BASE = {
    "production": "https://api.ebay.com/buy/browse/v1",
    "sandbox": "https://api.sandbox.ebay.com/buy/browse/v1",
}
# Note: seller lookup uses the Browse API with a seller filter, not a separate Seller API.
# The Commerce Identity /user endpoint returns the calling app's own identity (requires
# user OAuth, not app credentials). Seller metadata is extracted from Browse API inline
# seller fields. registrationDate is available in item detail responses via this path.


class EbayAdapter(PlatformAdapter):
    def __init__(self, token_manager: EbayTokenManager, store: Store, env: str = "production"):
        self._tokens = token_manager
        self._store = store
        self._browse_base = BROWSE_BASE[env]

    def _headers(self) -> dict:
        return {"Authorization": f"Bearer {self._tokens.get_token()}"}

    def search(self, query: str, filters: SearchFilters) -> list[Listing]:
        params: dict = {"q": query, "limit": 50}
        filter_parts = []
        if filters.max_price:
            filter_parts.append(f"price:[..{filters.max_price}],priceCurrency:USD")
        if filters.condition:
            cond_map = {"new": "NEW", "used": "USED", "open box": "OPEN_BOX", "for parts": "FOR_PARTS_NOT_WORKING"}
            ebay_conds = [cond_map[c] for c in filters.condition if c in cond_map]
            if ebay_conds:
                filter_parts.append(f"conditions:{{{','.join(ebay_conds)}}}")
        if filter_parts:
            params["filter"] = ",".join(filter_parts)

        resp = requests.get(f"{self._browse_base}/item_summary/search",
                            headers=self._headers(), params=params)
        resp.raise_for_status()
        items = resp.json().get("itemSummaries", [])
        return [normalise_listing(item) for item in items]

    def get_seller(self, seller_platform_id: str) -> Optional[Seller]:
        cached = self._store.get_seller("ebay", seller_platform_id)
        if cached:
            return cached
        try:
            resp = requests.get(
                f"{self._browse_base}/item_summary/search",
                headers={**self._headers(), "X-EBAY-C-MARKETPLACE-ID": "EBAY_US"},
                params={"seller": seller_platform_id, "limit": 1},
            )
            resp.raise_for_status()
            items = resp.json().get("itemSummaries", [])
            if not items:
                return None
            seller = normalise_seller(items[0].get("seller", {}))
            self._store.save_seller(seller)
            return seller
        except Exception:
            return None  # Caller handles None gracefully (partial score)

    def get_completed_sales(self, query: str) -> list[Listing]:
        query_hash = hashlib.md5(query.encode()).hexdigest()
        cached = self._store.get_market_comp("ebay", query_hash)
        if cached:
            return []  # Comp data is used directly; return empty to signal cache hit

        params = {"q": query, "limit": 20, "filter": "buyingOptions:{FIXED_PRICE}"}
        try:
            resp = requests.get(f"{self._browse_base}/item_summary/search",
                                headers=self._headers(), params=params)
            resp.raise_for_status()
            items = resp.json().get("itemSummaries", [])
            listings = [normalise_listing(item) for item in items]
            if listings:
                prices = sorted(l.price for l in listings)
                median = prices[len(prices) // 2]
                comp = MarketComp(
                    platform="ebay",
                    query_hash=query_hash,
                    median_price=median,
                    sample_count=len(prices),
                    expires_at=(datetime.now(timezone.utc) + timedelta(hours=6)).isoformat(),
                )
                self._store.save_market_comp(comp)
            return listings
        except Exception:
            return []
```
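The Browse API filter string that `search()` assembles is easy to inspect in isolation. A standalone sketch of the same string-building (`build_filter` is a hypothetical helper, not part of the adapter):

```python
# Condition names mapped to eBay Browse API condition enums, as in the adapter.
COND_MAP = {"new": "NEW", "used": "USED", "open box": "OPEN_BOX",
            "for parts": "FOR_PARTS_NOT_WORKING"}

def build_filter(max_price=None, conditions=None) -> str:
    parts = []
    if max_price:
        # Browse API range syntax: open lower bound, inclusive upper bound.
        parts.append(f"price:[..{max_price}],priceCurrency:USD")
    ebay_conds = [COND_MAP[c] for c in (conditions or []) if c in COND_MAP]
    if ebay_conds:
        parts.append(f"conditions:{{{','.join(ebay_conds)}}}")
    return ",".join(parts)

print(build_filter(max_price=250, conditions=["used", "open box"]))
# price:[..250],priceCurrency:USD,conditions:{USED,OPEN_BOX}
```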
app/platforms/ebay/auth.py (new file, +46)

```python
"""eBay OAuth2 client credentials token manager."""
from __future__ import annotations
import base64
import time
from typing import Optional
import requests

EBAY_OAUTH_URLS = {
    "production": "https://api.ebay.com/identity/v1/oauth2/token",
    "sandbox": "https://api.sandbox.ebay.com/identity/v1/oauth2/token",
}


class EbayTokenManager:
    """Fetches and caches eBay app-level OAuth tokens. Thread-safe for single process."""

    def __init__(self, client_id: str, client_secret: str, env: str = "production"):
        self._client_id = client_id
        self._client_secret = client_secret
        self._token_url = EBAY_OAUTH_URLS[env]
        self._token: Optional[str] = None
        self._expires_at: float = 0.0

    def get_token(self) -> str:
        """Return a valid access token, fetching or refreshing as needed."""
        if self._token and time.time() < self._expires_at - 60:
            return self._token
        self._fetch_token()
        return self._token  # type: ignore[return-value]

    def _fetch_token(self) -> None:
        credentials = base64.b64encode(
            f"{self._client_id}:{self._client_secret}".encode()
        ).decode()
        resp = requests.post(
            self._token_url,
            headers={
                "Authorization": f"Basic {credentials}",
                "Content-Type": "application/x-www-form-urlencoded",
            },
            data={"grant_type": "client_credentials", "scope": "https://api.ebay.com/oauth/api_scope"},
        )
        resp.raise_for_status()
        data = resp.json()
        self._token = data["access_token"]
        self._expires_at = time.time() + data["expires_in"]
```
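The 60-second early-refresh window in `get_token` can be exercised without hitting eBay by stubbing the fetch. A minimal sketch (`FakeTokenManager` is hypothetical, not part of the module, but its caching rule mirrors the one above):

```python
import time

class FakeTokenManager:
    """Mirrors EbayTokenManager's caching rule with a stubbed token fetch."""
    def __init__(self, ttl_seconds: float):
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0
        self.fetch_count = 0

    def get_token(self) -> str:
        # Reuse the cached token until 60s before expiry, as the real manager does.
        if self._token and time.time() < self._expires_at - 60:
            return self._token
        self.fetch_count += 1
        self._token = f"token-{self.fetch_count}"
        self._expires_at = time.time() + self._ttl
        return self._token

mgr = FakeTokenManager(ttl_seconds=7200)   # eBay app tokens last about 2 hours
assert mgr.get_token() == mgr.get_token()  # second call hits the cache
print(mgr.fetch_count)  # 1

short = FakeTokenManager(ttl_seconds=30)   # expires inside the 60s safety window
short.get_token(); short.get_token()
print(short.fetch_count)  # 2: never cached, always refetched
```

The second case shows why the safety margin matters: any token whose remaining lifetime is under 60 seconds is treated as already expired, so a request never goes out with a token about to lapse mid-flight.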
app/platforms/ebay/normaliser.py (new file, +68)

```python
"""Convert raw eBay API responses into Snipe domain objects."""
from __future__ import annotations
import json
from datetime import datetime, timezone
from app.db.models import Listing, Seller


def normalise_listing(raw: dict) -> Listing:
    price_data = raw.get("price", {})
    photos = []
    if "image" in raw:
        photos.append(raw["image"].get("imageUrl", ""))
    for img in raw.get("additionalImages", []):
        url = img.get("imageUrl", "")
        if url and url not in photos:
            photos.append(url)
    photos = [p for p in photos if p]

    listing_age_days = 0
    created_raw = raw.get("itemCreationDate", "")
    if created_raw:
        try:
            created = datetime.fromisoformat(created_raw.replace("Z", "+00:00"))
            listing_age_days = (datetime.now(timezone.utc) - created).days
        except ValueError:
            pass

    seller = raw.get("seller", {})
    return Listing(
        platform="ebay",
        platform_listing_id=raw["itemId"],
        title=raw.get("title", ""),
        price=float(price_data.get("value", 0)),
        currency=price_data.get("currency", "USD"),
        condition=raw.get("condition", "").lower(),
        seller_platform_id=seller.get("username", ""),
        url=raw.get("itemWebUrl", ""),
        photo_urls=photos,
        listing_age_days=listing_age_days,
    )


def normalise_seller(raw: dict) -> Seller:
    feedback_pct = float(raw.get("feedbackPercentage", "0").strip("%")) / 100.0

    account_age_days = 0
    reg_date_raw = raw.get("registrationDate", "")
    if reg_date_raw:
        try:
            reg_date = datetime.fromisoformat(reg_date_raw.replace("Z", "+00:00"))
            account_age_days = (datetime.now(timezone.utc) - reg_date).days
        except ValueError:
            pass

    category_history = {}
    summary = raw.get("sellerFeedbackSummary", {})
    for entry in summary.get("feedbackByCategory", []):
        category_history[entry.get("categorySite", "")] = int(entry.get("count", 0))

    return Seller(
        platform="ebay",
        platform_seller_id=raw["username"],
        username=raw["username"],
        account_age_days=account_age_days,
        feedback_count=int(raw.get("feedbackScore", 0)),
        feedback_ratio=feedback_pct,
        category_history_json=json.dumps(category_history),
    )
```
app/tiers.py (new file, +34)

```python
"""Snipe feature gates. Delegates to circuitforge_core.tiers."""
from __future__ import annotations
from circuitforge_core.tiers import can_use as _core_can_use, TIERS  # noqa: F401

# Feature key → minimum tier required.
FEATURES: dict[str, str] = {
    # Free tier
    "metadata_trust_scoring": "free",
    "hash_dedup": "free",
    # Paid tier
    "photo_analysis": "paid",
    "serial_number_check": "paid",
    "ai_image_detection": "paid",
    "reverse_image_search": "paid",
    "saved_searches": "paid",
    "background_monitoring": "paid",
}

# Photo analysis features unlock if user has local vision model (moondream2 (MD2) or similar).
LOCAL_VISION_UNLOCKABLE: frozenset[str] = frozenset({
    "photo_analysis",
    "serial_number_check",
})


def can_use(
    feature: str,
    tier: str = "free",
    has_byok: bool = False,
    has_local_vision: bool = False,
) -> bool:
    if has_local_vision and feature in LOCAL_VISION_UNLOCKABLE:
        return True
    return _core_can_use(feature, tier, has_byok=has_byok, _features=FEATURES)
```
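The local-vision override plus tier fallback can be sketched without `circuitforge_core`. A standalone approximation of the gating rule (the tier ordering and core check are assumptions; the real check delegates to `circuitforge_core.tiers.can_use`, whose internals this diff does not show):

```python
FEATURES = {"metadata_trust_scoring": "free", "photo_analysis": "paid",
            "saved_searches": "paid"}
LOCAL_VISION_UNLOCKABLE = {"photo_analysis", "serial_number_check"}
TIER_RANK = {"free": 0, "paid": 1}  # assumed ordering

def can_use(feature: str, tier: str = "free", has_local_vision: bool = False) -> bool:
    # A local vision model unlocks photo features regardless of tier.
    if has_local_vision and feature in LOCAL_VISION_UNLOCKABLE:
        return True
    required = FEATURES.get(feature)
    if required is None:
        return False  # unknown feature: deny
    return TIER_RANK[tier] >= TIER_RANK[required]

print(can_use("photo_analysis"))                         # False on free tier
print(can_use("photo_analysis", has_local_vision=True))  # True: local model unlocks it
print(can_use("saved_searches", tier="paid"))            # True
```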
app/trust/__init__.py (new file, +41)

```python
from .metadata import MetadataScorer
from .photo import PhotoScorer
from .aggregator import Aggregator
from app.db.models import Seller, Listing, TrustScore
from app.db.store import Store
import hashlib


class TrustScorer:
    """Orchestrates metadata + photo scoring for a batch of listings."""

    def __init__(self, store: Store):
        self._store = store
        self._meta = MetadataScorer()
        self._photo = PhotoScorer()
        self._agg = Aggregator()

    def score_batch(
        self,
        listings: list[Listing],
        query: str,
    ) -> list[TrustScore]:
        query_hash = hashlib.md5(query.encode()).hexdigest()
        comp = self._store.get_market_comp("ebay", query_hash)
        market_median = comp.median_price if comp else None

        photo_url_sets = [l.photo_urls for l in listings]
        duplicates = self._photo.check_duplicates(photo_url_sets)

        scores = []
        for listing, is_dup in zip(listings, duplicates):
            seller = self._store.get_seller("ebay", listing.seller_platform_id)
            if seller:
                signal_scores = self._meta.score(seller, market_median, listing.price)
            else:
                signal_scores = {k: None for k in
                                 ["account_age", "feedback_count", "feedback_ratio",
                                  "price_vs_market", "category_history"]}
            trust = self._agg.aggregate(signal_scores, is_dup, seller, listing.id or 0)
            scores.append(trust)
        return scores
```
app/trust/aggregator.py (new file, +56)

```python
"""Composite score and red flag extraction."""
from __future__ import annotations
import json
from typing import Optional
from app.db.models import Seller, TrustScore

HARD_FILTER_AGE_DAYS = 7
HARD_FILTER_BAD_RATIO_MIN_COUNT = 20
HARD_FILTER_BAD_RATIO_THRESHOLD = 0.80


class Aggregator:
    def aggregate(
        self,
        signal_scores: dict[str, Optional[int]],
        photo_hash_duplicate: bool,
        seller: Optional[Seller],
        listing_id: int = 0,
    ) -> TrustScore:
        is_partial = any(v is None for v in signal_scores.values())
        clean = {k: (v if v is not None else 0) for k, v in signal_scores.items()}
        composite = sum(clean.values())

        red_flags: list[str] = []

        # Hard filters
        if seller and seller.account_age_days < HARD_FILTER_AGE_DAYS:
            red_flags.append("new_account")
        if seller and (
            seller.feedback_ratio < HARD_FILTER_BAD_RATIO_THRESHOLD
            and seller.feedback_count > HARD_FILTER_BAD_RATIO_MIN_COUNT
        ):
            red_flags.append("established_bad_actor")

        # Soft flags
        if seller and seller.account_age_days < 30:
            red_flags.append("account_under_30_days")
        if seller and seller.feedback_count < 10:
            red_flags.append("low_feedback_count")
        if clean["price_vs_market"] == 0:
            red_flags.append("suspicious_price")
        if photo_hash_duplicate:
            red_flags.append("duplicate_photo")

        return TrustScore(
            listing_id=listing_id,
            composite_score=composite,
            account_age_score=clean["account_age"],
            feedback_count_score=clean["feedback_count"],
            feedback_ratio_score=clean["feedback_ratio"],
            price_vs_market_score=clean["price_vs_market"],
            category_history_score=clean["category_history"],
            photo_hash_duplicate=photo_hash_duplicate,
            red_flags_json=json.dumps(red_flags),
            score_is_partial=is_partial,
        )
```
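The hard/soft flag logic above flattens out to a short predicate list. A standalone sketch with the thresholds copied from the module (`red_flags` here is a hypothetical free function, not the class method):

```python
# Thresholds copied from aggregator.py above.
AGE_DAYS, BAD_RATIO_COUNT, BAD_RATIO = 7, 20, 0.80

def red_flags(account_age_days: int, feedback_count: int, feedback_ratio: float,
              price_score: int, duplicate_photo: bool) -> list[str]:
    flags = []
    # Hard filters: brand-new account, or an established account with bad feedback.
    if account_age_days < AGE_DAYS:
        flags.append("new_account")
    if feedback_ratio < BAD_RATIO and feedback_count > BAD_RATIO_COUNT:
        flags.append("established_bad_actor")
    # Soft flags.
    if account_age_days < 30:
        flags.append("account_under_30_days")
    if feedback_count < 10:
        flags.append("low_feedback_count")
    if price_score == 0:
        flags.append("suspicious_price")
    if duplicate_photo:
        flags.append("duplicate_photo")
    return flags

# A 3-day-old account with 2 feedbacks and a too-good price trips four flags:
print(red_flags(3, 2, 1.0, 0, False))
# ['new_account', 'account_under_30_days', 'low_feedback_count', 'suspicious_price']
```

Note one subtlety carried over from the real aggregator: a `price_vs_market` score that was `None` (no comp data) is zero-filled before this check, so listings with missing market data can also pick up `suspicious_price`.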
app/trust/metadata.py (new file, +67)

```python
"""Five metadata trust signals, each scored 0–20."""
from __future__ import annotations
import json
from typing import Optional
from app.db.models import Seller

ELECTRONICS_CATEGORIES = {"ELECTRONICS", "COMPUTERS_TABLETS", "VIDEO_GAMES", "CELL_PHONES"}


class MetadataScorer:
    def score(
        self,
        seller: Seller,
        market_median: Optional[float],
        listing_price: float,
    ) -> dict[str, Optional[int]]:
        return {
            "account_age": self._account_age(seller.account_age_days),
            "feedback_count": self._feedback_count(seller.feedback_count),
            "feedback_ratio": self._feedback_ratio(seller.feedback_ratio, seller.feedback_count),
            "price_vs_market": self._price_vs_market(listing_price, market_median),
            "category_history": self._category_history(seller.category_history_json),
        }

    def _account_age(self, days: int) -> int:
        if days < 7: return 0
        if days < 30: return 5
        if days < 90: return 10
        if days < 365: return 15
        return 20

    def _feedback_count(self, count: int) -> int:
        if count < 3: return 0
        if count < 10: return 5
        if count < 50: return 10
        if count < 200: return 15
        return 20

    def _feedback_ratio(self, ratio: float, count: int) -> int:
        if ratio < 0.80 and count > 20: return 0
        if ratio < 0.90: return 5
        if ratio < 0.95: return 10
        if ratio < 0.98: return 15
        return 20

    def _price_vs_market(self, price: float, median: Optional[float]) -> Optional[int]:
        if median is None: return None  # data unavailable → aggregator sets score_is_partial
        if median <= 0: return None
        ratio = price / median
        if ratio < 0.50: return 0   # >50% below = scam
        if ratio < 0.70: return 5   # >30% below = suspicious
        if ratio < 0.85: return 10
        if ratio <= 1.20: return 20
        return 15  # above market = still ok, just expensive

    def _category_history(self, category_history_json: str) -> int:
        try:
            history = json.loads(category_history_json)
        except (ValueError, TypeError):
            return 0
        electronics_sales = sum(
            v for k, v in history.items() if k in ELECTRONICS_CATEGORIES
        )
        if electronics_sales == 0: return 0
        if electronics_sales < 5: return 8
        if electronics_sales < 20: return 14
        return 20
```
74 app/trust/photo.py Normal file

@@ -0,0 +1,74 @@
"""Perceptual hash deduplication within a result set (free tier, v0.1)."""
from __future__ import annotations

import io
from typing import Optional

import requests

try:
    import imagehash
    from PIL import Image
    _IMAGEHASH_AVAILABLE = True
except ImportError:
    _IMAGEHASH_AVAILABLE = False


class PhotoScorer:
    """
    check_duplicates: compare images within a single result set.
    Cross-session dedup (PhotoHash table) is v0.2.
    Vision analysis (real/marketing/EM bag) is v0.2 paid tier.
    """

    def check_duplicates(self, photo_urls_per_listing: list[list[str]]) -> list[bool]:
        """
        Returns a list of booleans parallel to photo_urls_per_listing.
        True = this listing's primary photo is a duplicate of another listing in the set.
        Falls back to URL-equality check if imagehash is unavailable or fetch fails.
        """
        if not _IMAGEHASH_AVAILABLE:
            return self._url_dedup(photo_urls_per_listing)

        primary_urls = [urls[0] if urls else "" for urls in photo_urls_per_listing]

        # Fast path: URL equality is a trivial duplicate signal (no fetch needed)
        url_results = self._url_dedup([[u] for u in primary_urls])

        hashes: list[Optional[str]] = []
        for url in primary_urls:
            hashes.append(self._fetch_hash(url))

        results = list(url_results)  # start from URL-equality results
        seen: dict[str, int] = {}
        for i, h in enumerate(hashes):
            if h is None:
                continue
            if h in seen:
                results[i] = True
                results[seen[h]] = True
            else:
                seen[h] = i
        return results

    def _fetch_hash(self, url: str) -> Optional[str]:
        if not url:
            return None
        try:
            resp = requests.get(url, timeout=5, stream=True)
            resp.raise_for_status()
            img = Image.open(io.BytesIO(resp.content))
            return str(imagehash.phash(img))
        except Exception:
            return None

    def _url_dedup(self, photo_urls_per_listing: list[list[str]]) -> list[bool]:
        seen: set[str] = set()
        results = []
        for urls in photo_urls_per_listing:
            primary = urls[0] if urls else ""
            if primary and primary in seen:
                results.append(True)
            else:
                if primary:
                    seen.add(primary)
                results.append(False)
        return results
127 app/ui/Search.py Normal file

@@ -0,0 +1,127 @@
"""Main search + results page."""
from __future__ import annotations

import json
import os
from pathlib import Path

import streamlit as st

from circuitforge_core.config import load_env

from app.db.store import Store
from app.platforms import SearchFilters
from app.platforms.ebay.auth import EbayTokenManager
from app.platforms.ebay.adapter import EbayAdapter
from app.trust import TrustScorer
from app.ui.components.filters import build_filter_options, render_filter_sidebar, FilterState
from app.ui.components.listing_row import render_listing_row

load_env(Path(".env"))
_DB_PATH = Path(os.environ.get("SNIPE_DB", "data/snipe.db"))
_DB_PATH.parent.mkdir(exist_ok=True)


def _get_adapter() -> EbayAdapter:
    store = Store(_DB_PATH)
    tokens = EbayTokenManager(
        client_id=os.environ.get("EBAY_CLIENT_ID", ""),
        client_secret=os.environ.get("EBAY_CLIENT_SECRET", ""),
        env=os.environ.get("EBAY_ENV", "production"),
    )
    return EbayAdapter(tokens, store, env=os.environ.get("EBAY_ENV", "production"))


def _passes_filter(listing, trust, seller, state: FilterState) -> bool:
    if trust and trust.composite_score < state.min_trust_score:
        return False
    if state.min_price and listing.price < state.min_price:
        return False
    if state.max_price and listing.price > state.max_price:
        return False
    if state.conditions and listing.condition not in state.conditions:
        return False
    if seller:
        if seller.account_age_days < state.min_account_age_days:
            return False
        if seller.feedback_count < state.min_feedback_count:
            return False
        if seller.feedback_ratio < state.min_feedback_ratio:
            return False
    if trust:
        flags = json.loads(trust.red_flags_json or "[]")
        if state.hide_new_accounts and "account_under_30_days" in flags:
            return False
        if state.hide_suspicious_price and "suspicious_price" in flags:
            return False
        if state.hide_duplicate_photos and "duplicate_photo" in flags:
            return False
    return True


def render() -> None:
    st.title("🔍 Snipe — eBay Listing Search")

    col_q, col_price, col_btn = st.columns([4, 2, 1])
    query = col_q.text_input("Search", placeholder="RTX 4090 GPU", label_visibility="collapsed")
    max_price = col_price.number_input("Max price $", min_value=0.0, value=0.0,
                                       step=50.0, label_visibility="collapsed")
    search_clicked = col_btn.button("Search", use_container_width=True)

    if not search_clicked or not query:
        st.info("Enter a search term and click Search.")
        return

    with st.spinner("Fetching listings..."):
        try:
            adapter = _get_adapter()
            filters = SearchFilters(max_price=max_price if max_price > 0 else None)
            listings = adapter.search(query, filters)
            adapter.get_completed_sales(query)  # warm the comps cache
        except Exception as e:
            st.error(f"eBay search failed: {e}")
            return

    if not listings:
        st.warning("No listings found.")
        return

    store = Store(_DB_PATH)
    for listing in listings:
        store.save_listing(listing)
        if listing.seller_platform_id:
            seller = adapter.get_seller(listing.seller_platform_id)
            if seller:
                store.save_seller(seller)

    scorer = TrustScorer(store)
    trust_scores = scorer.score_batch(listings, query)
    pairs = list(zip(listings, trust_scores))

    opts = build_filter_options(pairs)
    filter_state = render_filter_sidebar(pairs, opts)

    sort_col = st.selectbox("Sort by", ["Trust score", "Price ↑", "Price ↓", "Newest"],
                            label_visibility="collapsed")

    def sort_key(pair):
        l, t = pair
        if sort_col == "Trust score": return -(t.composite_score if t else 0)
        if sort_col == "Price ↑": return l.price
        if sort_col == "Price ↓": return -l.price
        return l.listing_age_days

    sorted_pairs = sorted(pairs, key=sort_key)
    visible = [(l, t) for l, t in sorted_pairs
               if _passes_filter(l, t, store.get_seller("ebay", l.seller_platform_id), filter_state)]
    hidden_count = len(sorted_pairs) - len(visible)

    st.caption(f"{len(visible)} results · {hidden_count} hidden by filters")

    for listing, trust in visible:
        seller = store.get_seller("ebay", listing.seller_platform_id)
        render_listing_row(listing, trust, seller)

    if hidden_count:
        if st.button(f"Show {hidden_count} hidden results"):
            visible_ids = {(l.platform, l.platform_listing_id) for l, _ in visible}
            for listing, trust in sorted_pairs:
                if (listing.platform, listing.platform_listing_id) not in visible_ids:
                    seller = store.get_seller("ebay", listing.seller_platform_id)
                    render_listing_row(listing, trust, seller)
0 app/ui/__init__.py Normal file

0 app/ui/components/__init__.py Normal file
118 app/ui/components/filters.py Normal file

@@ -0,0 +1,118 @@
"""Build dynamic filter options from a result set and render the Streamlit sidebar."""
from __future__ import annotations

import json
from dataclasses import dataclass, field
from typing import Optional

import streamlit as st

from app.db.models import Listing, TrustScore


@dataclass
class FilterOptions:
    price_min: float
    price_max: float
    conditions: dict[str, int]   # condition → count
    score_bands: dict[str, int]  # safe/review/skip → count
    has_real_photo: int = 0
    has_em_bag: int = 0
    duplicate_count: int = 0
    new_account_count: int = 0
    free_shipping_count: int = 0


@dataclass
class FilterState:
    min_trust_score: int = 0
    min_price: Optional[float] = None
    max_price: Optional[float] = None
    min_account_age_days: int = 0
    min_feedback_count: int = 0
    min_feedback_ratio: float = 0.0
    conditions: list[str] = field(default_factory=list)
    hide_new_accounts: bool = False
    hide_marketing_photos: bool = False
    hide_suspicious_price: bool = False
    hide_duplicate_photos: bool = False


def build_filter_options(
    pairs: list[tuple[Listing, TrustScore]],
) -> FilterOptions:
    prices = [l.price for l, _ in pairs if l.price > 0]
    conditions: dict[str, int] = {}
    safe = review = skip = 0
    dup_count = new_acct = 0

    for listing, ts in pairs:
        cond = listing.condition or "unknown"
        conditions[cond] = conditions.get(cond, 0) + 1
        if ts.composite_score >= 80:
            safe += 1
        elif ts.composite_score >= 50:
            review += 1
        else:
            skip += 1
        if ts.photo_hash_duplicate:
            dup_count += 1
        flags = json.loads(ts.red_flags_json or "[]")
        if "new_account" in flags or "account_under_30_days" in flags:
            new_acct += 1

    return FilterOptions(
        price_min=min(prices) if prices else 0,
        price_max=max(prices) if prices else 0,
        conditions=conditions,
        score_bands={"safe": safe, "review": review, "skip": skip},
        duplicate_count=dup_count,
        new_account_count=new_acct,
    )


def render_filter_sidebar(
    pairs: list[tuple[Listing, TrustScore]],
    opts: FilterOptions,
) -> FilterState:
    """Render filter sidebar and return current FilterState."""
    state = FilterState()

    st.sidebar.markdown("### Filters")
    st.sidebar.caption(f"{len(pairs)} results")

    state.min_trust_score = st.sidebar.slider("Min trust score", 0, 100, 0, key="min_trust")
    st.sidebar.caption(
        f"🟢 Safe (80+): {opts.score_bands['safe']} "
        f"🟡 Review (50–79): {opts.score_bands['review']} "
        f"🔴 Skip (<50): {opts.score_bands['skip']}"
    )

    st.sidebar.markdown("**Price**")
    col1, col2 = st.sidebar.columns(2)
    state.min_price = col1.number_input("Min $", value=opts.price_min, step=50.0, key="min_p")
    state.max_price = col2.number_input("Max $", value=opts.price_max, step=50.0, key="max_p")

    state.min_account_age_days = st.sidebar.slider(
        "Account age (min days)", 0, 365, 0, key="age")
    state.min_feedback_count = st.sidebar.slider(
        "Feedback count (min)", 0, 500, 0, key="fb_count")
    state.min_feedback_ratio = st.sidebar.slider(
        "Positive feedback % (min)", 0, 100, 0, key="fb_ratio") / 100.0

    if opts.conditions:
        st.sidebar.markdown("**Condition**")
        selected = []
        for cond, count in sorted(opts.conditions.items()):
            if st.sidebar.checkbox(f"{cond} ({count})", value=True, key=f"cond_{cond}"):
                selected.append(cond)
        state.conditions = selected

    st.sidebar.markdown("**Hide if flagged**")
    state.hide_new_accounts = st.sidebar.checkbox(
        f"New account (<30d) ({opts.new_account_count})", key="hide_new")
    state.hide_suspicious_price = st.sidebar.checkbox("Suspicious price", key="hide_price")
    state.hide_duplicate_photos = st.sidebar.checkbox(
        f"Duplicate photo ({opts.duplicate_count})", key="hide_dup")

    if st.sidebar.button("Reset filters", key="reset"):
        st.rerun()

    return state
85 app/ui/components/listing_row.py Normal file

@@ -0,0 +1,85 @@
"""Render a single listing row with trust score, badges, and error states."""
from __future__ import annotations

import json
from typing import Optional

import streamlit as st

from app.db.models import Listing, TrustScore, Seller


def _score_colour(score: int) -> str:
    if score >= 80: return "🟢"
    if score >= 50: return "🟡"
    return "🔴"


def _flag_label(flag: str) -> str:
    labels = {
        "new_account": "✗ New account",
        "account_under_30_days": "⚠ Account <30d",
        "low_feedback_count": "⚠ Low feedback",
        "suspicious_price": "✗ Suspicious price",
        "duplicate_photo": "✗ Duplicate photo",
        "established_bad_actor": "✗ Bad actor",
        "marketing_photo": "✗ Marketing photo",
    }
    return labels.get(flag, f"⚠ {flag}")


def render_listing_row(
    listing: Listing,
    trust: Optional[TrustScore],
    seller: Optional[Seller] = None,
) -> None:
    col_img, col_info, col_score = st.columns([1, 5, 2])

    with col_img:
        if listing.photo_urls:
            # Spec requires graceful 404 handling: show placeholder on failure
            try:
                import requests as _req
                r = _req.head(listing.photo_urls[0], timeout=3, allow_redirects=True)
                if r.status_code == 200:
                    st.image(listing.photo_urls[0], width=80)
                else:
                    st.markdown("📷 *Photo unavailable*")
            except Exception:
                st.markdown("📷 *Photo unavailable*")
        else:
            st.markdown("📷 *No photo*")

    with col_info:
        st.markdown(f"**{listing.title}**")
        if seller:
            age_str = f"{seller.account_age_days // 365}yr" if seller.account_age_days >= 365 \
                else f"{seller.account_age_days}d"
            st.caption(
                f"{seller.username} · {seller.feedback_count} fb · "
                f"{seller.feedback_ratio*100:.1f}% · member {age_str}"
            )
        else:
            st.caption(f"{listing.seller_platform_id} · *Seller data unavailable*")

        if trust:
            flags = json.loads(trust.red_flags_json or "[]")
            if flags:
                badge_html = " ".join(
                    f'<span style="background:#c33;color:#fff;padding:1px 5px;'
                    f'border-radius:3px;font-size:11px">{_flag_label(f)}</span>'
                    for f in flags
                )
                st.markdown(badge_html, unsafe_allow_html=True)
            if trust.score_is_partial:
                st.caption("⚠ Partial score — some data unavailable")
        else:
            st.caption("⚠ Could not score this listing")

    with col_score:
        if trust:
            icon = _score_colour(trust.composite_score)
            st.metric(label="Trust", value=f"{icon} {trust.composite_score}")
        else:
            st.metric(label="Trust", value="?")
        st.markdown(f"**${listing.price:,.0f}**")
        st.markdown(f"[Open eBay ↗]({listing.url})")

    st.divider()
3 app/wizard/__init__.py Normal file

@@ -0,0 +1,3 @@
from .setup import SnipeSetupWizard

__all__ = ["SnipeSetupWizard"]
52 app/wizard/setup.py Normal file

@@ -0,0 +1,52 @@
"""First-run wizard: collect eBay credentials and write .env."""
from __future__ import annotations

from pathlib import Path

import streamlit as st

from circuitforge_core.wizard import BaseWizard


class SnipeSetupWizard(BaseWizard):
    """
    Guides the user through first-run setup:
    1. Enter eBay Client ID (EBAY_CLIENT_ID) + Secret (EBAY_CLIENT_SECRET)
    2. Choose sandbox vs production
    3. Verify connection (token fetch)
    4. Write .env file
    """

    def __init__(self, env_path: Path = Path(".env")):
        self._env_path = env_path

    def run(self) -> bool:
        """Run the setup wizard. Returns True if setup completed successfully."""
        st.title("🎯 Snipe — First Run Setup")
        st.info(
            "To use Snipe, you need eBay developer credentials. "
            "Register at developer.ebay.com and create an app to get your "
            "Client ID (EBAY_CLIENT_ID) and Secret (EBAY_CLIENT_SECRET)."
        )

        client_id = st.text_input("eBay Client ID (EBAY_CLIENT_ID)", type="password")
        client_secret = st.text_input("eBay Client Secret (EBAY_CLIENT_SECRET)", type="password")
        env = st.selectbox("eBay Environment", ["production", "sandbox"])

        if st.button("Save and verify"):
            if not client_id or not client_secret:
                st.error("Both Client ID and Secret are required.")
                return False
            # Write .env
            self._env_path.write_text(
                f"EBAY_CLIENT_ID={client_id}\n"
                f"EBAY_CLIENT_SECRET={client_secret}\n"
                f"EBAY_ENV={env}\n"
                f"SNIPE_DB=data/snipe.db\n"
            )
            st.success(f".env written to {self._env_path}. Reload the app to begin searching.")
            return True
        return False

    def is_configured(self) -> bool:
        """Return True if .env exists and has eBay credentials."""
        if not self._env_path.exists():
            return False
        text = self._env_path.read_text()
        return "EBAY_CLIENT_ID=" in text and "EBAY_CLIENT_SECRET=" in text
8 compose.override.yml Normal file

@@ -0,0 +1,8 @@
services:
  snipe:
    volumes:
      - ../circuitforge-core:/app/circuitforge-core
      - ./app:/app/snipe/app
      - ./data:/app/snipe/data
    environment:
      - STREAMLIT_SERVER_RUN_ON_SAVE=true
10 compose.yml Normal file

@@ -0,0 +1,10 @@
services:
  snipe:
    build:
      context: ..
      dockerfile: snipe/Dockerfile
    ports:
      - "8506:8506"
    env_file: .env
    volumes:
      - ./data:/app/snipe/data
1045 docs/superpowers/plans/2026-03-25-circuitforge-core.md Normal file
File diff suppressed because it is too large

2227 docs/superpowers/plans/2026-03-25-snipe-mvp.md Normal file
File diff suppressed because it is too large
@@ -0,0 +1,322 @@
# Snipe MVP + circuitforge-core Extraction — Design Spec

**Date:** 2026-03-25
**Status:** Approved
**Products:** `snipe` (new), `circuitforge-core` (new), `peregrine` (updated)

---

## 1. Overview

This spec covers two parallel workstreams:

1. **circuitforge-core extraction** — hoist the shared scaffold from Peregrine into a private, locally-installable Python package. Peregrine becomes the first downstream consumer. All future CF products depend on it.
2. **Snipe MVP** — eBay listing monitor + seller trust scorer, built on top of circuitforge-core. Solves the immediate problem: filtering scam accounts when searching for used GPU listings on eBay.

Design principle: *cry once*. Pay the extraction cost now while there are only two products; every product after this benefits for free.

---

## 2. circuitforge-core

### 2.1 Repository

- **Repo:** `git.opensourcesolarpunk.com/Circuit-Forge/circuitforge-core` (private)
- **Local path:** `/Library/Development/CircuitForge/circuitforge-core/`
- **Install method:** `pip install -e ../circuitforge-core` (editable local package; graduate to Forgejo Packages private PyPI at product #3)
- **License:** BSL 1.1 for AI features, MIT for pipeline/utility layers

### 2.2 Package Structure

```
circuitforge-core/
  circuitforge_core/
    pipeline/   # SQLite staging DB, status machine, background task runner
    llm/        # LLM router: fallback chain, BYOK support, vision-aware routing
    vision/     # Vision model wrapper — moondream2 (local) + Claude vision (cloud) [NET-NEW]
    wizard/     # First-run onboarding framework, tier gating, crash recovery
    tiers/      # Tier system (Free/Paid/Premium/Ultra) + Heimdall license client
    db/         # SQLite base class, migration runner
    config/     # Settings loader, env validation, secrets management
  pyproject.toml
  README.md
```

### 2.3 Extraction from Peregrine

The following Peregrine modules are **extracted** (migrated from Peregrine, not net-new):

| Peregrine source | → Core module | Notes |
|---|---|---|
| `app/wizard/` | `circuitforge_core/wizard/` | |
| `scripts/llm_router.py` | `circuitforge_core/llm/router.py` | Path is `scripts/`, not `app/` |
| `app/wizard/tiers.py` | `circuitforge_core/tiers/` | |
| SQLite pipeline base | `circuitforge_core/pipeline/` | |

**`circuitforge_core/vision/`** is **net-new** — no vision module exists in Peregrine to extract. It is built fresh in core.

**Peregrine dependency management:** Peregrine uses `requirements.txt`, not `pyproject.toml`. The migration adds `circuitforge-core` to `requirements.txt` as a local path entry: `-e ../circuitforge-core`. Snipe is greenfield and uses `pyproject.toml` from the start. There is no requirement to migrate Peregrine to `pyproject.toml` as part of this work.

### 2.4 Docker Build Strategy

Docker build contexts cannot reference paths outside the context directory (`COPY ../` is forbidden). Both Peregrine and Snipe resolve this by setting the compose build context to the parent directory:

```yaml
# compose.yml (snipe or peregrine)
services:
  app:
    build:
      context: ..          # /Library/Development/CircuitForge/
      dockerfile: snipe/Dockerfile
```

```dockerfile
# snipe/Dockerfile
COPY circuitforge-core/ ./circuitforge-core/
RUN pip install -e ./circuitforge-core
COPY snipe/ ./snipe/
RUN pip install -e ./snipe
```

In development, `compose.override.yml` bind-mounts `../circuitforge-core` so local edits to core are immediately live without rebuild.

---
## 3. Snipe MVP

### 3.1 Scope

**In (v0.1 MVP):**
- eBay listing search (Browse API + Seller API)
- Metadata trust scoring (free tier)
- Perceptual hash duplicate photo detection within a search result set (free tier)
- Faceted filter UI with dynamic, data-driven filter options and sliders
- On-demand search only
- `SavedSearch` DB schema scaffolded but monitoring not wired up

**Out (future versions):**
- Background polling / saved search alerts (v0.2)
- Photo analysis via vision model — real vs marketing shot, EM bag detection (v0.2, paid)
- Serial number consistency check (v0.2, paid)
- AI-generated image detection (v0.3, paid)
- Reverse image search (v0.4, paid)
- Additional platforms: HiBid, CT Bids, AuctionZip (v0.3+)
- Bid scheduling / snipe execution (v0.4+)

### 3.2 Repository

- **Repo:** `git.opensourcesolarpunk.com/Circuit-Forge/snipe` (public discovery layer)
- **Local path:** `/Library/Development/CircuitForge/snipe/`
- **License:** MIT (discovery/pipeline), BSL 1.1 (AI features)
- **Product code:** `CFG-SNPE`
- **Port:** 8506

### 3.3 Tech Stack

Follows Peregrine as the reference implementation:

- **UI:** Streamlit (Python)
- **DB:** SQLite via `circuitforge_core.db`
- **LLM/Vision:** `circuitforge_core.llm` / `circuitforge_core.vision`
- **Tiers:** `circuitforge_core.tiers`
- **Containerisation:** Docker + `compose.yml`, managed via `manage.sh`
- **Python env:** `conda run -n job-seeker` (shared CF env)

### 3.4 Application Structure

```
snipe/
  app/
    platforms/
      __init__.py        # PlatformAdapter abstract base class
      ebay/
        adapter.py       # eBay Browse API + Seller API client
        auth.py          # OAuth2 client credentials token manager
        normaliser.py    # Raw API response → Listing / Seller schema
    trust/
      __init__.py        # TrustScorer orchestrator
      metadata.py        # Account age, feedback, price vs market, category history
      photo.py           # Perceptual hash dedup (free); vision analysis (paid, v0.2+)
      aggregator.py      # Weighted composite score + red flag extraction
    ui/
      Search.py          # Main search + results page
      components/
        filters.py       # Dynamic faceted filter sidebar
        listing_row.py   # Listing card with trust badge + red flags + error state
    db/
      models.py          # Listing, Seller, Search, TrustScore, SavedSearch schemas
      migrations/
    wizard/              # First-run onboarding (thin wrapper on core wizard)
    snipe/               # Bid engine placeholder (v0.4)
  manage.sh
  compose.yml
  compose.override.yml
  Dockerfile
  pyproject.toml
```

### 3.5 eBay API Credentials

eBay Browse API and Seller API require OAuth 2.0 app-level tokens (client credentials flow — no user auth needed, but a registered eBay developer account and app credentials are required).

**Token lifecycle:**
- App token fetched at startup and cached in memory with expiry
- `auth.py` handles refresh automatically on expiry (tokens last 2 hours)
- On token fetch failure: search fails with a user-visible error; no silent fallback

**Credentials storage:** `.env` file (gitignored), never hardcoded.

```
EBAY_CLIENT_ID=...
EBAY_CLIENT_SECRET=...
EBAY_ENV=production   # or sandbox
```

**Rate limits:** eBay Browse API — 5,000 calls/day (sandbox), higher on production. Completed sales comps results are cached in SQLite with a 6-hour TTL to avoid redundant calls and stay within limits. Cache miss triggers a fresh fetch; fetch failure degrades gracefully (price vs market signal skipped, score noted as partial).

**API split:** `get_seller()` uses the eBay Seller API (different endpoint, same app token). Rate limits are tracked separately. The `PlatformAdapter` interface does not expose this distinction; it is an internal concern of the eBay adapter.
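The token lifecycle above can be sketched as a small in-memory cache with an early-refresh margin. This is an illustrative sketch only: `EbayTokenManager` in `app/platforms/ebay/auth.py` is the real implementation, and the injected `fetch` callable here stands in for the actual OAuth client-credentials request.

```python
import time
from typing import Callable, Optional


class AppTokenCache:
    """Caches an app-level OAuth token in memory, refreshing shortly before expiry.
    Sketch only; not the shipped EbayTokenManager."""

    def __init__(self, fetch: Callable[[], tuple[str, int]], margin_s: int = 60):
        self._fetch = fetch        # returns (token, ttl_seconds); stands in for the OAuth call
        self._margin_s = margin_s  # refresh this many seconds before expiry
        self._token: Optional[str] = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh on first use, or once we are within the margin of expiry.
        # A fetch failure raises to the caller: no silent fallback, per the spec.
        if self._token is None or time.time() >= self._expires_at - self._margin_s:
            self._token, ttl_s = self._fetch()
            self._expires_at = time.time() + ttl_s
        return self._token
```

With eBay's two-hour tokens, `fetch` would return a TTL of 7200 seconds and the cache would make at most one token request every ~119 minutes.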
### 3.6 Data Model

**`Listing`**
```
id, platform, platform_listing_id, title, price, currency,
condition, seller_id, url, photo_urls (JSON), listing_age_days,
fetched_at, trust_score_id
```

**`Seller`**
```
id, platform, platform_seller_id, username,
account_age_days, feedback_count, feedback_ratio,
category_history_json, fetched_at
```

**`TrustScore`**
```
id, listing_id, composite_score,
account_age_score, feedback_count_score, feedback_ratio_score,
price_vs_market_score, category_history_score,
photo_hash_duplicate (bool),
photo_analysis_json (paid, nullable),
red_flags_json, scored_at, score_is_partial (bool)
```

**`MarketComp`** *(price comps cache)*
```
id, platform, query_hash, median_price, sample_count, fetched_at, expires_at
```

**`SavedSearch`** *(schema scaffolded in v0.1; monitoring not wired until v0.2)*
```
id, name, query, platform, filters_json, created_at, last_run_at
```

**`PhotoHash`** *(perceptual hash store for cross-search dedup, v0.2+)*
```
id, listing_id, photo_url, phash, first_seen_at
```
### 3.7 Platform Adapter Interface

```python
class PlatformAdapter:
    def search(self, query: str, filters: SearchFilters) -> list[Listing]: ...
    def get_seller(self, seller_id: str) -> Seller: ...
    def get_completed_sales(self, query: str) -> list[Listing]: ...
```

Adding HiBid or CT Bids later = new adapter, zero changes to trust scorer or UI.

### 3.8 Trust Scorer

#### Metadata Signals (Free)

Five signals, each scored 0–20, equal weight. Composite = sum (0–100).

| Signal | Source | Red flag threshold | Score 0 condition |
|---|---|---|---|
| Account age | eBay Seller API | < 30 days | < 7 days (also hard-filter) |
| Feedback count | eBay Seller API | < 10 | < 3 |
| Feedback ratio | eBay Seller API | < 95% | < 80% with count > 20 |
| Price vs market | Completed sales comps | > 30% below median | > 50% below median |
| Category history | Seller past sales | No prior electronics sales | No prior sales at all |

**Hard filters** (auto-hide regardless of composite score):
- Account age < 7 days
- Feedback ratio < 80% with feedback count > 20

**Partial scores:** If any signal's data source is unavailable (API failure, rate limit), that signal contributes 0 and `score_is_partial = True` is set on the `TrustScore` record. The UI surfaces a "⚠ Partial score" indicator on affected listings.
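The aggregation rule (five equal-weight 0–20 signals summed to 0–100, with an unavailable signal contributing 0 and marking the score partial) reduces to a few lines. A sketch of that rule only, not the shipped `aggregator.py`:

```python
from typing import Optional


def aggregate(signals: dict[str, Optional[int]]) -> tuple[int, bool]:
    """Sum five 0-20 signals into a 0-100 composite. None means the signal's
    data source was unavailable: it contributes 0 and the score is partial."""
    is_partial = any(v is None for v in signals.values())
    composite = sum(v for v in signals.values() if v is not None)
    return composite, is_partial
```

For example, a listing with no comps data (`price_vs_market=None`) but otherwise strong signals still gets a usable composite, just flagged partial.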
#### Photo Signals — Anti-Gotcha Layer
|
||||
|
||||
| Signal | Tier | Version | Method |
|
||||
|---|---|---|---|
|
||||
| Perceptual hash dedup within result set | **Free** | v0.1 MVP | Compare phashes across all listings in the current search response; flag duplicates |
|
||||
| Real photo vs marketing shot | **Paid / Local vision** | v0.2 | Vision model classification |
|
||||
| Open box + EM antistatic bag (proof of possession) | **Paid / Local vision** | v0.2 | Vision model classification |
|
||||
| Serial number consistency across photos | **Paid / Local vision** | v0.2 | Vision model OCR + comparison |
|
||||
| AI-generated image detection | **Paid** | v0.3 | Classifier model |
|
||||
| Reverse image search | **Paid** | v0.4 | Google Lens / TinEye API |
|
||||
|
||||
**v0.1 dedup scope:** Perceptual hash comparison is within the current search result set only (not across historical searches). Cross-session dedup uses the `PhotoHash` table and is a v0.2 feature. Photos are not downloaded to disk in v0.1 — hashes are computed from the image bytes in memory during the search request.
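Production dedup uses `imagehash.phash` (already in the dependency list); the sketch below substitutes a self-contained average hash so the in-memory flow is visible end to end. Listings are flagged when photos from two different listings hash within a small Hamming distance:

```python
import io
from PIL import Image

def ahash(data: bytes) -> int:
    # Stand-in for imagehash.phash: 8x8 grayscale average hash. Computed
    # from bytes in memory; nothing is written to disk (v0.1 rule).
    img = Image.open(io.BytesIO(data)).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / 64
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def flag_duplicates(photos_per_listing: list[list[bytes]], max_distance: int = 5) -> list[bool]:
    # Pairwise comparison within the current result set only.
    hashes = [(i, ahash(p)) for i, photos in enumerate(photos_per_listing) for p in photos]
    flagged = [False] * len(photos_per_listing)
    for a in range(len(hashes)):
        for b in range(a + 1, len(hashes)):
            (i, ha), (j, hb) = hashes[a], hashes[b]
            if i != j and bin(ha ^ hb).count("1") <= max_distance:
                flagged[i] = flagged[j] = True
    return flagged
```

The pairwise loop is quadratic in the total photo count, which is fine for a single search response; cross-session dedup (v0.2) would index hashes in the `PhotoHash` table instead.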
### 3.9 Tier Gating

Photo analysis features use `LOCAL_VISION_UNLOCKABLE` (analogous to `BYOK_UNLOCKABLE` in Peregrine's `tiers.py`): they unlock for free-tier users who have a local vision model (moondream2) configured. This is distinct from BYOK (a text LLM key), which does not unlock vision features.
| Feature | Free | Paid | Local vision unlock |
|---|---|---|---|
| Metadata trust scoring | ✓ | ✓ | — |
| Perceptual hash dedup (within result set) | ✓ | ✓ | — |
| Photo analysis (real/marketing/EM bag) | — | ✓ | ✓ |
| Serial number consistency | — | ✓ | ✓ |
| AI generation detection | — | ✓ | — |
| Reverse image search | — | ✓ | — |
| Saved searches + background monitoring | — | ✓ | — |

Locked features are shown (disabled) in the filter sidebar so free users can see what's available. Clicking a locked filter shows a tier upgrade prompt.
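The gate itself reduces to one function. A sketch with the `can_use` signature and feature names taken from `tests/test_tiers.py`; the `FEATURES` mapping contents beyond the tested names are assumptions:

```python
PAID = "paid"

# feature -> minimum tier required
FEATURES = {
    "metadata_trust_scoring": "free",
    "phash_dedup": "free",
    "photo_analysis": PAID,
    "serial_consistency": PAID,
    "ai_generation_detection": PAID,
    "reverse_image_search": PAID,
    "saved_searches": PAID,
}

# Paid features that a configured local vision model unlocks for free-tier users.
LOCAL_VISION_UNLOCKABLE = {"photo_analysis", "serial_consistency"}

def can_use(feature: str, tier: str = "free",
            has_local_vision: bool = False, has_byok: bool = False) -> bool:
    if FEATURES[feature] == "free" or tier == PAID:
        return True
    # BYOK (text LLM key) deliberately does NOT unlock vision features,
    # so has_byok is accepted but never consulted here.
    return has_local_vision and feature in LOCAL_VISION_UNLOCKABLE
```

Note that `ai_generation_detection` and `reverse_image_search` are absent from `LOCAL_VISION_UNLOCKABLE`, matching the table above: those stay paid-only.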
### 3.10 UI — Results Page

**Search bar:** keywords, max price, condition selector, search button. Sort: trust score (default), price ↑/↓, listing age.

**Filter sidebar** — all options and counts are generated dynamically from the result set. Options with 0 results are hidden (not greyed):

- Trust score — range slider (min/max from results); colour-band summary (safe/review/skip + counts)
- Price — min/max text inputs + market avg/median annotation
- Seller account age — min slider
- Feedback count — min slider
- Positive feedback % — min slider
- Condition — checkboxes (options from data: New, Open Box, Used, For Parts)
- Photo signals — checkboxes: Real photo, EM bag visible, Open box, No AI-generated (locked, paid)
- Hide if flagged — checkboxes: New account (<30d), Marketing photo, >30% below market, Duplicate photo
- Shipping — Free shipping, Local pickup
- Reset filters button
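Deriving the options from the result set is a single pass. A sketch matching the field names exercised by `tests/ui/test_filters.py`; listings and scores are duck-typed here (anything with `price`, `condition`, and `composite_score` works):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FilterOptions:
    price_min: float
    price_max: float
    conditions: dict   # condition -> listing count; zero-count options are simply absent
    score_bands: dict  # safe (80+), review (50-79), skip (<50)

def build_filter_options(pairs):
    # pairs: list of (listing, trust_score) tuples; everything derives from the result set.
    prices = [listing.price for listing, _ in pairs]
    conditions = dict(Counter(listing.condition for listing, _ in pairs))
    bands = {"safe": 0, "review": 0, "skip": 0}
    for _, score in pairs:
        s = score.composite_score
        bands["safe" if s >= 80 else "review" if s >= 50 else "skip"] += 1
    return FilterOptions(min(prices), max(prices), conditions, bands)
```

Because `conditions` is built with a `Counter`, a condition that never appears in the results is never a key, which is exactly the "hidden, not greyed" behaviour the sidebar needs.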
**Listing row (happy path):** thumbnail · title · seller summary (username, feedback count, ratio, tenure) · red flag badges · trust score badge (colour-coded: green 80+, amber 50–79, red <50) · `score_is_partial` indicator if applicable · price · "Open eBay ↗" link. Left border colour matches score band.

**Listing row (error states):**

- Seller data unavailable: seller summary shows "Seller data unavailable" in muted text; affected signals show "–" and the partial score indicator is set
- Photo URL 404: thumbnail shows a placeholder icon; hash dedup is skipped for that photo
- Trust scoring failed entirely: listing shown with a "?" score badge in neutral grey; error logged; "Could not score this listing" tooltip

**Hidden results:** count shown at bottom ("N results hidden by filters · show anyway"). Clicking reveals them in-place at reduced opacity.

---
## 4. Build Order

1. **circuitforge-core** — scaffold repo, extract wizard/llm/tiers/pipeline from Peregrine, build vision module net-new, update Peregrine `requirements.txt`
2. **Snipe scaffold** — repo init, Dockerfile, compose.yml (parent context), manage.sh, DB migrations, wizard first-run, `.env` template
3. **eBay adapter** — OAuth2 token manager, Browse API search, Seller API, completed sales comps with cache
4. **Metadata trust scorer** — all five signals, aggregator, hard filters, partial score handling
5. **Perceptual hash dedup** — in-memory within-result-set comparison
6. **Results UI** — search page, listing rows (happy + error states), dynamic filter sidebar
7. **Tier gating** — lock photo signals, `LOCAL_VISION_UNLOCKABLE` gate, upsell prompts in UI

---

## 5. Documentation Locations

- Product spec: `snipe/docs/superpowers/specs/2026-03-25-snipe-circuitforge-core-design.md` *(this file)*
- Internal copy: `circuitforge-plans/snipe/2026-03-25-snipe-circuitforge-core-design.md`
- Roadmap: `Circuit-Forge/roadmap` issues #14 (snipe) and #21 (circuitforge-core)
- Org-level context: `/Library/Development/CircuitForge/CLAUDE.md`
45 manage.sh Executable file

@@ -0,0 +1,45 @@
```bash
#!/usr/bin/env bash
set -euo pipefail

SERVICE=snipe
PORT=8506
COMPOSE_FILE="compose.yml"

usage() {
  echo "Usage: $0 {start|stop|restart|status|logs|open|update}"
  exit 1
}

cmd="${1:-help}"
shift || true

case "$cmd" in
  start)
    docker compose -f "$COMPOSE_FILE" up -d
    echo "$SERVICE started on http://localhost:$PORT"
    ;;
  stop)
    docker compose -f "$COMPOSE_FILE" down
    ;;
  restart)
    docker compose -f "$COMPOSE_FILE" down
    docker compose -f "$COMPOSE_FILE" up -d
    echo "$SERVICE restarted on http://localhost:$PORT"
    ;;
  status)
    docker compose -f "$COMPOSE_FILE" ps
    ;;
  logs)
    docker compose -f "$COMPOSE_FILE" logs -f "${@:-$SERVICE}"
    ;;
  open)
    xdg-open "http://localhost:$PORT" 2>/dev/null || open "http://localhost:$PORT"
    ;;
  update)
    docker compose -f "$COMPOSE_FILE" pull
    docker compose -f "$COMPOSE_FILE" up -d --build
    ;;
  *)
    usage
    ;;
esac
```
24 pyproject.toml Normal file

@@ -0,0 +1,24 @@
```toml
[build-system]
requires = ["setuptools>=68"]
build-backend = "setuptools.build_meta"

[project]
name = "snipe"
version = "0.1.0"
description = "Auction listing monitor and trust scorer"
requires-python = ">=3.11"
dependencies = [
    "circuitforge-core",
    "streamlit>=1.32",
    "requests>=2.31",
    "imagehash>=4.3",
    "Pillow>=10.0",
    "python-dotenv>=1.0",
]

[tool.setuptools.packages.find]
where = ["."]
include = ["app*"]

[tool.pytest.ini_options]
testpaths = ["tests"]
```
0 tests/__init__.py Normal file

0 tests/db/__init__.py Normal file

78 tests/db/test_store.py Normal file

@@ -0,0 +1,78 @@
```python
import pytest
from pathlib import Path
from app.db.store import Store
from app.db.models import Listing, Seller, TrustScore, MarketComp


@pytest.fixture
def store(tmp_path):
    return Store(tmp_path / "test.db")


def test_store_creates_tables(store):
    # If no exception on init, tables exist
    pass


def test_save_and_get_seller(store):
    seller = Seller(
        platform="ebay",
        platform_seller_id="user123",
        username="techseller",
        account_age_days=730,
        feedback_count=450,
        feedback_ratio=0.991,
        category_history_json="{}",
    )
    store.save_seller(seller)
    result = store.get_seller("ebay", "user123")
    assert result is not None
    assert result.username == "techseller"
    assert result.feedback_count == 450


def test_save_and_get_listing(store):
    listing = Listing(
        platform="ebay",
        platform_listing_id="ebay-123",
        title="RTX 4090 FE",
        price=950.00,
        currency="USD",
        condition="used",
        seller_platform_id="user123",
        url="https://ebay.com/itm/123",
        photo_urls=["https://i.ebayimg.com/1.jpg"],
        listing_age_days=3,
    )
    store.save_listing(listing)
    result = store.get_listing("ebay", "ebay-123")
    assert result is not None
    assert result.title == "RTX 4090 FE"
    assert result.price == 950.00


def test_save_and_get_market_comp(store):
    comp = MarketComp(
        platform="ebay",
        query_hash="abc123",
        median_price=1050.0,
        sample_count=12,
        expires_at="2026-03-26T00:00:00",
    )
    store.save_market_comp(comp)
    result = store.get_market_comp("ebay", "abc123")
    assert result is not None
    assert result.median_price == 1050.0


def test_get_market_comp_returns_none_for_expired(store):
    comp = MarketComp(
        platform="ebay",
        query_hash="expired",
        median_price=900.0,
        sample_count=5,
        expires_at="2020-01-01T00:00:00",  # past
    )
    store.save_market_comp(comp)
    result = store.get_market_comp("ebay", "expired")
    assert result is None
```
0 tests/platforms/__init__.py Normal file

46 tests/platforms/test_ebay_auth.py Normal file

@@ -0,0 +1,46 @@
```python
import time
import requests
from unittest.mock import patch, MagicMock
import pytest
from app.platforms.ebay.auth import EbayTokenManager


def test_fetches_token_on_first_call():
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    mock_resp = MagicMock()
    mock_resp.json.return_value = {"access_token": "tok123", "expires_in": 7200}
    mock_resp.raise_for_status = MagicMock()
    with patch("app.platforms.ebay.auth.requests.post", return_value=mock_resp) as mock_post:
        token = manager.get_token()
        assert token == "tok123"
        assert mock_post.called


def test_returns_cached_token_before_expiry():
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    manager._token = "cached"
    manager._expires_at = time.time() + 3600
    with patch("app.platforms.ebay.auth.requests.post") as mock_post:
        token = manager.get_token()
        assert token == "cached"
        assert not mock_post.called


def test_refreshes_token_after_expiry():
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    manager._token = "old"
    manager._expires_at = time.time() - 1  # expired
    mock_resp = MagicMock()
    mock_resp.json.return_value = {"access_token": "new_tok", "expires_in": 7200}
    mock_resp.raise_for_status = MagicMock()
    with patch("app.platforms.ebay.auth.requests.post", return_value=mock_resp):
        token = manager.get_token()
        assert token == "new_tok"


def test_token_fetch_failure_raises():
    """Spec requires: on token fetch failure, raise immediately — no silent fallback."""
    manager = EbayTokenManager(client_id="id", client_secret="secret", env="sandbox")
    with patch("app.platforms.ebay.auth.requests.post", side_effect=requests.RequestException("network error")):
        with pytest.raises(requests.RequestException):
            manager.get_token()
```
57 tests/platforms/test_ebay_normaliser.py Normal file

@@ -0,0 +1,57 @@
```python
import pytest
from app.platforms.ebay.normaliser import normalise_listing, normalise_seller


def test_normalise_listing_maps_fields():
    raw = {
        "itemId": "v1|12345|0",
        "title": "RTX 4090 GPU",
        "price": {"value": "950.00", "currency": "USD"},
        "condition": "USED",
        "seller": {"username": "techguy", "feedbackScore": 300, "feedbackPercentage": "99.1"},
        "itemWebUrl": "https://ebay.com/itm/12345",
        "image": {"imageUrl": "https://i.ebayimg.com/1.jpg"},
        "additionalImages": [{"imageUrl": "https://i.ebayimg.com/2.jpg"}],
        "itemCreationDate": "2026-03-20T00:00:00.000Z",
    }
    listing = normalise_listing(raw)
    assert listing.platform == "ebay"
    assert listing.platform_listing_id == "v1|12345|0"
    assert listing.title == "RTX 4090 GPU"
    assert listing.price == 950.0
    assert listing.condition == "used"
    assert listing.seller_platform_id == "techguy"
    assert "https://i.ebayimg.com/1.jpg" in listing.photo_urls
    assert "https://i.ebayimg.com/2.jpg" in listing.photo_urls


def test_normalise_listing_handles_missing_images():
    raw = {
        "itemId": "v1|999|0",
        "title": "GPU",
        "price": {"value": "100.00", "currency": "USD"},
        "condition": "NEW",
        "seller": {"username": "u"},
        "itemWebUrl": "https://ebay.com/itm/999",
    }
    listing = normalise_listing(raw)
    assert listing.photo_urls == []


def test_normalise_seller_maps_fields():
    raw = {
        "username": "techguy",
        "feedbackScore": 300,
        "feedbackPercentage": "99.1",
        "registrationDate": "2020-03-01T00:00:00.000Z",
        "sellerFeedbackSummary": {
            "feedbackByCategory": [
                {"transactionPercent": "95.0", "categorySite": "ELECTRONICS", "count": "50"}
            ]
        },
    }
    seller = normalise_seller(raw)
    assert seller.username == "techguy"
    assert seller.feedback_count == 300
    assert seller.feedback_ratio == pytest.approx(0.991, abs=0.001)
    assert seller.account_age_days > 0
```
23 tests/test_tiers.py Normal file

@@ -0,0 +1,23 @@
```python
from app.tiers import can_use, FEATURES, LOCAL_VISION_UNLOCKABLE


def test_metadata_scoring_is_free():
    assert can_use("metadata_trust_scoring", tier="free") is True


def test_photo_analysis_is_paid():
    assert can_use("photo_analysis", tier="free") is False
    assert can_use("photo_analysis", tier="paid") is True


def test_local_vision_unlocks_photo_analysis():
    assert can_use("photo_analysis", tier="free", has_local_vision=True) is True


def test_byok_does_not_unlock_photo_analysis():
    assert can_use("photo_analysis", tier="free", has_byok=True) is False


def test_saved_searches_require_paid():
    assert can_use("saved_searches", tier="free") is False
    assert can_use("saved_searches", tier="paid") is True
```
0 tests/trust/__init__.py Normal file

52 tests/trust/test_aggregator.py Normal file

@@ -0,0 +1,52 @@
```python
from app.db.models import Seller
from app.trust.aggregator import Aggregator


def test_composite_sum_of_five_signals():
    agg = Aggregator()
    scores = {
        "account_age": 18, "feedback_count": 16,
        "feedback_ratio": 20, "price_vs_market": 15,
        "category_history": 14,
    }
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=None)
    assert result.composite_score == 83


def test_hard_filter_new_account():
    agg = Aggregator()
    scores = {k: 20 for k in ["account_age", "feedback_count",
                              "feedback_ratio", "price_vs_market", "category_history"]}
    young_seller = Seller(
        platform="ebay", platform_seller_id="u", username="u",
        account_age_days=3, feedback_count=0,
        feedback_ratio=1.0, category_history_json="{}",
    )
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=young_seller)
    assert "new_account" in result.red_flags_json


def test_hard_filter_bad_actor_established_account():
    """Established account (count > 20) with very bad ratio → hard filter."""
    agg = Aggregator()
    scores = {k: 10 for k in ["account_age", "feedback_count",
                              "feedback_ratio", "price_vs_market", "category_history"]}
    bad_seller = Seller(
        platform="ebay", platform_seller_id="u", username="u",
        account_age_days=730, feedback_count=25,  # count > 20
        feedback_ratio=0.70,  # ratio < 80% → hard filter
        category_history_json="{}",
    )
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=bad_seller)
    assert "established_bad_actor" in result.red_flags_json


def test_partial_score_flagged_when_signals_missing():
    agg = Aggregator()
    scores = {
        "account_age": 18, "feedback_count": None,  # None = unavailable
        "feedback_ratio": 20, "price_vs_market": 15,
        "category_history": 14,
    }
    result = agg.aggregate(scores, photo_hash_duplicate=False, seller=None)
    assert result.score_is_partial is True
```
45 tests/trust/test_metadata.py Normal file

@@ -0,0 +1,45 @@
```python
from app.db.models import Seller
from app.trust.metadata import MetadataScorer


def _seller(**kwargs) -> Seller:
    defaults = dict(
        platform="ebay", platform_seller_id="u", username="u",
        account_age_days=730, feedback_count=450,
        feedback_ratio=0.991, category_history_json='{"ELECTRONICS": 30}',
    )
    defaults.update(kwargs)
    return Seller(**defaults)


def test_established_seller_scores_high():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(), market_median=1000.0, listing_price=950.0)
    total = sum(scores.values())
    assert total >= 80


def test_new_account_scores_zero_on_age():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(account_age_days=3), market_median=1000.0, listing_price=950.0)
    assert scores["account_age"] == 0


def test_low_feedback_count_scores_low():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(feedback_count=2), market_median=1000.0, listing_price=950.0)
    assert scores["feedback_count"] < 10


def test_suspicious_price_scores_zero():
    scorer = MetadataScorer()
    # 60% below market → zero
    scores = scorer.score(_seller(), market_median=1000.0, listing_price=400.0)
    assert scores["price_vs_market"] == 0


def test_no_market_data_returns_none():
    scorer = MetadataScorer()
    scores = scorer.score(_seller(), market_median=None, listing_price=950.0)
    # None signals "data unavailable" — aggregator will set score_is_partial=True
    assert scores["price_vs_market"] is None
```
24 tests/trust/test_photo.py Normal file

@@ -0,0 +1,24 @@
```python
from app.trust.photo import PhotoScorer


def test_no_duplicates_in_single_listing_result():
    scorer = PhotoScorer()
    photo_urls_per_listing = [
        ["https://img.com/a.jpg", "https://img.com/b.jpg"],
        ["https://img.com/c.jpg"],
    ]
    # All unique images — no duplicates
    results = scorer.check_duplicates(photo_urls_per_listing)
    assert all(not r for r in results)


def test_duplicate_photo_flagged():
    scorer = PhotoScorer()
    # Same URL in two listings = trivially duplicate (hash will match)
    photo_urls_per_listing = [
        ["https://img.com/same.jpg"],
        ["https://img.com/same.jpg"],
    ]
    results = scorer.check_duplicates(photo_urls_per_listing)
    # At least one of the two listings must be flagged
    assert results[0] is True or results[1] is True
```
0 tests/ui/__init__.py Normal file

38 tests/ui/test_filters.py Normal file

@@ -0,0 +1,38 @@
```python
from app.db.models import Listing, TrustScore
from app.ui.components.filters import build_filter_options


def _listing(price, condition, score):
    return (
        Listing("ebay", "1", "GPU", price, "USD", condition, "u", "https://ebay.com", [], 1),
        TrustScore(0, score, 10, 10, 10, 10, 10),
    )


def test_price_range_from_results():
    pairs = [_listing(500, "used", 80), _listing(1200, "new", 60)]
    opts = build_filter_options(pairs)
    assert opts.price_min == 500
    assert opts.price_max == 1200


def test_conditions_from_results():
    pairs = [_listing(500, "used", 80), _listing(1200, "new", 60), _listing(800, "used", 70)]
    opts = build_filter_options(pairs)
    assert "used" in opts.conditions
    assert opts.conditions["used"] == 2
    assert opts.conditions["new"] == 1


def test_missing_condition_not_included():
    pairs = [_listing(500, "used", 80)]
    opts = build_filter_options(pairs)
    assert "new" not in opts.conditions


def test_trust_score_bands():
    pairs = [_listing(500, "used", 85), _listing(700, "new", 60), _listing(400, "used", 20)]
    opts = build_filter_options(pairs)
    assert opts.score_bands["safe"] == 1  # 80+
    assert opts.score_bands["review"] == 1  # 50–79
    assert opts.score_bands["skip"] == 1  # <50
```