Compare commits


15 commits
v0.5.1 ... main

Author SHA1 Message Date
7abc765fe7 Merge branch 'feature/perf-pool-cache'
2026-04-20 12:10:16 -07:00
0ec29f0551 feat(scraper): pre-warmed Chromium browser pool (BROWSER_POOL_SIZE=2 default) 2026-04-20 12:09:09 -07:00
29d2033ef2 feat: browser pool + search result cache (#47, #48)
2026-04-20 11:57:56 -07:00
a83e0957e2 feat(api): short-TTL search result cache (SEARCH_CACHE_TTL_S=300 default) 2026-04-20 11:53:27 -07:00
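The commit message only names the knob (`SEARCH_CACHE_TTL_S=300`), but the behavior it describes — an in-memory result cache keyed by query with a short expiry — can be sketched roughly as follows. `_search_cache` and `cached_search` are illustrative names for this sketch, not identifiers from the diff:

```python
import time

SEARCH_CACHE_TTL_S = 300  # default; the commit makes this env-overridable

# query -> (results, expiry timestamp)
_search_cache: dict[str, tuple[list, float]] = {}

def cached_search(query: str, do_search) -> list:
    """Return cached results for *query* if still fresh, else search and cache."""
    hit = _search_cache.get(query)
    if hit and time.time() < hit[1]:
        return hit[0]
    results = do_search(query)
    _search_cache[query] = (results, time.time() + SEARCH_CACHE_TTL_S)
    return results
```

A 300s TTL is long enough to absorb repeated identical searches (refreshes, back-navigation) without serving stale listings for long.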
844721c6fd feat: near-term UX batch -- URL normalization, currency preference, async search/SSE
2026-04-20 11:04:52 -07:00
dca3c3f50b feat(prefs): display.currency preference with live exchange rate conversion
- Backend: validate display.currency against 10 supported ISO 4217 codes
  (USD, GBP, EUR, CAD, AUD, JPY, CHF, MXN, BRL, INR); return 400 on
  unsupported code with a clear message listing accepted values
- Frontend: useCurrency composable fetches rates from open.er-api.com
  with 1-hour module-level cache and in-flight deduplication; falls back
  to USD display on network failure
- Preferences store: adds display.currency with localStorage fallback for
  anonymous users and localStorage-to-DB migration for newly logged-in users
- ListingCard: price and market price now convert from USD using live rates,
  showing USD synchronously while rates load then updating reactively
- Settings UI: currency selector dropdown in Appearance section using
  theme-aware CSS classes; available to all users (anon via localStorage,
  logged-in via DB preference)
- Tests: 6 Python tests for the PATCH /api/preferences currency endpoint
  (including ordering-safe fixture using patch.object on _LOCAL_SNIPE_DB);
  14 Vitest tests for convertFromUSD, formatPrice, and formatPriceUSD
2026-04-20 11:02:59 -07:00
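A minimal sketch of the backend validation the first bullet describes. Only the ten-code allowlist and the reject-with-a-message-listing-accepted-values behavior come from the commit message; the function name and exact error wording are assumptions (in the real endpoint the error maps to a 400 response):

```python
SUPPORTED_CURRENCIES = {
    "USD", "GBP", "EUR", "CAD", "AUD", "JPY", "CHF", "MXN", "BRL", "INR",
}

def validate_display_currency(code: str) -> str:
    """Normalize and validate a display.currency value.

    Returns the upper-cased ISO 4217 code, or raises ValueError listing
    the accepted values (the endpoint would turn this into a 400).
    """
    normalized = code.strip().upper()
    if normalized not in SUPPORTED_CURRENCIES:
        raise ValueError(
            f"Unsupported currency {normalized!r}; accepted: "
            + ", ".join(sorted(SUPPORTED_CURRENCIES))
        )
    return normalized
```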
d5912080fb feat(search): async endpoint + SSE streaming for initial results
Add GET /api/search/async that returns HTTP 202 immediately and streams
scrape results via SSE to avoid nginx 120s timeouts on slow eBay searches.

Backend:
- New GET /api/search/async endpoint submits scraping to ThreadPoolExecutor
  and returns {session_id, status: "queued"} before scrape begins
- Background worker runs same pipeline as synchronous search, pushing
  typed SSE events: "listings" (initial batch), "update" (enrichment),
  "market_price", and None sentinel
- Existing GET /api/updates/{session_id} passes new event types through
  as-is (already a generic pass-through); deadline extended to 150s
- Module-level _search_executor (max_workers=4) caps concurrent scrape sessions

Frontend (search.ts):
- search() now calls /api/search/async instead of /api/search
- loading stays true until first "listings" SSE event arrives
- _openUpdates() handles new typed events: "listings", "market_price",
  "update"; legacy untyped enrichment events still handled
- cancelSearch() now also closes any open SSE stream

Tests: tests/test_async_search.py (6 tests) covering 202 response,
session_id registration in _update_queues, empty query path, UUID format,
and no-Chromium guarantee. All 159 pre-existing tests still pass.

Closes #49. Also closes Forgejo issue #1 (SSE enrichment streaming, already
implemented; async search completes the picture).
2026-04-20 10:57:32 -07:00
2e0a49bc12 docs(config): add cf_text trunk service backend to llm.yaml.example
Documents the cf-orch allocation pattern (cf_text openai_compat backend
with cf_orch block). Snipe's trust query builder can route through
cf-text when CF_ORCH_URL is set, rather than hitting ollama directly.
2026-04-20 10:56:23 -07:00
df4610c57b feat(search): normalize eBay listing + checkout URLs as item lookup
When the user pastes an eBay listing URL (www.ebay.com/itm/...) or an
eBay checkout URL (pay.ebay.com/rxo?itemId=...) into the search field,
extract the numeric item ID and use it as the search query.

Supported URL patterns:
- https://www.ebay.com/itm/Title-Slug/123456789012
- https://www.ebay.com/itm/123456789012
- https://ebay.com/itm/123456789012
- https://pay.ebay.com/rxo?action=view&sessionid=...&itemId=123456789012
- https://pay.ebay.com/rxo/view?itemId=123456789012

Closes #42
2026-04-20 10:49:17 -07:00
349cff8c50 chore: ignore .worktrees/ directory 2026-04-20 10:45:39 -07:00
90f72d6e53 feat(config): add CF_APP_NAME for cf-orch analytics attribution 2026-04-20 07:03:18 -07:00
e539427bec fix: catch sqlite3.OperationalError in search post-processing
Under high concurrency (100+ users), shared_db write contention causes
`database is locked` errors in the unguarded post-scrape block. These were
surfacing as 500s because there was no exception handler after line 663.

Now catches OperationalError and returns raw listings with empty trust
scores/sellers (degraded mode) instead of crashing. The SSE queue entry
is cleaned up on this path so no orphaned queue accumulates.

Root cause: shared_db (sellers, market_comps) is SQLite; at 100 concurrent
writers the WAL write queue exceeds the 30s busy timeout. Long-term fix
is migrating shared state to Postgres (see snipe#NN).

Refs: infra#12 load test Phase 2 spike findings
2026-04-19 21:26:20 -07:00
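The degraded-mode behavior can be pictured as a guard like this. The function and field names are illustrative only; the real handler also cleans up the SSE queue entry on this path:

```python
import sqlite3

def post_process_listings(listings, enrich):
    """Run trust-score enrichment, degrading gracefully under DB contention.

    If shared_db is locked (sqlite3.OperationalError, e.g. "database is
    locked" when the WAL write queue backs up), return the raw listings
    with empty trust data instead of letting a 500 propagate.
    """
    try:
        return enrich(listings)
    except sqlite3.OperationalError:
        return [
            {**listing, "trust_score": None, "seller": None}
            for listing in listings
        ]
```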
ed6d509a26 fix: authenticate eBay public key fetch + add webhook health endpoint
Fixes recurring `400 Missing access token` errors in Snipe logs.
`_fetch_public_key()` was making unauthenticated GET requests to
eBay's Notification API (`/commerce/notification/v1/public_key/{kid}`),
which requires an app-level Bearer token (client_credentials grant).

Wires in the existing `EbayTokenManager` as a lazy module-level
singleton so every public key fetch carries a valid OAuth token.

Also adds `GET /api/ebay/webhook-health` for Uptime Kuma compliance
monitoring — returns 200 + status dict when all five required env vars
are present, 500 with missing var names otherwise.

Runbook: circuitforge-plans/snipe/ebay-webhook-compliance-runbook.md
Kuma monitor: id=19 on heimdall status page (Snipe group)
2026-04-18 22:20:29 -07:00
16cd32b0db fix: body background follows theme tokens + Plausible analytics
- theme.css: add background: var(--color-surface) to body so it responds
  to theme changes (was hardcoded #0d1117 via FOFT guard in index.html,
  causing mixed dark/light on light theme)
- index.html: add Plausible analytics snippet (cookie-free, self-hosted,
  skips localhost; reports to hostname + circuitforge.tech rollup)
- index.html: clarify FOFT guard comment — bundle overrides both html
  and body once loaded
2026-04-17 03:00:16 -07:00
dbe9aaa00b feat: add Plausible analytics to Vue SPA and docs
2026-04-16 21:15:56 -07:00
22 changed files with 3331 additions and 186 deletions


@@ -19,6 +19,25 @@ EBAY_SANDBOX_CERT_ID=
# production | sandbox
EBAY_ENV=production
# ── eBay OAuth — Authorization Code (user account connection) ─────────────────
# Enables paid-tier users to connect their personal eBay account for instant
# trust scoring via Trading API GetUser (account age + per-category feedback).
# Without this, Snipe falls back to Shopping API + Playwright scraping.
#
# Setup steps:
# 1. Go to https://developer.ebay.com/my/keys → select your Production app
# 2. Under "Auth Accepted URL / RuName", create a new entry:
# - Callback URL: https://your-domain/api/ebay/callback
# (e.g. https://menagerie.circuitforge.tech/snipe/api/ebay/callback)
# - Snipe generates the redirect automatically — just register the URL above
# 3. Copy the RuName value (looks like "YourName-AppName-PRD-xxx-yyy")
# and paste it as EBAY_RUNAME below.
# 4. Set EBAY_OAUTH_REDIRECT_URI to the same HTTPS callback URL.
#
# Self-hosted: your callback URL must be HTTPS and publicly reachable.
# EBAY_RUNAME=YourName-AppName-PRD-xxxxxxxx-xxxxxxxx
# EBAY_OAUTH_REDIRECT_URI=https://your-domain/api/ebay/callback
# ── eBay Account Deletion Webhook ──────────────────────────────────────────────
# Register endpoint at https://developer.ebay.com/my/notification — required for
# production key activation. Set EBAY_NOTIFICATION_ENDPOINT to the public HTTPS
@@ -32,6 +51,9 @@ EBAY_WEBHOOK_VERIFY_SIGNATURES=true
# ── Database ───────────────────────────────────────────────────────────────────
SNIPE_DB=data/snipe.db
# Product identifier reported in cf-orch coordinator analytics for per-app breakdown
CF_APP_NAME=snipe
# ── Cloud mode (managed / menagerie instance only) ─────────────────────────────
# Leave unset for self-hosted / local use. When set, per-user DB isolation
# and Heimdall licensing are enabled. compose.cloud.yml sets CLOUD_MODE=true

.gitignore

@@ -10,3 +10,4 @@ data/
web/node_modules/
web/dist/
config/llm.yaml
.worktrees/


@@ -33,6 +33,7 @@ from cryptography.hazmat.primitives.serialization import load_pem_public_key
from fastapi import APIRouter, Header, HTTPException, Request
from app.db.store import Store
from app.platforms.ebay.auth import EbayTokenManager
log = logging.getLogger(__name__)
@@ -40,6 +41,24 @@ router = APIRouter()
_DB_PATH = Path(os.environ.get("SNIPE_DB", "data/snipe.db"))
# ── App-level token manager ───────────────────────────────────────────────────
# Lazily initialized from env vars; shared across all webhook requests.
# The Notification public_key endpoint requires a Bearer app token.
_app_token_manager: EbayTokenManager | None = None
def _get_app_token() -> str | None:
    """Return a valid eBay app-level Bearer token, or None if creds are absent."""
    global _app_token_manager
    client_id = (os.environ.get("EBAY_APP_ID") or os.environ.get("EBAY_CLIENT_ID", "")).strip()
    client_secret = (os.environ.get("EBAY_CERT_ID") or os.environ.get("EBAY_CLIENT_SECRET", "")).strip()
    if not client_id or not client_secret:
        return None
    if _app_token_manager is None:
        _app_token_manager = EbayTokenManager(client_id, client_secret)
    return _app_token_manager.get_token()
# ── Public-key cache ──────────────────────────────────────────────────────────
# eBay key rotation is rare; 1-hour TTL is appropriate.
_KEY_CACHE_TTL = 3600
@@ -58,7 +77,14 @@ def _fetch_public_key(kid: str) -> bytes:
         return cached[0]
     key_url = _EBAY_KEY_URL.format(kid=kid)
-    resp = requests.get(key_url, timeout=10)
+    headers: dict[str, str] = {}
+    app_token = _get_app_token()
+    if app_token:
+        headers["Authorization"] = f"Bearer {app_token}"
+    else:
+        log.warning("public_key fetch: no app credentials — request will likely fail")
+    resp = requests.get(key_url, headers=headers, timeout=10)
     if not resp.ok:
         log.error("public key fetch failed: %s %s — body: %s", resp.status_code, key_url, resp.text[:500])
         resp.raise_for_status()
@@ -68,6 +94,42 @@ def _fetch_public_key(kid: str) -> bytes:
    return pem_bytes

# ── GET — webhook health check ───────────────────────────────────────────────
@router.get("/api/ebay/webhook-health")
def ebay_webhook_health() -> dict:
    """Lightweight health check for eBay webhook compliance monitoring.

    Returns 200 + status dict when the webhook is fully configured.
    Returns 500 when required env vars are missing.
    Intended for Uptime Kuma or similar uptime monitors.
    """
    token = os.environ.get("EBAY_NOTIFICATION_TOKEN", "")
    endpoint = os.environ.get("EBAY_NOTIFICATION_ENDPOINT", "")
    client_id = (os.environ.get("EBAY_APP_ID") or os.environ.get("EBAY_CLIENT_ID", "")).strip()
    client_secret = (os.environ.get("EBAY_CERT_ID") or os.environ.get("EBAY_CLIENT_SECRET", "")).strip()
    missing = [
        name for name, val in [
            ("EBAY_NOTIFICATION_TOKEN", token),
            ("EBAY_NOTIFICATION_ENDPOINT", endpoint),
            ("EBAY_APP_ID / EBAY_CLIENT_ID", client_id),
            ("EBAY_CERT_ID / EBAY_CLIENT_SECRET", client_secret),
        ] if not val
    ]
    if missing:
        log.error("ebay_webhook_health: missing config: %s", missing)
        raise HTTPException(
            status_code=500,
            detail=f"Webhook misconfigured — missing: {missing}",
        )
    return {
        "status": "ok",
        "endpoint": endpoint,
        "signature_verification": os.environ.get("EBAY_WEBHOOK_VERIFY_SIGNATURES", "true"),
    }
# ── GET — challenge verification ──────────────────────────────────────────────
@router.get("/api/ebay/account-deletion")

(File diff suppressed because it is too large.)


@@ -0,0 +1,394 @@
"""Pre-warmed Chromium browser pool for the eBay scraper.

Eliminates cold-start latency (5-10s per call) by keeping a small pool of
long-lived Playwright browser instances with fresh contexts ready to serve.

Key design:
- Pool slots: ``(xvfb_proc, pw_instance, browser, context, display_num, last_used_ts)``.
  One headed Chromium browser per slot keeps the Kasada fingerprint clean.
- Thread safety: ``queue.Queue`` with blocking get (timeout=3s before fresh fallback).
- Replenishment: after each use, the dirty context is closed and a new context is
  opened on the *same* browser, then returned to the queue. Browser launch overhead
  is only paid at startup and during idle-cleanup replenishment.
- Idle cleanup: daemon thread closes slots idle for >5 minutes to avoid memory leaks
  when the service is quiet.
- Graceful degradation: if Playwright / Xvfb is unavailable (host-side test env),
  ``fetch_html`` falls back to launching a fresh browser per call, the same
  behavior as before this module existed.

Pool size is controlled via ``BROWSER_POOL_SIZE`` env var (default: 2).
"""
from __future__ import annotations
import itertools
import logging
import os
import queue
import subprocess
import threading
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from dataclasses import dataclass, field
from typing import Optional
log = logging.getLogger(__name__)
# Reuse the same display counter namespace as scraper.py to avoid collisions.
# Pool uses :100-:199; scraper.py fallback uses :200-:299.
_pool_display_counter = itertools.cycle(range(100, 200))
_IDLE_TIMEOUT_SECS = 300 # 5 minutes
_CLEANUP_INTERVAL_SECS = 60
_QUEUE_TIMEOUT_SECS = 3.0
_CHROMIUM_ARGS = ["--no-sandbox", "--disable-dev-shm-usage"]
_USER_AGENT = (
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
)
_VIEWPORT = {"width": 1280, "height": 800}
@dataclass
class _PooledBrowser:
    """One slot in the browser pool."""

    xvfb: subprocess.Popen
    pw: object       # playwright instance (sync_playwright().start())
    browser: object  # playwright Browser
    ctx: object      # playwright BrowserContext (fresh per use)
    display_num: int
    last_used_ts: float = field(default_factory=time.time)
def _launch_slot() -> "_PooledBrowser":
    """Launch a new Xvfb display + headed Chromium browser + fresh context.

    Raises on failure; callers must catch and handle gracefully.
    """
    from playwright.sync_api import sync_playwright
    from playwright_stealth import Stealth  # noqa: F401 — imported here to confirm availability

    display_num = next(_pool_display_counter)
    display = f":{display_num}"
    env = os.environ.copy()
    env["DISPLAY"] = display
    xvfb = subprocess.Popen(
        ["Xvfb", display, "-screen", "0", "1280x800x24"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    # Small grace period for Xvfb to bind the display socket.
    time.sleep(0.3)

    pw = sync_playwright().start()
    try:
        browser = pw.chromium.launch(
            headless=False,
            env=env,
            args=_CHROMIUM_ARGS,
        )
        ctx = browser.new_context(
            user_agent=_USER_AGENT,
            viewport=_VIEWPORT,
        )
    except Exception:
        pw.stop()
        xvfb.terminate()
        xvfb.wait()
        raise
    return _PooledBrowser(
        xvfb=xvfb,
        pw=pw,
        browser=browser,
        ctx=ctx,
        display_num=display_num,
        last_used_ts=time.time(),
    )
def _close_slot(slot: _PooledBrowser) -> None:
    """Cleanly close a pool slot: context → browser → Playwright → Xvfb."""
    try:
        slot.ctx.close()
    except Exception:
        pass
    try:
        slot.browser.close()
    except Exception:
        pass
    try:
        slot.pw.stop()
    except Exception:
        pass
    try:
        slot.xvfb.terminate()
        slot.xvfb.wait(timeout=5)
    except Exception:
        pass
def _replenish_slot(slot: _PooledBrowser) -> _PooledBrowser:
    """Close the used context and open a fresh one on the same browser.

    Returns a new _PooledBrowser sharing the same xvfb/pw/browser but with a
    clean context; avoids paying browser launch overhead on every fetch.
    """
    try:
        slot.ctx.close()
    except Exception:
        pass
    new_ctx = slot.browser.new_context(
        user_agent=_USER_AGENT,
        viewport=_VIEWPORT,
    )
    return _PooledBrowser(
        xvfb=slot.xvfb,
        pw=slot.pw,
        browser=slot.browser,
        ctx=new_ctx,
        display_num=slot.display_num,
        last_used_ts=time.time(),
    )
class BrowserPool:
    """Thread-safe pool of pre-warmed Playwright browser contexts."""

    def __init__(self, size: int = 2) -> None:
        self._size = size
        self._q: queue.Queue[_PooledBrowser] = queue.Queue()
        self._lock = threading.Lock()
        self._started = False
        self._stopped = False
        self._playwright_available: Optional[bool] = None  # cached after first check

    # ------------------------------------------------------------------
    # Lifecycle
    # ------------------------------------------------------------------
    def start(self) -> None:
        """Pre-warm N browser slots in background threads.

        Non-blocking: returns immediately; slots appear in the queue as they
        finish launching. Safe to call multiple times (no-op after first).
        """
        with self._lock:
            if self._started:
                return
            self._started = True
        if not self._check_playwright():
            log.warning(
                "BrowserPool: Playwright / Xvfb not available — "
                "pool disabled, falling back to per-call fresh browser."
            )
            return

        def _warm_one(i: int) -> None:
            try:
                slot = _launch_slot()
                self._q.put(slot)
                log.debug("BrowserPool: slot :%d ready", slot.display_num)
            except Exception as exc:
                log.warning("BrowserPool: pre-warm failed: %s", exc)

        # Plain daemon threads rather than ``with ThreadPoolExecutor(...)``:
        # the context manager joins on __exit__, which would make start()
        # block until every slot finished launching.
        for i in range(self._size):
            threading.Thread(
                target=_warm_one, args=(i,), daemon=True,
                name=f"browser-pool-warm-{i}",
            ).start()

        _idle_cleaner = threading.Thread(
            target=self._idle_cleanup_loop, daemon=True, name="browser-pool-idle-cleaner"
        )
        _idle_cleaner.start()
        log.info("BrowserPool: warming %d slot(s) in background", self._size)
    def stop(self) -> None:
        """Drain and close all pool slots. Called at FastAPI shutdown."""
        with self._lock:
            self._stopped = True
        closed = 0
        while True:
            try:
                slot = self._q.get_nowait()
                _close_slot(slot)
                closed += 1
            except queue.Empty:
                break
        log.info("BrowserPool: stopped, closed %d slot(s)", closed)
    # ------------------------------------------------------------------
    # Core fetch
    # ------------------------------------------------------------------
    def fetch_html(self, url: str, delay: float = 1.0) -> str:
        """Navigate to *url* and return the rendered HTML.

        Borrows a browser context from the pool (blocks up to 3s), uses it to
        fetch the page, then replenishes the slot with a fresh context.
        Falls back to a fully fresh browser if the pool is empty after the
        timeout or if Playwright is unavailable.
        """
        time.sleep(delay)
        slot: Optional[_PooledBrowser] = None
        try:
            slot = self._q.get(timeout=_QUEUE_TIMEOUT_SECS)
        except queue.Empty:
            log.debug("BrowserPool: pool empty after %.1fs — using fresh browser", _QUEUE_TIMEOUT_SECS)
        if slot is not None:
            try:
                html = self._fetch_with_slot(slot, url)
                # Replenish: close dirty context, open fresh one, return to queue.
                try:
                    fresh_slot = _replenish_slot(slot)
                    self._q.put(fresh_slot)
                except Exception as exc:
                    log.warning("BrowserPool: replenish failed, slot discarded: %s", exc)
                    _close_slot(slot)
                return html
            except Exception as exc:
                log.warning("BrowserPool: pooled fetch failed (%s) — closing slot", exc)
                _close_slot(slot)
                # Fall through to fresh browser below.
        # Fallback: fresh browser (same code as old scraper._fetch_url).
        return self._fetch_fresh(url)
    # ------------------------------------------------------------------
    # Internal helpers
    # ------------------------------------------------------------------
    def _check_playwright(self) -> bool:
        """Return True if playwright and playwright-stealth are importable."""
        if self._playwright_available is not None:
            return self._playwright_available
        try:
            import playwright  # noqa: F401
            from playwright_stealth import Stealth  # noqa: F401
            self._playwright_available = True
        except ImportError:
            self._playwright_available = False
        return self._playwright_available

    def _fetch_with_slot(self, slot: _PooledBrowser, url: str) -> str:
        """Open a new page on *slot.ctx*, navigate to *url*, return HTML."""
        from playwright_stealth import Stealth

        page = slot.ctx.new_page()
        try:
            Stealth().apply_stealth_sync(page)
            page.goto(url, wait_until="domcontentloaded", timeout=30_000)
            page.wait_for_timeout(2000)
            return page.content()
        finally:
            try:
                page.close()
            except Exception:
                pass
    def _fetch_fresh(self, url: str) -> str:
        """Launch a fully fresh browser, fetch *url*, close everything."""
        import subprocess as _subprocess
        try:
            from playwright.sync_api import sync_playwright
            from playwright_stealth import Stealth
        except ImportError as exc:
            raise RuntimeError(
                "Playwright not installed — cannot fetch eBay pages. "
                "Install playwright and playwright-stealth in the Docker image."
            ) from exc
        display_num = next(_pool_display_counter)
        display = f":{display_num}"
        env = os.environ.copy()
        env["DISPLAY"] = display
        xvfb = _subprocess.Popen(
            ["Xvfb", display, "-screen", "0", "1280x800x24"],
            stdout=_subprocess.DEVNULL,
            stderr=_subprocess.DEVNULL,
        )
        try:
            with sync_playwright() as pw:
                browser = pw.chromium.launch(
                    headless=False,
                    env=env,
                    args=_CHROMIUM_ARGS,
                )
                ctx = browser.new_context(
                    user_agent=_USER_AGENT,
                    viewport=_VIEWPORT,
                )
                page = ctx.new_page()
                Stealth().apply_stealth_sync(page)
                page.goto(url, wait_until="domcontentloaded", timeout=30_000)
                page.wait_for_timeout(2000)
                html = page.content()
                browser.close()
        finally:
            xvfb.terminate()
            xvfb.wait()
        return html
    def _idle_cleanup_loop(self) -> None:
        """Daemon thread: drain slots idle for >5 minutes every 60 seconds."""
        while not self._stopped:
            time.sleep(_CLEANUP_INTERVAL_SECS)
            if self._stopped:
                break
            now = time.time()
            idle_cutoff = now - _IDLE_TIMEOUT_SECS
            # Drain the entire queue, keep non-idle slots, close idle ones.
            kept: list[_PooledBrowser] = []
            closed = 0
            while True:
                try:
                    slot = self._q.get_nowait()
                except queue.Empty:
                    break
                if slot.last_used_ts < idle_cutoff:
                    _close_slot(slot)
                    closed += 1
                else:
                    kept.append(slot)
            for slot in kept:
                self._q.put(slot)
            if closed:
                log.info("BrowserPool: idle cleanup closed %d slot(s)", closed)
# ---------------------------------------------------------------------------
# Module-level singleton
# ---------------------------------------------------------------------------
_pool: Optional[BrowserPool] = None
_pool_lock = threading.Lock()

def get_pool() -> BrowserPool:
    """Return the module-level BrowserPool singleton (creates it if needed).

    Pool size is read from ``BROWSER_POOL_SIZE`` env var (default: 2).
    Call ``get_pool().start()`` at FastAPI startup to pre-warm slots.
    """
    global _pool
    if _pool is None:
        with _pool_lock:
            if _pool is None:
                size = int(os.environ.get("BROWSER_POOL_SIZE", "2"))
                _pool = BrowserPool(size)
    return _pool


@@ -291,7 +291,7 @@ class ScrapedEbayAdapter(PlatformAdapter):
         self._delay = delay

     def _fetch_url(self, url: str) -> str:
-        """Core Playwright fetch — stealthed headed Chromium via Xvfb.
+        """Core Playwright fetch — stealthed headed Chromium via pre-warmed browser pool.

         Shared by both search (_get) and BTF item-page enrichment (_fetch_item_html).
         Results cached for _HTML_CACHE_TTL seconds.
@@ -300,44 +300,8 @@ class ScrapedEbayAdapter(PlatformAdapter):
         if cached and time.time() < cached[1]:
             return cached[0]
-        time.sleep(self._delay)
-        import os
-        import subprocess
-        display_num = next(_display_counter)
-        display = f":{display_num}"
-        xvfb = subprocess.Popen(
-            ["Xvfb", display, "-screen", "0", "1280x800x24"],
-            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
-        )
-        env = os.environ.copy()
-        env["DISPLAY"] = display
-        try:
-            from playwright.sync_api import (
-                sync_playwright,  # noqa: PLC0415 — lazy: only needed in Docker
-            )
-            from playwright_stealth import Stealth  # noqa: PLC0415
-            with sync_playwright() as pw:
-                browser = pw.chromium.launch(
-                    headless=False,
-                    env=env,
-                    args=["--no-sandbox", "--disable-dev-shm-usage"],
-                )
-                ctx = browser.new_context(
-                    user_agent=_HEADERS["User-Agent"],
-                    viewport={"width": 1280, "height": 800},
-                )
-                page = ctx.new_page()
-                Stealth().apply_stealth_sync(page)
-                page.goto(url, wait_until="domcontentloaded", timeout=30_000)
-                page.wait_for_timeout(2000)  # let any JS challenges resolve
-                html = page.content()
-                browser.close()
-        finally:
-            xvfb.terminate()
-            xvfb.wait()
+        from app.platforms.ebay.browser_pool import get_pool  # noqa: PLC0415 — lazy import
+        html = get_pool().fetch_html(url, delay=self._delay)
         _html_cache[url] = (html, time.time() + _HTML_CACHE_TTL)
         return html


@@ -39,6 +39,21 @@ backends:
#    service: ollama
#    ttl_s: 300

  # ── cf-orch trunk services ─────────────────────────────────────────────────
  # Allocate via cf-orch; the router calls the allocated service directly.
  # Set CF_ORCH_URL (env) or url below to activate.
  cf_text:
    type: openai_compat
    enabled: false
    base_url: http://localhost:8008/v1
    model: __auto__
    api_key: any
    supports_images: false
    cf_orch:
      service: cf-text
      model_candidates: []
      ttl_s: 3600

fallback_order:
  - anthropic
  - openai
docs/plausible.js

@@ -0,0 +1 @@
(function(){var s=document.createElement("script");s.defer=true;s.dataset.domain="docs.circuitforge.tech,circuitforge.tech";s.dataset.api="https://analytics.circuitforge.tech/api/event";s.src="https://analytics.circuitforge.tech/js/script.js";document.head.appendChild(s);})();


@@ -61,3 +61,6 @@ nav:
    - Trust Score Algorithm: reference/trust-scoring.md
    - Tier System: reference/tier-system.md
    - Architecture: reference/architecture.md

extra_javascript:
  - plausible.js


@@ -0,0 +1,466 @@
"""Tests for app.platforms.ebay.browser_pool.

All tests run without real Chromium / Xvfb / Playwright.
Playwright, Xvfb subprocess calls, and Stealth are mocked throughout.
"""
from __future__ import annotations

import queue
import subprocess
import time
from unittest.mock import MagicMock, patch

import pytest
# ---------------------------------------------------------------------------
# Helpers to reset the module-level singleton between tests
# ---------------------------------------------------------------------------
def _reset_pool_singleton():
    """Force the module-level _pool singleton back to None."""
    import app.platforms.ebay.browser_pool as _mod
    _mod._pool = None

# ---------------------------------------------------------------------------
# Fixtures
# ---------------------------------------------------------------------------
@pytest.fixture(autouse=True)
def reset_singleton():
    """Reset the singleton before and after every test."""
    _reset_pool_singleton()
    yield
    _reset_pool_singleton()
def _make_fake_slot():
    """Build a mock _PooledBrowser with all necessary attributes."""
    from app.platforms.ebay.browser_pool import _PooledBrowser

    xvfb = MagicMock(spec=subprocess.Popen)
    pw = MagicMock()
    browser = MagicMock()
    ctx = MagicMock()
    slot = _PooledBrowser(
        xvfb=xvfb,
        pw=pw,
        browser=browser,
        ctx=ctx,
        display_num=100,
        last_used_ts=time.time(),
    )
    return slot
# ---------------------------------------------------------------------------
# Singleton tests
# ---------------------------------------------------------------------------
class TestGetPoolSingleton:
    def test_returns_same_instance(self):
        from app.platforms.ebay.browser_pool import get_pool

        p1 = get_pool()
        p2 = get_pool()
        assert p1 is p2

    def test_returns_browser_pool_instance(self):
        from app.platforms.ebay.browser_pool import get_pool, BrowserPool

        assert isinstance(get_pool(), BrowserPool)

    def test_default_size_is_two(self):
        from app.platforms.ebay.browser_pool import get_pool

        pool = get_pool()
        assert pool._size == 2

    def test_custom_size_from_env(self, monkeypatch):
        monkeypatch.setenv("BROWSER_POOL_SIZE", "5")
        from app.platforms.ebay.browser_pool import get_pool

        pool = get_pool()
        assert pool._size == 5
# ---------------------------------------------------------------------------
# start() / stop() lifecycle tests
# ---------------------------------------------------------------------------
class TestLifecycle:
    def test_start_is_noop_when_playwright_unavailable(self):
        """Pool should handle missing Playwright gracefully — no error raised."""
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=2)
        with patch.object(pool, "_check_playwright", return_value=False):
            pool.start()  # must not raise
        # Pool queue is empty — no slots launched.
        assert pool._q.empty()

    def test_start_only_runs_once(self):
        """Calling start() twice must not double-warm."""
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=1)
        with patch.object(pool, "_check_playwright", return_value=False):
            pool.start()
            pool.start()
        assert pool._started is True

    def test_stop_drains_queue(self):
        """stop() should close every slot in the queue."""
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=2)
        slot1 = _make_fake_slot()
        slot2 = _make_fake_slot()
        pool._q.put(slot1)
        pool._q.put(slot2)
        with patch("app.platforms.ebay.browser_pool._close_slot") as mock_close:
            pool.stop()
        assert mock_close.call_count == 2
        assert pool._q.empty()
        assert pool._stopped is True

    def test_stop_on_empty_pool_is_safe(self):
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=2)
        pool.stop()  # must not raise
# ---------------------------------------------------------------------------
# fetch_html — pool hit path
# ---------------------------------------------------------------------------
class TestFetchHtmlPoolHit:
    def test_uses_pooled_slot_and_replenishes(self):
        """fetch_html should borrow a slot, call _fetch_with_slot, replenish."""
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=1)
        slot = _make_fake_slot()
        pool._q.put(slot)
        fresh_slot = _make_fake_slot()
        with (
            patch.object(pool, "_fetch_with_slot", return_value="<html>ok</html>") as mock_fetch,
            patch("app.platforms.ebay.browser_pool._replenish_slot", return_value=fresh_slot) as mock_replenish,
            patch("time.sleep"),
        ):
            html = pool.fetch_html("https://www.ebay.com/sch/i.html?_nkw=test", delay=0)
        assert html == "<html>ok</html>"
        mock_fetch.assert_called_once_with(slot, "https://www.ebay.com/sch/i.html?_nkw=test")
        mock_replenish.assert_called_once_with(slot)
        # Fresh slot returned to queue
        assert pool._q.get_nowait() is fresh_slot

    def test_delay_is_respected(self):
        """fetch_html must call time.sleep(delay)."""
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=1)
        slot = _make_fake_slot()
        pool._q.put(slot)
        with (
            patch.object(pool, "_fetch_with_slot", return_value="<html/>"),
            patch("app.platforms.ebay.browser_pool._replenish_slot", return_value=_make_fake_slot()),
            patch("app.platforms.ebay.browser_pool.time") as mock_time,
        ):
            pool.fetch_html("https://example.com", delay=1.5)
        mock_time.sleep.assert_called_once_with(1.5)
# ---------------------------------------------------------------------------
# fetch_html — pool empty / fallback path
# ---------------------------------------------------------------------------
class TestFetchHtmlFallback:
    def test_falls_back_to_fresh_browser_when_pool_empty(self):
        """When pool is empty after timeout, _fetch_fresh should be called."""
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=1)
        # Queue is empty — no slots available.
        with (
            patch.object(pool, "_fetch_fresh", return_value="<html>fresh</html>") as mock_fresh,
            patch("time.sleep"),
            # Make Queue.get raise Empty after a short wait.
            patch.object(pool._q, "get", side_effect=queue.Empty),
        ):
            html = pool.fetch_html("https://www.ebay.com/sch/i.html?_nkw=widget", delay=0)
        assert html == "<html>fresh</html>"
        mock_fresh.assert_called_once_with("https://www.ebay.com/sch/i.html?_nkw=widget")

    def test_falls_back_when_pooled_fetch_raises(self):
        """If _fetch_with_slot raises, the slot is closed and _fetch_fresh is used."""
        from app.platforms.ebay.browser_pool import BrowserPool

        pool = BrowserPool(size=1)
        slot = _make_fake_slot()
        pool._q.put(slot)
        with (
            patch.object(pool, "_fetch_with_slot", side_effect=RuntimeError("Chromium crashed")),
            patch.object(pool, "_fetch_fresh", return_value="<html>recovered</html>") as mock_fresh,
            patch("app.platforms.ebay.browser_pool._close_slot") as mock_close,
            patch("time.sleep"),
        ):
            html = pool.fetch_html("https://www.ebay.com/", delay=0)
        assert html == "<html>recovered</html>"
        mock_close.assert_called_once_with(slot)
        mock_fresh.assert_called_once()
# ---------------------------------------------------------------------------
# ImportError graceful fallback
# ---------------------------------------------------------------------------
class TestImportErrorHandling:
def test_check_playwright_returns_false_on_import_error(self):
"""_check_playwright should cache False when playwright is not installed."""
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=2)
with patch.dict("sys.modules", {"playwright": None, "playwright_stealth": None}):
# Force re-check by clearing the cached value.
pool._playwright_available = None
result = pool._check_playwright()
assert result is False
assert pool._playwright_available is False
def test_start_logs_warning_when_playwright_missing(self, caplog):
"""start() should log a warning and not crash when Playwright is absent."""
import logging
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
pool._playwright_available = False # simulate missing
with patch.object(pool, "_check_playwright", return_value=False):
with caplog.at_level(logging.WARNING, logger="app.platforms.ebay.browser_pool"):
pool.start()
assert any("not available" in r.message for r in caplog.records)
def test_fetch_fresh_raises_runtime_error_when_playwright_missing(self):
"""_fetch_fresh must raise RuntimeError (not ImportError) when PW absent."""
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
with patch.dict("sys.modules", {"playwright": None, "playwright.sync_api": None}):
with pytest.raises(RuntimeError, match="Playwright not installed"):
pool._fetch_fresh("https://www.ebay.com/")
# ---------------------------------------------------------------------------
# Idle cleanup
# ---------------------------------------------------------------------------
class TestIdleCleanup:
def test_idle_cleanup_closes_stale_slots(self):
"""_idle_cleanup_loop should close slots whose last_used_ts is too old."""
from app.platforms.ebay.browser_pool import BrowserPool, _IDLE_TIMEOUT_SECS
pool = BrowserPool(size=2)
stale_slot = _make_fake_slot()
stale_slot.last_used_ts = time.time() - (_IDLE_TIMEOUT_SECS + 60)
fresh_slot = _make_fake_slot()
fresh_slot.last_used_ts = time.time()
pool._q.put(stale_slot)
pool._q.put(fresh_slot)
closed_slots = []
def fake_close(s):
closed_slots.append(s)
with patch("app.platforms.ebay.browser_pool._close_slot", side_effect=fake_close):
# Run one cleanup tick directly (not the full loop).
now = time.time()
idle_cutoff = now - _IDLE_TIMEOUT_SECS
kept = []
while True:
try:
s = pool._q.get_nowait()
except queue.Empty:
break
if s.last_used_ts < idle_cutoff:
fake_close(s)
else:
kept.append(s)
for s in kept:
pool._q.put(s)
assert stale_slot in closed_slots
assert fresh_slot not in closed_slots
assert pool._q.qsize() == 1
def test_idle_cleanup_loop_stops_when_pool_stopped(self):
"""Cleanup daemon should exit when _stopped is True."""
from app.platforms.ebay.browser_pool import BrowserPool, _CLEANUP_INTERVAL_SECS
pool = BrowserPool(size=1)
pool._stopped = True
# The loop should return after one iteration of the while check.
# Use a very short sleep mock so the test doesn't actually wait 60s.
sleep_calls = []
def fake_sleep(secs):
sleep_calls.append(secs)
with patch("app.platforms.ebay.browser_pool.time") as mock_time:
mock_time.time.return_value = time.time()
mock_time.sleep.side_effect = fake_sleep
# Run in a thread with a short timeout to confirm it exits.
t = threading.Thread(target=pool._idle_cleanup_loop)
t.start()
t.join(timeout=2.0)
assert not t.is_alive(), "idle cleanup loop did not exit when _stopped=True"
# ---------------------------------------------------------------------------
# _replenish_slot helper
# ---------------------------------------------------------------------------
class TestReplenishSlot:
def test_replenish_closes_old_context_and_opens_new(self):
from app.platforms.ebay.browser_pool import _replenish_slot, _PooledBrowser
old_ctx = MagicMock()
new_ctx = MagicMock()
browser = MagicMock()
browser.new_context.return_value = new_ctx
slot = _PooledBrowser(
xvfb=MagicMock(),
pw=MagicMock(),
browser=browser,
ctx=old_ctx,
display_num=101,
last_used_ts=time.time() - 10,
)
result = _replenish_slot(slot)
old_ctx.close.assert_called_once()
browser.new_context.assert_called_once()
assert result.ctx is new_ctx
assert result.browser is browser
assert result.xvfb is slot.xvfb
# last_used_ts is refreshed
assert result.last_used_ts > slot.last_used_ts
# ---------------------------------------------------------------------------
# _close_slot helper
# ---------------------------------------------------------------------------
class TestCloseSlot:
def test_close_slot_closes_all_components(self):
from app.platforms.ebay.browser_pool import _close_slot, _PooledBrowser
xvfb = MagicMock(spec=subprocess.Popen)
pw = MagicMock()
browser = MagicMock()
ctx = MagicMock()
slot = _PooledBrowser(
xvfb=xvfb, pw=pw, browser=browser, ctx=ctx,
display_num=102, last_used_ts=time.time(),
)
_close_slot(slot)
ctx.close.assert_called_once()
browser.close.assert_called_once()
pw.stop.assert_called_once()
xvfb.terminate.assert_called_once()
xvfb.wait.assert_called_once()
def test_close_slot_ignores_exceptions(self):
"""_close_slot must not raise even if components throw."""
from app.platforms.ebay.browser_pool import _close_slot, _PooledBrowser
xvfb = MagicMock(spec=subprocess.Popen)
xvfb.terminate.side_effect = OSError("already dead")
xvfb.wait.side_effect = OSError("already dead")
pw = MagicMock()
pw.stop.side_effect = RuntimeError("stopped")
browser = MagicMock()
browser.close.side_effect = RuntimeError("gone")
ctx = MagicMock()
ctx.close.side_effect = RuntimeError("gone")
slot = _PooledBrowser(
xvfb=xvfb, pw=pw, browser=browser, ctx=ctx,
display_num=103, last_used_ts=time.time(),
)
_close_slot(slot) # must not raise
# ---------------------------------------------------------------------------
# Scraper integration — _fetch_url uses pool
# ---------------------------------------------------------------------------
class TestScraperUsesPool:
def test_fetch_url_delegates_to_pool(self):
"""ScrapedEbayAdapter._fetch_url must use the pool, not launch its own browser."""
from app.platforms.ebay.browser_pool import BrowserPool
from app.platforms.ebay.scraper import ScrapedEbayAdapter
from app.db.store import Store
store = MagicMock(spec=Store)
adapter = ScrapedEbayAdapter(store, delay=0)
fake_pool = MagicMock(spec=BrowserPool)
fake_pool.fetch_html.return_value = "<html>pooled</html>"
with patch("app.platforms.ebay.browser_pool.get_pool", return_value=fake_pool):
# Clear the cache so fetch_url actually hits the pool.
import app.platforms.ebay.scraper as scraper_mod
scraper_mod._html_cache.clear()
html = adapter._fetch_url("https://www.ebay.com/sch/i.html?_nkw=test")
assert html == "<html>pooled</html>"
fake_pool.fetch_html.assert_called_once_with(
"https://www.ebay.com/sch/i.html?_nkw=test", delay=0
)
def test_fetch_url_uses_cache_before_pool(self):
"""_fetch_url should return cached HTML without hitting the pool."""
from app.platforms.ebay.scraper import ScrapedEbayAdapter, _html_cache, _HTML_CACHE_TTL
from app.db.store import Store
store = MagicMock(spec=Store)
adapter = ScrapedEbayAdapter(store, delay=0)
url = "https://www.ebay.com/sch/i.html?_nkw=cached"
_html_cache[url] = ("<html>cached</html>", time.time() + _HTML_CACHE_TTL)
fake_pool = MagicMock()
with patch("app.platforms.ebay.browser_pool.get_pool", return_value=fake_pool):
html = adapter._fetch_url(url)
assert html == "<html>cached</html>"
fake_pool.fetch_html.assert_not_called()
# Cleanup
_html_cache.pop(url, None)
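The borrow/replenish/fallback flow these pool tests pin down can be sketched as follows. This is a hypothetical minimal version: `MiniPool` and its stub hooks are illustrative stand-ins, not the real `BrowserPool`, which drives Playwright and Xvfb inside the hook methods.

```python
import queue
import time

class MiniPool:
    """Minimal sketch of the pool contract: borrow a slot, fetch, replenish;
    fall back to a fresh browser when the pool is empty or the slot crashes."""

    def __init__(self, size: int = 2):
        self._q: queue.Queue = queue.Queue()
        self.size = size

    def fetch_html(self, url: str, delay: float = 0.0) -> str:
        time.sleep(delay)                      # politeness delay (tests patch this)
        try:
            slot = self._q.get(timeout=0.05)   # borrow a pre-warmed slot
        except queue.Empty:
            return self._fetch_fresh(url)      # pool exhausted: cold-start fallback
        try:
            html = self._fetch_with_slot(slot, url)
        except Exception:
            self._close_slot(slot)             # crashed slot is discarded...
            return self._fetch_fresh(url)      # ...and the request retried fresh
        self._q.put(self._replenish_slot(slot))  # return a refreshed slot to the pool
        return html

    # Stub hooks; the real pool manages Playwright contexts and Xvfb here.
    def _fetch_with_slot(self, slot, url):
        return f"<html>{slot}:{url}</html>"

    def _fetch_fresh(self, url):
        return f"<html>fresh:{url}</html>"

    def _close_slot(self, slot):
        pass

    def _replenish_slot(self, slot):
        return slot
```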
 
@@ -1,5 +1,6 @@
import pytest
from api.main import _extract_ebay_item_id
from app.platforms.ebay.normaliser import normalise_listing, normalise_seller
@@ -56,3 +57,48 @@ def test_normalise_seller_maps_fields():
assert seller.feedback_count == 300
assert seller.feedback_ratio == pytest.approx(0.991, abs=0.001)
assert seller.account_age_days > 0
# ── _extract_ebay_item_id ─────────────────────────────────────────────────────
class TestExtractEbayItemId:
"""Unit tests for the URL-to-item-ID normaliser."""
def test_itm_url_with_title_slug(self):
url = "https://www.ebay.com/itm/Sony-WH-1000XM5-Headphones/123456789012"
assert _extract_ebay_item_id(url) == "123456789012"
def test_itm_url_without_title_slug(self):
url = "https://www.ebay.com/itm/123456789012"
assert _extract_ebay_item_id(url) == "123456789012"
def test_itm_url_no_www(self):
url = "https://ebay.com/itm/123456789012"
assert _extract_ebay_item_id(url) == "123456789012"
def test_itm_url_with_query_params(self):
url = "https://www.ebay.com/itm/123456789012?hash=item1234abcd"
assert _extract_ebay_item_id(url) == "123456789012"
def test_pay_ebay_rxo_with_itemId_query_param(self):
url = "https://pay.ebay.com/rxo?action=view&sessionid=abc123&itemId=123456789012"
assert _extract_ebay_item_id(url) == "123456789012"
def test_pay_ebay_rxo_path_with_itemId(self):
url = "https://pay.ebay.com/rxo/view?itemId=123456789012"
assert _extract_ebay_item_id(url) == "123456789012"
def test_non_ebay_url_returns_none(self):
assert _extract_ebay_item_id("https://amazon.com/dp/B08N5WRWNW") is None
def test_plain_keyword_returns_none(self):
assert _extract_ebay_item_id("rtx 4090 gpu") is None
def test_empty_string_returns_none(self):
assert _extract_ebay_item_id("") is None
def test_ebay_url_no_item_id_returns_none(self):
assert _extract_ebay_item_id("https://www.ebay.com/sch/i.html?_nkw=gpu") is None
def test_pay_ebay_no_item_id_returns_none(self):
assert _extract_ebay_item_id("https://pay.ebay.com/rxo?action=view&sessionid=abc") is None
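An extractor consistent with the assertions above can be sketched like this. It is an assumed implementation: the real `_extract_ebay_item_id` lives in `api.main` and may differ in its exact regex and host checks.

```python
import re
from urllib.parse import urlparse, parse_qs

def extract_ebay_item_id(url: str):
    """Return the numeric eBay item ID from an itm or pay.ebay.com URL, else None."""
    parsed = urlparse(url or "")
    host = parsed.netloc.lower()
    if host != "ebay.com" and not host.endswith(".ebay.com"):
        return None                          # non-eBay input (or a bare keyword) never matches
    # /itm/<slug>/<id> or /itm/<id>: the item ID is the trailing digit segment.
    m = re.search(r"/itm/(?:[^/]+/)?(\d{9,12})/?$", parsed.path)
    if m:
        return m.group(1)
    # pay.ebay.com/rxo checkout links carry the ID as an itemId query param.
    ids = parse_qs(parsed.query).get("itemId", [])
    return ids[0] if ids and ids[0].isdigit() else None
```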

tests/test_async_search.py Normal file

@@ -0,0 +1,231 @@
"""Tests for GET /api/search/async (fire-and-forget search + SSE streaming).
Verifies:
- Returns HTTP 202 with session_id and status: "queued"
- session_id is registered in _update_queues immediately
- Actual scraping is not performed (mocked out)
- Empty query path returns a completed session with done event
"""
from __future__ import annotations
import os
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from fastapi.testclient import TestClient
# ── Fixtures ──────────────────────────────────────────────────────────────────
@pytest.fixture
def client(tmp_path):
"""TestClient with a fresh tmp DB. Must set SNIPE_DB *before* importing app."""
os.environ["SNIPE_DB"] = str(tmp_path / "snipe.db")
from api.main import app
return TestClient(app, raise_server_exceptions=False)
def _make_mock_listing():
"""Return a minimal mock listing object that satisfies the search pipeline."""
m = MagicMock()
m.platform_listing_id = "123456789"
m.seller_platform_id = "test_seller"
m.title = "Test GPU"
m.price = 100.0
m.currency = "USD"
m.condition = "Used"
m.url = "https://www.ebay.com/itm/123456789"
m.photo_urls = []
m.listing_age_days = 5
m.buying_format = "fixed_price"
m.ends_at = None
m.fetched_at = None
m.trust_score_id = None
m.id = 1
m.category_name = None
return m
# ── Core contract tests ───────────────────────────────────────────────────────
def test_async_search_returns_202(client):
"""GET /api/search/async?q=... returns HTTP 202 with session_id and status."""
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.return_value = []
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
resp = client.get("/api/search/async?q=test+gpu")
assert resp.status_code == 202
data = resp.json()
assert "session_id" in data
assert data["status"] == "queued"
assert isinstance(data["session_id"], str)
assert len(data["session_id"]) > 0
def test_async_search_registers_session_id(client):
"""session_id returned by 202 response must appear in _update_queues immediately."""
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.return_value = []
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
resp = client.get("/api/search/async?q=test+gpu")
assert resp.status_code == 202
session_id = resp.json()["session_id"]
# The queue must be registered so the SSE endpoint can open it.
from api.main import _update_queues
assert session_id in _update_queues
def test_async_search_empty_query(client):
"""Empty query returns 202 with a pre-loaded done sentinel, no scraping needed."""
resp = client.get("/api/search/async?q=")
assert resp.status_code == 202
data = resp.json()
assert data["status"] == "queued"
assert "session_id" in data
from api.main import _update_queues
import queue as _queue
sid = data["session_id"]
assert sid in _update_queues
q = _update_queues[sid]
# First item should be the empty listings event
first = q.get_nowait()
assert first is not None
assert first["type"] == "listings"
assert first["listings"] == []
# Second item should be the sentinel
sentinel = q.get_nowait()
assert sentinel is None
def test_async_search_no_real_chromium(client):
"""Async search endpoint must not launch real Chromium in tests.
Verifies that the background scraper is submitted to the executor, but because the
adapter factory is patched, no real Playwright/Xvfb process is spawned.
Uses a broad patch on Store to avoid sqlite3 DB path issues in the thread pool.
"""
import threading
scrape_called = threading.Event()
def _fake_search(query, filters):
scrape_called.set()
return []
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
patch("api.main.Store") as mock_store_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.side_effect = _fake_search
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
mock_store = MagicMock()
mock_store.get_listings_staged.return_value = {}
mock_store.refresh_seller_categories.return_value = 0
mock_store.save_listings.return_value = None
mock_store.save_trust_scores.return_value = None
mock_store.get_market_comp.return_value = None
mock_store.get_seller.return_value = None
mock_store.get_user_preference.return_value = None
mock_store_cls.return_value = mock_store
resp = client.get("/api/search/async?q=rtx+3080")
assert resp.status_code == 202
# Give the background worker a moment to run (it's in a thread pool)
scrape_called.wait(timeout=5.0)
# If we get here without a real Playwright process, the test passes.
assert scrape_called.is_set(), "Background search worker never ran"
def test_async_search_query_params_forwarded(client):
"""All filter params accepted by /api/search are also accepted here."""
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.return_value = []
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
resp = client.get(
"/api/search/async"
"?q=rtx+3080"
"&max_price=400"
"&min_price=100"
"&pages=2"
"&must_include=rtx,3080"
"&must_include_mode=all"
"&must_exclude=mining"
"&category_id=27386"
"&adapter=auto"
)
assert resp.status_code == 202
def test_async_search_session_id_is_uuid(client):
"""session_id must be a valid UUID v4 string."""
import uuid as _uuid
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.return_value = []
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
resp = client.get("/api/search/async?q=test")
assert resp.status_code == 202
sid = resp.json()["session_id"]
# Should not raise if it's a valid UUID
parsed = _uuid.UUID(sid)
assert str(parsed) == sid


@@ -0,0 +1,76 @@
"""Tests for PATCH /api/preferences display.currency validation."""
from __future__ import annotations
import os
from pathlib import Path
from unittest.mock import patch
import pytest
from fastapi.testclient import TestClient
@pytest.fixture
def client(tmp_path):
"""TestClient with a patched local DB path.
api.cloud_session._LOCAL_SNIPE_DB is set at module import time, so we
cannot rely on setting SNIPE_DB before import when other tests have already
triggered the module load. Patch the module-level variable directly so
the session dependency points at our fresh tmp DB for the duration of this
fixture.
"""
db_path = tmp_path / "snipe.db"
# Ensure the DB is initialised so the Store can create its tables.
import api.cloud_session as _cs
from circuitforge_core.db import get_connection, run_migrations
conn = get_connection(db_path)
run_migrations(conn, Path("app/db/migrations"))
conn.close()
from api.main import app
with patch.object(_cs, "_LOCAL_SNIPE_DB", db_path):
yield TestClient(app, raise_server_exceptions=False)
def test_set_display_currency_valid(client):
"""Accepted ISO 4217 codes are stored and returned."""
for code in ("USD", "GBP", "EUR", "CAD", "AUD", "JPY", "CHF", "MXN", "BRL", "INR"):
resp = client.patch("/api/preferences", json={"path": "display.currency", "value": code})
assert resp.status_code == 200, f"Expected 200 for {code}, got {resp.status_code}: {resp.text}"
data = resp.json()
assert data.get("display", {}).get("currency") == code
def test_set_display_currency_normalises_lowercase(client):
"""Lowercase code is accepted and normalised to uppercase."""
resp = client.patch("/api/preferences", json={"path": "display.currency", "value": "eur"})
assert resp.status_code == 200
assert resp.json()["display"]["currency"] == "EUR"
def test_set_display_currency_unsupported_returns_400(client):
"""Unsupported currency code returns 400 with a clear message."""
resp = client.patch("/api/preferences", json={"path": "display.currency", "value": "XYZ"})
assert resp.status_code == 400
detail = resp.json().get("detail", "")
assert "XYZ" in detail
assert "Supported" in detail or "supported" in detail
def test_set_display_currency_empty_string_returns_400(client):
"""Empty string is not a valid currency code."""
resp = client.patch("/api/preferences", json={"path": "display.currency", "value": ""})
assert resp.status_code == 400
def test_set_display_currency_none_returns_400(client):
"""None is not a valid currency code."""
resp = client.patch("/api/preferences", json={"path": "display.currency", "value": None})
assert resp.status_code == 400
def test_other_preference_paths_unaffected(client):
"""Unrelated preference paths still work normally after currency validation added."""
resp = client.patch("/api/preferences", json={"path": "affiliate.opt_out", "value": True})
assert resp.status_code == 200
assert resp.json().get("affiliate", {}).get("opt_out") is True
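The validation behaviour these tests exercise can be sketched with an assumed helper; the real check lives inside the `PATCH /api/preferences` handler, where a `ValueError` would map to an HTTP 400.

```python
SUPPORTED_CURRENCIES = (
    "USD", "GBP", "EUR", "CAD", "AUD", "JPY", "CHF", "MXN", "BRL", "INR",
)

def validate_display_currency(value):
    """Return the normalised uppercase code, or raise ValueError (-> HTTP 400)."""
    code = value.strip().upper() if isinstance(value, str) else ""
    if code not in SUPPORTED_CURRENCIES:
        # Message names the rejected value and lists accepted codes, as the tests require.
        raise ValueError(
            f"Unsupported currency {value!r}. Supported: {', '.join(SUPPORTED_CURRENCIES)}"
        )
    return code
```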

tests/test_search_cache.py Normal file

@@ -0,0 +1,402 @@
"""Tests for the short-TTL search result cache in api/main.py.
Covers:
- _cache_key stability (same inputs → same key)
- _cache_key uniqueness (different inputs → different keys)
- cache hit path returns early without scraping (async worker)
- cache miss path stores result in _search_result_cache
- refresh=True bypasses cache read (still writes fresh result)
- TTL expiry: expired entries are not returned as hits
- _evict_expired_cache removes expired entries
"""
from __future__ import annotations
import os
import queue as _queue
import time
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
# ── Helpers ───────────────────────────────────────────────────────────────────
def _clear_cache():
"""Reset module-level cache state between tests."""
import api.main as _main
_main._search_result_cache.clear()
_main._last_eviction_ts = 0.0
@pytest.fixture(autouse=True)
def isolated_cache():
"""Ensure each test starts with an empty cache."""
_clear_cache()
yield
_clear_cache()
@pytest.fixture
def client(tmp_path):
"""TestClient backed by a fresh tmp DB."""
os.environ["SNIPE_DB"] = str(tmp_path / "snipe.db")
from api.main import app
from fastapi.testclient import TestClient
return TestClient(app, raise_server_exceptions=False)
def _make_mock_listing(listing_id: str = "123456789", seller_id: str = "test_seller"):
"""Return a MagicMock listing (for use where asdict() is NOT called on it)."""
m = MagicMock()
m.platform_listing_id = listing_id
m.seller_platform_id = seller_id
m.title = "Test GPU"
m.price = 100.0
m.currency = "USD"
m.condition = "Used"
m.url = f"https://www.ebay.com/itm/{listing_id}"
m.photo_urls = []
m.listing_age_days = 5
m.buying_format = "fixed_price"
m.ends_at = None
m.fetched_at = None
m.trust_score_id = None
m.id = 1
m.category_name = None
return m
def _make_real_listing(listing_id: str = "123456789", seller_id: str = "test_seller"):
"""Return a real Listing dataclass instance (for use where asdict() is called)."""
from app.db.models import Listing
return Listing(
platform="ebay",
platform_listing_id=listing_id,
title="Test GPU",
price=100.0,
currency="USD",
condition="Used",
seller_platform_id=seller_id,
url=f"https://www.ebay.com/itm/{listing_id}",
photo_urls=[],
listing_age_days=5,
buying_format="fixed_price",
id=None,
)
# ── _cache_key unit tests ─────────────────────────────────────────────────────
def test_cache_key_stable_for_same_inputs():
"""The same parameter set always produces the same key."""
from api.main import _cache_key
k1 = _cache_key("rtx 3080", 400.0, 100.0, 2, "rtx,3080", "all", "mining", "27386")
k2 = _cache_key("rtx 3080", 400.0, 100.0, 2, "rtx,3080", "all", "mining", "27386")
assert k1 == k2
def test_cache_key_case_normalised():
"""Query is normalised to lower-case + stripped before hashing."""
from api.main import _cache_key
k1 = _cache_key("RTX 3080", None, None, 1, "", "all", "", "")
k2 = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
assert k1 == k2
def test_cache_key_differs_on_query_change():
"""Different query strings must produce different keys."""
from api.main import _cache_key
k1 = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
k2 = _cache_key("gtx 1080", None, None, 1, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_price_filter():
"""Different max_price must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", 400.0, None, 1, "", "all", "", "")
k2 = _cache_key("gpu", 500.0, None, 1, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_min_price():
"""Different min_price must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, 50.0, 1, "", "all", "", "")
k2 = _cache_key("gpu", None, 100.0, 1, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_pages():
"""Different page count must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "", "all", "", "")
k2 = _cache_key("gpu", None, None, 2, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_must_include():
"""Different must_include terms must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "rtx", "all", "", "")
k2 = _cache_key("gpu", None, None, 1, "gtx", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_must_exclude():
"""Different must_exclude terms must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "", "all", "mining", "")
k2 = _cache_key("gpu", None, None, 1, "", "all", "defective", "")
assert k1 != k2
def test_cache_key_differs_on_category_id():
"""Different category_id must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "", "all", "", "27386")
k2 = _cache_key("gpu", None, None, 1, "", "all", "", "12345")
assert k1 != k2
def test_cache_key_is_16_chars():
"""Key must be exactly 16 hex characters."""
from api.main import _cache_key
k = _cache_key("gpu", None, None, 1, "", "all", "", "")
assert len(k) == 16
assert all(c in "0123456789abcdef" for c in k)
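A key function satisfying every assertion above could plausibly look like this: a sha256 over the normalised parameters, truncated to 16 hex characters. The real `_cache_key` in `api.main` may order or serialise the fields differently.

```python
import hashlib

def cache_key(q, max_price, min_price, pages, must_include,
              must_include_mode, must_exclude, category_id) -> str:
    parts = (
        q.strip().lower(),                   # query is case/whitespace-normalised
        repr(max_price), repr(min_price), repr(pages),
        must_include, must_include_mode, must_exclude, category_id,
    )
    digest = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return digest[:16]                       # 16 hex characters, as the tests expect
```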
# ── TTL / eviction unit tests ─────────────────────────────────────────────────
def test_expired_entry_is_not_returned_as_hit():
"""An entry past its TTL must not be treated as a cache hit."""
import api.main as _main
from api.main import _cache_key
key = _cache_key("gpu", None, None, 1, "", "all", "", "")
# Write an already-expired entry.
_main._search_result_cache[key] = (
{"listings": [], "market_price": None},
time.time() - 1.0, # expired 1 second ago
)
cached = _main._search_result_cache.get(key)
assert cached is not None
payload, expiry = cached
# Simulate the hit-check used in main.py
assert expiry <= time.time(), "Entry should be expired"
def test_evict_expired_cache_removes_stale_entries():
"""_evict_expired_cache must remove entries whose expiry has passed."""
import api.main as _main
from api.main import _cache_key, _evict_expired_cache
key_expired = _cache_key("old query", None, None, 1, "", "all", "", "")
key_valid = _cache_key("new query", None, None, 1, "", "all", "", "")
_main._search_result_cache[key_expired] = (
{"listings": [], "market_price": None},
time.time() - 10.0, # already expired
)
_main._search_result_cache[key_valid] = (
{"listings": [], "market_price": 99.0},
time.time() + 300.0, # valid for 5 min
)
# Reset throttle so eviction runs immediately.
_main._last_eviction_ts = 0.0
_evict_expired_cache()
assert key_expired not in _main._search_result_cache
assert key_valid in _main._search_result_cache
def test_evict_is_rate_limited():
"""_evict_expired_cache should skip eviction if called within 60 s."""
import api.main as _main
from api.main import _cache_key, _evict_expired_cache
key_expired = _cache_key("stale", None, None, 1, "", "all", "", "")
_main._search_result_cache[key_expired] = (
{"listings": [], "market_price": None},
time.time() - 5.0,
)
# Pretend eviction just ran.
_main._last_eviction_ts = time.time()
_evict_expired_cache()
# Entry should still be present because eviction was throttled.
assert key_expired in _main._search_result_cache
# ── Integration tests — async endpoint cache hit ──────────────────────────────
def test_async_cache_hit_skips_scraper(client, tmp_path):
"""On a warm cache hit the scraper adapter must not be called."""
import threading
import api.main as _main
from api.main import _cache_key
# Pre-seed a valid cache entry.
key = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
_main._search_result_cache[key] = (
{"listings": [], "market_price": 250.0},
time.time() + 300.0,
)
scraper_called = threading.Event()
def _fake_search(query, filters):
scraper_called.set()
return []
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
patch("api.main.Store") as mock_store_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.side_effect = _fake_search
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
mock_store = MagicMock()
mock_store.get_listings_staged.return_value = {}
mock_store.refresh_seller_categories.return_value = 0
mock_store.save_listings.return_value = None
mock_store.save_trust_scores.return_value = None
mock_store.get_market_comp.return_value = None
mock_store.get_seller.return_value = None
mock_store.get_user_preference.return_value = None
mock_store_cls.return_value = mock_store
resp = client.get("/api/search/async?q=rtx+3080")
assert resp.status_code == 202
# Give the background worker a moment to run.
scraper_called.wait(timeout=3.0)
# Scraper must NOT have been called on a cache hit.
assert not scraper_called.is_set(), "Scraper was called despite a warm cache hit"
def test_async_cache_miss_stores_result(client, tmp_path):
"""After a cache miss the result must be stored in _search_result_cache."""
import threading
import api.main as _main
from api.main import _cache_key
search_done = threading.Event()
real_listing = _make_real_listing()
def _fake_search(query, filters):
return [real_listing]
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment") as mock_enrich,
patch("api.main.TrustScorer") as mock_scorer_cls,
patch("api.main.Store") as mock_store_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.side_effect = _fake_search
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
mock_store = MagicMock()
mock_store.get_listings_staged.return_value = {
real_listing.platform_listing_id: real_listing
}
mock_store.refresh_seller_categories.return_value = 0
mock_store.save_listings.return_value = None
mock_store.save_trust_scores.return_value = None
mock_store.get_market_comp.return_value = None
mock_store.get_seller.return_value = None
mock_store.get_user_preference.return_value = None
mock_store_cls.return_value = mock_store
def _enrich_side_effect(*args, **kwargs):
search_done.set()
mock_enrich.side_effect = _enrich_side_effect
resp = client.get("/api/search/async?q=rtx+3080")
assert resp.status_code == 202
# Wait until the background worker reaches _trigger_scraper_enrichment.
search_done.wait(timeout=5.0)
assert search_done.is_set(), "Background search worker never completed"
key = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
assert key in _main._search_result_cache, "Result was not stored in cache after miss"
payload, expiry = _main._search_result_cache[key]
assert expiry > time.time(), "Cache entry has already expired"
assert "listings" in payload
# ── Integration tests — async endpoint refresh=True ──────────────────────────
def test_async_refresh_bypasses_cache_read(client, tmp_path):
"""refresh=True must bypass cache read and invoke the scraper."""
import threading
import api.main as _main
from api.main import _cache_key
# Seed a valid cache entry so we can confirm it is bypassed.
key = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
_main._search_result_cache[key] = (
{"listings": [], "market_price": 100.0},
time.time() + 300.0,
)
scraper_called = threading.Event()
def _fake_search(query, filters):
scraper_called.set()
return []
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
patch("api.main.Store") as mock_store_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.side_effect = _fake_search
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
mock_store = MagicMock()
mock_store.get_listings_staged.return_value = {}
mock_store.refresh_seller_categories.return_value = 0
mock_store.save_listings.return_value = None
mock_store.save_trust_scores.return_value = None
mock_store.get_market_comp.return_value = None
mock_store.get_seller.return_value = None
mock_store.get_user_preference.return_value = None
mock_store_cls.return_value = mock_store
resp = client.get("/api/search/async?q=rtx+3080&refresh=true")
assert resp.status_code == 202
scraper_called.wait(timeout=5.0)
assert scraper_called.is_set(), "Scraper was not called even though refresh=True"
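The server-side cache contract these tests assert — entries stored as a `(payload, expiry)` pair under a normalized key, with `refresh=true` bypassing the read — can be sketched standalone. This is an illustrative TypeScript sketch, not the real `api.main` implementation; `cacheKey`, `cacheGet`, and `cacheSet` are hypothetical names.

```typescript
// Sketch of the search-cache contract the tests above exercise. Entries are
// (payload, expiry) pairs under a normalized key; refresh=true skips the read.
type Payload = Record<string, unknown>
const searchCache = new Map<string, { payload: Payload; expiry: number }>()
const TTL_S = 300 // mirrors the SEARCH_CACHE_TTL_S=300 default

function cacheKey(...parts: (string | number | null)[]): string {
  return parts.map(p => String(p ?? '')).join('|')
}

function cacheGet(key: string, refresh = false): Payload | null {
  if (refresh) return null // refresh=true bypasses the cache read entirely
  const entry = searchCache.get(key)
  if (!entry) return null
  if (entry.expiry <= Date.now() / 1000) {
    searchCache.delete(key) // expired entry: drop it and report a miss
    return null
  }
  return entry.payload
}

function cacheSet(key: string, payload: Payload): void {
  searchCache.set(key, { payload, expiry: Date.now() / 1000 + TTL_S })
}
```

A miss followed by `cacheSet` yields exactly the state the first test checks: the key present with an expiry still in the future.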


@@ -22,11 +22,15 @@
<meta name="twitter:description" content="Free eBay trust scorer. Catches scammers before you bid. No account required." />
<meta name="twitter:image" content="https://menagerie.circuitforge.tech/snipe/og-image.png" />
<link rel="canonical" href="https://menagerie.circuitforge.tech/snipe" />
<!-- Inline background prevents blank flash before CSS bundle loads -->
<!-- Matches --color-surface dark tactical theme from theme.css -->
<!-- FOUC guard: prevents a blank/light flash before the CSS bundle loads.
theme.css overrides both html and body backgrounds via var(--color-surface)
once loaded, so this only applies for the brief pre-bundle window. -->
<style>
html, body { margin: 0; background: #0d1117; min-height: 100vh; }
</style>
<!-- Plausible analytics: cookie-free, GDPR-compliant, self-hosted.
Skips localhost/127.0.0.1. Reports to hostname + circuitforge.tech rollup. -->
<script>(function(){if(/localhost|127\.0\.0\.1/.test(location.hostname))return;var s=document.createElement('script');s.defer=true;s.dataset.domain=location.hostname+',circuitforge.tech';s.dataset.api='https://analytics.circuitforge.tech/api/event';s.src='https://analytics.circuitforge.tech/js/script.js';document.head.appendChild(s);})();</script>
</head>
<body>
<!-- Mount target only — App.vue root must NOT use id="app". Gotcha #1. -->


@@ -0,0 +1,140 @@
import { beforeEach, describe, expect, it, vi } from 'vitest'
// Reset module-level cache and fetch mock between tests
beforeEach(async () => {
vi.restoreAllMocks()
// Reset module-level cache so each test starts clean
const mod = await import('../composables/useCurrency')
mod._resetCacheForTest()
})
const MOCK_RATES: Record<string, number> = {
USD: 1,
GBP: 0.79,
EUR: 0.92,
JPY: 151.5,
CAD: 1.36,
}
function mockFetchSuccess(rates = MOCK_RATES) {
vi.stubGlobal('fetch', vi.fn().mockResolvedValue({
ok: true,
json: async () => ({ rates }),
}))
}
function mockFetchFailure() {
vi.stubGlobal('fetch', vi.fn().mockRejectedValue(new Error('Network error')))
}
describe('convertFromUSD', () => {
it('returns the same amount for USD (no conversion)', async () => {
mockFetchSuccess()
const { convertFromUSD } = await import('../composables/useCurrency')
const result = await convertFromUSD(100, 'USD')
expect(result).toBe(100)
// fetch should not be called for USD passthrough
expect(fetch).not.toHaveBeenCalled()
})
it('converts USD to GBP using fetched rates', async () => {
mockFetchSuccess()
const { convertFromUSD, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
const result = await convertFromUSD(100, 'GBP')
expect(result).toBeCloseTo(79, 1)
})
it('converts USD to JPY using fetched rates', async () => {
mockFetchSuccess()
const { convertFromUSD, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
const result = await convertFromUSD(10, 'JPY')
expect(result).toBeCloseTo(1515, 1)
})
it('returns the original amount when rates are unavailable (network failure)', async () => {
mockFetchFailure()
const { convertFromUSD, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
const result = await convertFromUSD(100, 'EUR')
expect(result).toBe(100)
})
it('returns the original amount when the currency code is unknown', async () => {
mockFetchSuccess({ USD: 1, EUR: 0.92 }) // no XYZ rate
const { convertFromUSD, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
const result = await convertFromUSD(50, 'XYZ')
expect(result).toBe(50)
})
it('only calls fetch once when called concurrently (deduplication)', async () => {
mockFetchSuccess()
const { convertFromUSD, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
await Promise.all([
convertFromUSD(100, 'GBP'),
convertFromUSD(200, 'EUR'),
convertFromUSD(50, 'CAD'),
])
expect((fetch as ReturnType<typeof vi.fn>).mock.calls.length).toBe(1)
})
})
describe('formatPrice', () => {
it('formats USD amount with dollar sign', async () => {
mockFetchSuccess()
const { formatPrice, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
const result = await formatPrice(99.99, 'USD')
expect(result).toMatch(/^\$99\.99$|^\$100$/) // Intl rounding may vary
expect(result).toContain('$')
})
it('formats GBP amount with correct symbol', async () => {
mockFetchSuccess()
const { formatPrice, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
const result = await formatPrice(100, 'GBP')
// GBP 79 — expect pound sign or "GBP" prefix
expect(result).toMatch(/[£]|GBP/)
})
it('formats JPY without decimal places (Intl rounds to zero decimals)', async () => {
mockFetchSuccess()
const { formatPrice, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
const result = await formatPrice(10, 'JPY')
// 10 * 151.5 = 1515 JPY — no decimal places for JPY
expect(result).toMatch(/¥1,515|JPY.*1,515|¥1515/)
})
it('falls back gracefully on network failure, showing USD', async () => {
mockFetchFailure()
const { formatPrice, _resetCacheForTest } = await import('../composables/useCurrency')
_resetCacheForTest()
// With failed rates, conversion returns the original USD amount, which is then
// formatted in the target currency; either way the function must not throw
const result = await formatPrice(50, 'EUR')
expect(typeof result).toBe('string')
expect(result.length).toBeGreaterThan(0)
})
})
describe('formatPriceUSD', () => {
it('formats a USD amount synchronously', async () => {
const { formatPriceUSD } = await import('../composables/useCurrency')
const result = formatPriceUSD(1234.5)
// Intl output varies by runtime locale data; check structure not exact string
expect(result).toContain('$')
expect(result).toContain('1,234')
})
it('formats zero as a USD string', async () => {
const { formatPriceUSD } = await import('../composables/useCurrency')
const result = formatPriceUSD(0)
expect(result).toContain('$')
expect(result).toMatch(/\$0/)
})
})


@@ -2,6 +2,12 @@
Dark tactical theme: near-black surfaces, amber accent, trust-signal colours.
ALL color/font/spacing tokens live here; nowhere else.
Snipe Mode easter egg: activated by Konami code (cf-snipe-mode in localStorage).
Planned theme variants (add as [data-theme="<name>"] blocks using the same token set):
solarized-dark: Ethan Schoonover's Solarized dark palette, amber accent
solarized-light: Solarized light palette, amber accent
high-contrast: WCAG AAA minimum contrast ratios, no mid-grey text
colorblind: Deuteranopia-safe trust signal colours (blue/orange instead of green/red)
*/
/* Snipe dark tactical (default)
@@ -212,7 +218,7 @@ html {
-moz-osx-font-smoothing: grayscale;
}
body { margin: 0; min-height: 100vh; }
body { margin: 0; min-height: 100vh; background: var(--color-surface); }
h1, h2, h3, h4, h5, h6 {
font-family: var(--font-display);


@@ -189,15 +189,18 @@
</template>
<script setup lang="ts">
import { computed, ref } from 'vue'
import { computed, ref, watch } from 'vue'
import { RouterLink } from 'vue-router'
import type { Listing, TrustScore, Seller } from '../stores/search'
import { useSearchStore } from '../stores/search'
import { useBlocklistStore } from '../stores/blocklist'
import TrustFeedbackButtons from './TrustFeedbackButtons.vue'
import { useTrustSignalPref } from '../composables/useTrustSignalPref'
import { formatPrice, formatPriceUSD } from '../composables/useCurrency'
import { usePreferencesStore } from '../stores/preferences'
const { enabled: trustSignalEnabled } = useTrustSignalPref()
const prefsStore = usePreferencesStore()
const props = defineProps<{
listing: Listing
@@ -379,15 +382,26 @@ const isSteal = computed(() => {
return props.listing.price < props.marketPrice * 0.8
})
const formattedPrice = computed(() => {
const sym = props.listing.currency === 'USD' ? '$' : props.listing.currency + ' '
return `${sym}${props.listing.price.toLocaleString('en-US', { minimumFractionDigits: 0, maximumFractionDigits: 2 })}`
})
// Async price display: show USD synchronously while rates load, then update
const formattedPrice = ref(formatPriceUSD(props.listing.price))
const formattedMarket = ref(props.marketPrice ? formatPriceUSD(props.marketPrice) : '')
const formattedMarket = computed(() => {
if (!props.marketPrice) return ''
return `$${props.marketPrice.toLocaleString('en-US', { maximumFractionDigits: 0 })}`
})
async function _updatePrices() {
const currency = prefsStore.displayCurrency
formattedPrice.value = await formatPrice(props.listing.price, currency)
if (props.marketPrice) {
formattedMarket.value = await formatPrice(props.marketPrice, currency)
} else {
formattedMarket.value = ''
}
}
// Update when the listing, marketPrice, or display currency changes
watch(
[() => props.listing.price, () => props.marketPrice, () => prefsStore.displayCurrency],
() => { _updatePrices() },
{ immediate: true },
)
</script>
<style scoped>


@@ -0,0 +1,102 @@
/**
* useCurrency: live exchange rate conversion from USD to a target display currency.
*
* Rates are fetched lazily on first use from open.er-api.com (free, no key required).
* A module-level cache with a 1-hour TTL prevents redundant network calls.
* On fetch failure the composable falls back silently to USD display.
*/
const ER_API_URL = 'https://open.er-api.com/v6/latest/USD'
const CACHE_TTL_MS = 60 * 60 * 1000 // 1 hour
interface RateCache {
rates: Record<string, number>
fetchedAt: number
}
// Module-level cache shared across all composable instances
let _cache: RateCache | null = null
let _inflight: Promise<Record<string, number>> | null = null
async function _fetchRates(): Promise<Record<string, number>> {
const now = Date.now()
if (_cache && now - _cache.fetchedAt < CACHE_TTL_MS) {
return _cache.rates
}
// Deduplicate concurrent calls — reuse the same in-flight fetch
if (_inflight) {
return _inflight
}
_inflight = (async () => {
try {
const res = await fetch(ER_API_URL)
if (!res.ok) throw new Error(`ER-API responded ${res.status}`)
const data = await res.json()
const rates: Record<string, number> = data.rates ?? {}
_cache = { rates, fetchedAt: Date.now() }
return rates
} catch {
// Return cached stale data if available, otherwise empty object (USD passthrough)
return _cache?.rates ?? {}
} finally {
_inflight = null
}
})()
return _inflight
}
/**
* Convert an amount in USD to the target currency using the latest exchange rates.
* Returns the original amount unchanged if rates are unavailable or the currency is USD.
*/
export async function convertFromUSD(amountUSD: number, targetCurrency: string): Promise<number> {
if (targetCurrency === 'USD') return amountUSD
const rates = await _fetchRates()
const rate = rates[targetCurrency]
if (!rate) return amountUSD
return amountUSD * rate
}
/**
* Format a USD amount as a localized string in the target currency.
* Fetches exchange rates lazily. Falls back to USD display if rates are unavailable.
*
* Always resolves asynchronously. Callers that need an immediate value should
* seed a ref with formatPriceUSD() and update it when this promise resolves.
*/
export async function formatPrice(amountUSD: number, currency: string): Promise<string> {
const converted = await convertFromUSD(amountUSD, currency)
try {
return new Intl.NumberFormat('en-US', {
style: 'currency',
currency,
minimumFractionDigits: 0,
maximumFractionDigits: 2,
}).format(converted)
} catch {
// Fallback if Intl doesn't know the currency code
return `${currency} ${converted.toLocaleString('en-US', { minimumFractionDigits: 0, maximumFractionDigits: 2 })}`
}
}
/**
* Synchronous USD-only formatter for use before rates have loaded.
*/
export function formatPriceUSD(amountUSD: number): string {
return new Intl.NumberFormat('en-US', {
style: 'currency',
currency: 'USD',
minimumFractionDigits: 0,
maximumFractionDigits: 2,
}).format(amountUSD)
}
// Exported for testing — allows resetting module-level cache between test cases
export function _resetCacheForTest(): void {
_cache = null
_inflight = null
}
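The cache-plus-in-flight-promise pattern above can be exercised offline. This standalone sketch replaces the network call with a stub counter to show that concurrent callers share one fetch; `stubFetchRates` and the local names are illustrative, not part of the composable's API.

```typescript
// Standalone sketch of the dedup pattern above: concurrent callers reuse one
// in-flight promise; later calls within the TTL hit the cache. The network
// call is replaced by a counting stub so the sketch runs without network.
type Rates = Record<string, number>
let cache: { rates: Rates; fetchedAt: number } | null = null
let inflight: Promise<Rates> | null = null
let fetchCount = 0

async function stubFetchRates(): Promise<Rates> {
  fetchCount++
  return { USD: 1, GBP: 0.79, EUR: 0.92 }
}

async function getRates(): Promise<Rates> {
  if (cache && Date.now() - cache.fetchedAt < 60 * 60 * 1000) return cache.rates
  if (inflight) return inflight // concurrent caller: reuse the in-flight fetch
  inflight = (async () => {
    try {
      const rates = await stubFetchRates()
      cache = { rates, fetchedAt: Date.now() }
      return rates
    } finally {
      inflight = null
    }
  })()
  return inflight
}

async function convertFromUSD(amount: number, currency: string): Promise<number> {
  if (currency === 'USD') return amount // passthrough, no fetch
  const rate = (await getRates())[currency]
  return rate ? amount * rate : amount // unknown code: keep the USD amount
}
```

Running three conversions concurrently leaves `fetchCount` at 1, which is exactly what the deduplication test in the Vitest spec asserts.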


@@ -12,8 +12,14 @@ export interface UserPreferences {
community?: {
blocklist_share?: boolean
}
display?: {
currency?: string
}
}
const CURRENCY_LS_KEY = 'snipe:currency'
const DEFAULT_CURRENCY = 'USD'
const apiBase = (import.meta.env.VITE_API_BASE as string) ?? ''
export const usePreferencesStore = defineStore('preferences', () => {
@@ -26,14 +32,34 @@ export const usePreferencesStore = defineStore('preferences', () => {
const affiliateByokId = computed(() => prefs.value.affiliate?.byok_ids?.ebay ?? '')
const communityBlocklistShare = computed(() => prefs.value.community?.blocklist_share ?? false)
// displayCurrency: DB preference for logged-in users, localStorage for anon users
const displayCurrency = computed((): string => {
return prefs.value.display?.currency ?? DEFAULT_CURRENCY
})
async function load() {
if (!session.isLoggedIn) return
if (!session.isLoggedIn) {
// Anonymous user: read currency from localStorage
const stored = localStorage.getItem(CURRENCY_LS_KEY)
if (stored) {
prefs.value = { ...prefs.value, display: { ...prefs.value.display, currency: stored } }
}
return
}
loading.value = true
error.value = null
try {
const res = await fetch(`${apiBase}/api/preferences`)
if (res.ok) {
prefs.value = await res.json()
const data: UserPreferences = await res.json()
// Migration: if logged in but no DB preference, fall back to localStorage value
if (!data.display?.currency) {
const lsVal = localStorage.getItem(CURRENCY_LS_KEY)
if (lsVal) {
data.display = { ...data.display, currency: lsVal }
}
}
prefs.value = data
}
} catch {
// Non-cloud deploy or network error — preferences unavailable
@@ -75,6 +101,18 @@ export const usePreferencesStore = defineStore('preferences', () => {
await setPref('community.blocklist_share', value)
}
async function setDisplayCurrency(code: string) {
const upper = code.toUpperCase()
// Optimistic local update so the UI reacts immediately
prefs.value = { ...prefs.value, display: { ...prefs.value.display, currency: upper } }
if (session.isLoggedIn) {
await setPref('display.currency', upper)
} else {
// Anonymous user: persist to localStorage only
localStorage.setItem(CURRENCY_LS_KEY, upper)
}
}
return {
prefs,
loading,
@@ -82,9 +120,11 @@
affiliateOptOut,
affiliateByokId,
communityBlocklistShare,
displayCurrency,
load,
setAffiliateOptOut,
setAffiliateByokId,
setCommunityBlocklistShare,
setDisplayCurrency,
}
})
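The resolution order the store implements — DB preference for logged-in users, then the localStorage fallback, then the USD default — reduces to a small pure function. `resolveCurrency` below is an illustrative helper, not part of the store's exported API.

```typescript
// Resolution order for the display currency, as implemented by load() and
// setDisplayCurrency() above: DB preference, then localStorage, then 'USD'.
function resolveCurrency(dbValue: string | undefined, lsValue: string | null): string {
  return dbValue ?? lsValue ?? 'USD'
}
```

The localStorage-to-DB migration in `load()` is the one-time step that makes `dbValue` win on every subsequent load.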


@@ -145,6 +145,7 @@ export const useSearchStore = defineStore('search', () => {
_abort?.abort()
_abort = null
loading.value = false
closeUpdates()
}
async function search(q: string, filters: SearchFilters = {}) {
@@ -158,8 +159,6 @@
error.value = null
try {
// TODO: POST /api/search with { query: q, filters }
// API does not exist yet — stub returns empty results
// VITE_API_BASE is '' in dev; '/snipe' under menagerie (baked at build time by Vite)
const apiBase = (import.meta.env.VITE_API_BASE as string) ?? ''
const params = new URLSearchParams({ q })
@@ -174,51 +173,36 @@
if (filters.mustExclude?.trim()) params.set('must_exclude', filters.mustExclude.trim())
if (filters.categoryId?.trim()) params.set('category_id', filters.categoryId.trim())
if (filters.adapter && filters.adapter !== 'auto') params.set('adapter', filters.adapter)
const res = await fetch(`${apiBase}/api/search?${params}`, { signal })
// Use the async endpoint: returns 202 immediately with a session_id, then
// streams listings + trust scores via SSE as the scrape completes.
const res = await fetch(`${apiBase}/api/search/async?${params}`, { signal })
if (!res.ok) throw new Error(`Search failed: ${res.status} ${res.statusText}`)
const data = await res.json() as {
listings: Listing[]
trust_scores: Record<string, TrustScore>
sellers: Record<string, Seller>
market_price: number | null
adapter_used: 'api' | 'scraper'
affiliate_active: boolean
session_id: string | null
session_id: string
status: 'queued'
}
results.value = data.listings ?? []
trustScores.value = new Map(Object.entries(data.trust_scores ?? {}))
sellers.value = new Map(Object.entries(data.sellers ?? {}))
marketPrice.value = data.market_price ?? null
adapterUsed.value = data.adapter_used ?? null
affiliateActive.value = data.affiliate_active ?? false
saveCache({
query: q,
results: results.value,
trustScores: data.trust_scores ?? {},
sellers: data.sellers ?? {},
marketPrice: marketPrice.value,
adapterUsed: adapterUsed.value,
})
// Open SSE stream if any scores are partial and a session_id was provided
const hasPartial = Object.values(data.trust_scores ?? {}).some(ts => ts.score_is_partial)
if (data.session_id && hasPartial) {
_openUpdates(data.session_id, apiBase)
}
// HTTP 202 received — scraping is underway in the background.
// Stay in loading state until the first "listings" SSE event arrives.
// loading.value stays true; enriching tracks the SSE stream being open.
enriching.value = true
_openUpdates(data.session_id, apiBase)
} catch (e) {
if (e instanceof DOMException && e.name === 'AbortError') {
// User cancelled — clear loading but don't surface as an error
results.value = []
loading.value = false
} else {
error.value = e instanceof Error ? e.message : 'Unknown error'
results.value = []
loading.value = false
}
} finally {
loading.value = false
_abort = null
}
// Note: loading.value is NOT set to false here — it stays true until the
// first "listings" SSE event arrives (see _openUpdates handler below).
}
function closeUpdates() {
@@ -229,34 +213,115 @@ export const useSearchStore = defineStore('search', () => {
enriching.value = false
}
// Internal type for typed SSE events from the async search endpoint
type _AsyncListingsEvent = {
type: 'listings'
listings: Listing[]
trust_scores: Record<string, TrustScore>
sellers: Record<string, Seller>
market_price: number | null
adapter_used: 'api' | 'scraper'
affiliate_active: boolean
session_id: string
}
type _MarketPriceEvent = {
type: 'market_price'
market_price: number | null
}
type _UpdateEvent = {
type: 'update'
platform_listing_id: string
trust_score: TrustScore
seller: Seller
market_price: number | null
}
type _LegacyUpdateEvent = {
platform_listing_id: string
trust_score: TrustScore
seller: Record<string, unknown>
market_price: number | null
}
type _SSEEvent =
| _AsyncListingsEvent
| _MarketPriceEvent
| _UpdateEvent
| _LegacyUpdateEvent
function _openUpdates(sessionId: string, apiBase: string) {
closeUpdates() // close any previous stream
enriching.value = true
// Close any pre-existing stream but preserve enriching state — caller sets it.
if (_sse) {
_sse.close()
_sse = null
}
const es = new EventSource(`${apiBase}/api/updates/${sessionId}`)
_sse = es
es.onmessage = (e) => {
try {
const update = JSON.parse(e.data) as {
platform_listing_id: string
trust_score: TrustScore
seller: Record<string, unknown>
market_price: number | null
}
if (update.platform_listing_id && update.trust_score) {
trustScores.value = new Map(trustScores.value)
trustScores.value.set(update.platform_listing_id, update.trust_score)
}
if (update.seller) {
const s = update.seller as Seller
if (s.platform_seller_id) {
sellers.value = new Map(sellers.value)
sellers.value.set(s.platform_seller_id, s)
const update = JSON.parse(e.data) as _SSEEvent
if ('type' in update) {
// Typed events from the async search endpoint
if (update.type === 'listings') {
// First batch: hydrate store and transition out of loading state
results.value = update.listings ?? []
trustScores.value = new Map(Object.entries(update.trust_scores ?? {}))
sellers.value = new Map(Object.entries(update.sellers ?? {}))
marketPrice.value = update.market_price ?? null
adapterUsed.value = update.adapter_used ?? null
affiliateActive.value = update.affiliate_active ?? false
saveCache({
query: query.value,
results: results.value,
trustScores: update.trust_scores ?? {},
sellers: update.sellers ?? {},
marketPrice: marketPrice.value,
adapterUsed: adapterUsed.value,
})
// Scrape complete — turn off the initial loading spinner.
// enriching stays true while enrichment SSE is still open.
loading.value = false
} else if (update.type === 'market_price') {
if (update.market_price != null) {
marketPrice.value = update.market_price
}
} else if (update.type === 'update') {
// Per-seller enrichment update (same as legacy format but typed)
if (update.platform_listing_id && update.trust_score) {
trustScores.value = new Map(trustScores.value)
trustScores.value.set(update.platform_listing_id, update.trust_score)
}
if (update.seller?.platform_seller_id) {
sellers.value = new Map(sellers.value)
sellers.value.set(update.seller.platform_seller_id, update.seller)
}
if (update.market_price != null) {
marketPrice.value = update.market_price
}
}
// type: "error" — no special handling; stream will close via 'done'
} else {
// Legacy enrichment update (no type field) from synchronous search path
const legacy = update as _LegacyUpdateEvent
if (legacy.platform_listing_id && legacy.trust_score) {
trustScores.value = new Map(trustScores.value)
trustScores.value.set(legacy.platform_listing_id, legacy.trust_score)
}
if (legacy.seller) {
const s = legacy.seller as Seller
if (s.platform_seller_id) {
sellers.value = new Map(sellers.value)
sellers.value.set(s.platform_seller_id, s)
}
}
if (legacy.market_price != null) {
marketPrice.value = legacy.market_price
}
}
if (update.market_price != null) {
marketPrice.value = update.market_price
}
} catch {
// malformed event — ignore
@@ -268,6 +333,8 @@ export const useSearchStore = defineStore('search', () => {
})
es.onerror = () => {
// If loading is still true (never got a "listings" event), clear it
loading.value = false
closeUpdates()
}
}


@@ -69,6 +69,28 @@
>{{ opt.label }}</button>
</div>
</div>
<!-- Display currency -->
<div class="settings-toggle">
<div class="settings-toggle-text">
<span class="settings-toggle-label">Display currency</span>
<span class="settings-toggle-desc">
Listing prices are converted from USD using live exchange rates.
Rates update hourly.
</span>
</div>
<select
id="display-currency"
class="settings-select"
:value="prefs.displayCurrency"
aria-label="Select display currency"
@change="prefs.setDisplayCurrency(($event.target as HTMLSelectElement).value)"
>
<option v-for="opt in currencyOptions" :key="opt.code" :value="opt.code">
{{ opt.code }} {{ opt.label }}
</option>
</select>
</div>
</section>
<!-- Affiliate Links only shown to signed-in cloud users -->
@@ -166,6 +188,18 @@ const themeOptions: { value: 'system' | 'dark' | 'light'; label: string }[] = [
{ value: 'dark', label: 'Dark' },
{ value: 'light', label: 'Light' },
]
const currencyOptions: { code: string; label: string }[] = [
{ code: 'USD', label: 'US Dollar' },
{ code: 'EUR', label: 'Euro' },
{ code: 'GBP', label: 'British Pound' },
{ code: 'CAD', label: 'Canadian Dollar' },
{ code: 'AUD', label: 'Australian Dollar' },
{ code: 'JPY', label: 'Japanese Yen' },
{ code: 'CHF', label: 'Swiss Franc' },
{ code: 'MXN', label: 'Mexican Peso' },
{ code: 'BRL', label: 'Brazilian Real' },
{ code: 'INR', label: 'Indian Rupee' },
]
const session = useSessionStore()
const prefs = usePreferencesStore()
const { autoRun: llmAutoRun, setAutoRun: setLLMAutoRun } = useLLMQueryBuilder()
@@ -346,6 +380,24 @@ function saveByokId() {
margin: 0;
}
.settings-select {
padding: var(--space-2) var(--space-3);
background: var(--color-surface);
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text);
font-size: 0.875rem;
font-family: inherit;
cursor: pointer;
outline: none;
flex-shrink: 0;
transition: border-color 0.15s ease;
}
.settings-select:focus {
border-color: var(--app-primary);
}
.theme-btn-group {
display: flex;
gap: 0;