Compare commits


18 commits
v0.5.3 ... main

Author SHA1 Message Date
ae0d4fbc89 docs(screenshots): add search results view showing trust scores, STEAL badges, and market price comparison
2026-05-06 10:19:38 -07:00
8ba07b9766 docs(screenshots): retake hero after CSS theme fix — consistent warm light theme throughout
2026-05-06 09:58:39 -07:00
d7c8a8bca6 docs(readme): landing page rewrite — corrected tagline, hero screenshot, platform table, sniping engine roadmap, split license
2026-05-06 08:51:37 -07:00
108f63b4f2 fix(browser-pool): replace queue with thread-local storage to fix Playwright cross-thread crash (#53)
Playwright's sync API binds its greenlet event loop to the creating thread.
Sharing pre-warmed slots across threads caused "cannot switch to a different
thread" panics under uvicorn. New design: each worker thread owns its own
Playwright instance created lazily on first fetch_html() call. A registry
dict keyed by thread-id lets stop() close all slots at shutdown. Removes
ThreadPoolExecutor warmup and idle-cleanup daemon thread entirely.
2026-05-04 09:27:20 -07:00
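The thread-local redesign described in this commit can be sketched roughly as follows (a minimal illustration: class and method names are stand-ins, and the real pool drives actual Playwright instances rather than placeholder dicts):

```python
import threading

class BrowserPool:
    """Sketch: each worker thread lazily creates and owns its own slot,
    so Playwright's greenlet loop never crosses threads; a registry keyed
    by thread id lets stop() close every slot at shutdown."""

    def __init__(self):
        self._local = threading.local()      # per-thread slot storage
        self._registry = {}                  # thread-id -> slot, for shutdown
        self._registry_lock = threading.Lock()

    def _create_slot(self):
        # Stand-in for starting a real Playwright instance in this thread
        return {"created_by": threading.get_ident()}

    def _get_slot(self):
        slot = getattr(self._local, "slot", None)
        if slot is None:
            slot = self._create_slot()       # created lazily on first use
            self._local.slot = slot
            with self._registry_lock:
                self._registry[threading.get_ident()] = slot
        return slot

    def fetch_html(self, url):
        slot = self._get_slot()
        return slot  # a real implementation would drive the browser here

    def stop(self):
        with self._registry_lock:
            for slot in self._registry.values():
                pass  # close each thread's Playwright instance here
            self._registry.clear()
```

Each thread that calls `fetch_html()` gets its own slot, and `stop()` can still reach all of them, which is what removes the need for the warmup executor and idle-cleanup daemon.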
bccedb1fe5 fix(trust): treat feedback_ratio=0.0 as missing data for buyer-only/returning sellers (#52)
eBay omits the 12-month positive percentage for returning sellers and
buyer-only accounts with no recent sales. Previously ratio=0.0 with
count>0 triggered established_bad_actor; now it returns None from the
scorer (score_is_partial=True) and emits a soft no_recent_seller_data
flag instead. ratio=0.0 with count=0 is still treated as no-history.
2026-05-04 09:24:27 -07:00
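The distinction this commit draws can be sketched as below (function name and return shape are illustrative, not the project's actual scorer API):

```python
def score_feedback_ratio(ratio, count):
    """Sketch: ratio=0.0 is only meaningful alongside a feedback count.
    Returns (score_or_None, flags); None means missing data (partial score)."""
    if count == 0:
        return 0, ["no_history"]             # no feedback at all: still no-history
    if ratio == 0.0:
        # eBay omits the 12-month percentage for buyer-only / returning
        # sellers; treat as missing data, not as an established bad actor
        return None, ["no_recent_seller_data"]
    if ratio < 0.80:
        return 5, ["established_bad_actor"]  # simplified hard-flag path
    return 20, []
```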
89d3862f62 feat(monitor): background saved-search monitoring with watch alerts (#12)
Backend:
- Migrations 013-015: eBay user tokens, monitor settings on saved_searches
  (monitor_enabled, poll_interval_min, min_trust_score, last_checked_at),
  watch_alerts table with UNIQUE dedup on (saved_search_id, platform_listing_id),
  active_monitors registry for cross-user polling
- WatchAlert model + store methods: upsert_alert, list_alerts, dismiss_alert,
  count_undismissed_alerts, dismiss_all_alerts, list_active_monitors
- monitor.py: run_monitor_search() using TrustScorer.score_batch(); should_alert()
  with BIN/auction/partial-score logic (auction window = 24h, partial +10 buffer)
- PATCH /api/saved-searches/{id}/monitor, GET /api/alerts, POST /api/alerts/*/dismiss
- Background polling loop at startup (asyncio.to_thread every 60s check cycle)
- ebay/adapter.py: enrich_seller_trading_api() via Trading API GetUser (OAuth token)
- nginx: raise proxy_read_timeout to 120s for slow eBay search responses

Frontend:
- AlertBell component: bell button + unread badge + panel with dismiss/clear-all;
  polls /api/alerts every 2 minutes; aria-live announcement on count change
- alerts.ts Pinia store: fetchAlerts, dismiss, dismissAll
- SavedSearchesView: monitor toggle + poll interval + min trust score controls
- SettingsView: eBay OAuth connect/disconnect section
- AppNav: AlertBell wired for logged-in and local-tier users

Tests: 24 monitor tests (should_alert branches, store alert CRUD, run_monitor_search
with mocked adapter); fix browser_pool test assertions for new wait_for_* params.
2026-05-04 08:24:56 -07:00
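One plausible reading of the `should_alert()` logic above (auction window = 24h, partial scores given a +10 buffer, here interpreted as lowering the threshold so missing signals don't suppress alerts; all names are illustrative):

```python
AUCTION_WINDOW_H = 24
PARTIAL_BUFFER = 10

def should_alert(listing_type, trust_score, score_is_partial,
                 min_trust_score, hours_until_end=None):
    """Sketch of the alert gate: BIN listings alert immediately once they
    clear the trust threshold; auctions only inside the final 24h window."""
    threshold = min_trust_score
    if score_is_partial:
        threshold -= PARTIAL_BUFFER  # be more permissive on incomplete data
    if trust_score is None or trust_score < threshold:
        return False
    if listing_type == "auction":
        return hours_until_end is not None and hours_until_end <= AUCTION_WINDOW_H
    return True  # BIN: alert right away
```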
ac5e6166c9 docs: update roadmap — mark shipped issues, add platform expansion section
Reorganize roadmap into 5 sections: Intelligence features, Platform
expansion, Cloud/infrastructure, Auction sniping engine, Already shipped.

Mark #1, #2, #4, #6, #8, #11, #27, #29, #47, #48, #49, #50 as shipped
(all closed in Forgejo). Add #21, #43, #51, #52 (open intelligence
issues), #45 (Postgres migration), #46 (ActivityPub), #53 (BrowserPool
thread-safety). Promote Mercari to Platform expansion section.
2026-05-04 07:29:44 -07:00
8e0ec01b8f docs: update README for Mercari Phase 2 2026-05-04 07:24:16 -07:00
15996472b7 feat(mercari): Phase 2 — MercariAdapter with Xvfb stability fixes
Implements full Mercari scraping support for the trust-scoring pipeline:

- `app/platforms/mercari/` — new MercariAdapter (scraper-based), scraper
  (parse_search_html / parse_listing_html), and __init__
- `app/platforms/__init__.py` — adds "mercari" to SUPPORTED_PLATFORMS
- `api/main.py` — platform routing: _make_adapter, OR-group guard, seller
  lookup, BTF/Trading API guards all parameterised by platform
- `web/src/views/SearchView.vue` — enables Mercari tab in platform picker

BrowserPool stability fixes (browser_pool.py):
- Add -ac flag to Xvfb (disables X11 auth requirement in Docker containers)
- Shift display counter from :100-:199 to :200-:399 (avoids ghost kernel
  socket conflicts with low-numbered displays)
- Add wait_for_selector / wait_for_timeout_ms params to fetch_html,
  _fetch_with_slot, _fetch_fresh
- Add time.sleep(0.3) in _fetch_fresh after Xvfb start (was missing)

Mercari scraper fix:
- Remove sortBy=SORT_SCORE from build_search_url — that param is deprecated
  on Mercari and causes an empty 85KB response instead of search results

Probe + debug scripts in scripts/:
- probe_mercari.py — standalone Cloudflare bypass test
- debug_fetch_fresh.py — pool simulation diagnostic

Trust signal coverage: feedback_count, feedback_ratio partial score
(account_age_days, category_history absent = score_is_partial=True).
get_completed_sales stubbed for Phase 3.
Tracks: snipe#53 (pool thread-safety fix, follow-up)
2026-05-03 18:39:25 -07:00
f48f8ef80f feat: multi-platform scaffolding — phase 1 (eBay-only, wire complete)
Backend:
- app/platforms/__init__.py: add SUPPORTED_PLATFORMS frozenset (single
  source of truth for platform validation); add must_include_mode and
  adapter fields to SearchFilters dataclass
- api/main.py: add platform: str = Query("ebay") to both /api/search
  and /api/search/async; validate against SUPPORTED_PLATFORMS (422 on
  unknown platform); thread platform into structured log lines; document
  Phase 2 registry extension point in _make_adapter

Frontend:
- SearchView.vue: platform tab strip (eBay active, Mercari + Poshmark
  disabled with "soon" badge) above search bar; eBay-specific controls
  (category select, data source, pages, keywords) hidden when platform
  !== 'ebay'; platform passed to SearchProgress
- search.ts: platform?: string added to SearchFilters; included in
  async search params when non-eBay
- SearchProgress.vue: platform prop + PLATFORM_LABELS map; status line
  reads "Searching eBay for…" / "Searching Mercari for…" dynamically
2026-05-02 20:09:36 -07:00
b993f6f4a9 feat(ux): active search indicator + Candycore easter egg theme
Search indicator:
- SearchProgress.vue: indeterminate amber progress bar + status line
  + 4 staggered skeleton cards shown while loading=true and no results yet
  (fills the previously-blank results area during initial scrape phase)
- Re-search badge: blue "Re-searching…" pill in toolbar when loading=true
  over existing stale results (distinct from the amber enrichment badge)

Candycore theme:
- New [data-candycore="active"] CSS block; palette sourced from
  snipe_v0_Neon_IPad_Paint.jpeg — purple-black sky, lavender primary,
  cyan glow, yellow crown, bubblegum pink text
- useCandycoreMode.ts: word trigger ("neon", typed outside form fields),
  ascending arpeggio audio, localStorage persistence, restore on reload
- Mutually exclusive with Snipe Mode (each deactivates the other)
- Added :not([data-candycore="active"]) guards to existing dark/light
  theme override selectors so they don't stomp on Candycore
2026-05-01 23:11:36 -07:00
05f845962f fix(trust): soften established_bad_actor for high-volume sellers; add declining_ratio flag
Fixes a false-positive edge case (snipe#52) where sellers with 500+
lifetime feedback were hard-flagged as established_bad_actor when the
12-month ratio dipped below 80% — even though the 12-month window may
cover only a small recent sample relative to lifetime history.

Changes:
- established_bad_actor hard filter now only fires for accounts with
  20–500 lifetime feedback (unchanged behavior for moderate accounts)
- Accounts >500 feedback with ratio 60–80%: new declining_ratio soft flag
  (composite score penalised but not zeroed, no hard block)
- Accounts >500 feedback with ratio <60%: still established_bad_actor
  (catastrophically bad even for high-volume sellers)
- Two new constants: HARD_FILTER_BAD_RATIO_MAX_COUNT=500,
  HARD_FILTER_BAD_RATIO_HIGH_THRESHOLD=0.60

Note: buyer-feedback-only accounts (lifetime buyer history inflating
feedback_count for new sellers) requires profile-page scraping to detect
properly — tracked in snipe#52 as medium-term work.

Tests: 22 passed
2026-04-27 12:54:51 -07:00
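The three bands this commit describes, expressed with the two new constants (helper name and return shape are illustrative):

```python
HARD_FILTER_BAD_RATIO_MAX_COUNT = 500
HARD_FILTER_BAD_RATIO_HIGH_THRESHOLD = 0.60

def classify_bad_actor(feedback_count, ratio_12mo):
    """Sketch of the softened hard filter. Returns (hard_flag, soft_flags)."""
    if 20 <= feedback_count <= HARD_FILTER_BAD_RATIO_MAX_COUNT and ratio_12mo < 0.80:
        return "established_bad_actor", []      # unchanged moderate-account path
    if feedback_count > HARD_FILTER_BAD_RATIO_MAX_COUNT:
        if ratio_12mo < HARD_FILTER_BAD_RATIO_HIGH_THRESHOLD:
            return "established_bad_actor", []  # catastrophic at any volume
        if ratio_12mo < 0.80:
            return None, ["declining_ratio"]    # penalised, not zeroed
    return None, []
```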
0354234f86 refactor: replace hand-rolled JWT+Heimdall with cf-core CloudSessionFactory
Delegates JWT validation, Heimdall provision/tier-resolve, bypass-IP
handling, and guest session management to circuitforge_core. Snipe keeps
its own CloudUser (shared_db + user_db), SessionFeatures, compute_features,
and DB path helpers. Removes ~158 lines of duplicated auth code.

Note: get_session() now takes (Request, Response) — FastAPI auto-injects
both, no call-site changes needed.
2026-04-25 16:35:41 -07:00
ec0af07905 feat: add CF_APP_NAME=snipe to cloud compose for cf-orch pipeline attribution
2026-04-21 10:58:52 -07:00
7abc765fe7 Merge branch 'feature/perf-pool-cache'
2026-04-20 12:10:16 -07:00
0ec29f0551 feat(scraper): pre-warmed Chromium browser pool (BROWSER_POOL_SIZE=2 default) 2026-04-20 12:09:09 -07:00
29d2033ef2 feat: browser pool + search result cache (#47, #48)
2026-04-20 11:57:56 -07:00
a83e0957e2 feat(api): short-TTL search result cache (SEARCH_CACHE_TTL_S=300 default) 2026-04-20 11:53:27 -07:00
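A short-TTL result cache of the kind this commit adds can be sketched as (the class is illustrative; only the `SEARCH_CACHE_TTL_S=300` default is taken from the commit):

```python
import time

class SearchCache:
    """Sketch of a short-TTL search cache keyed by query hash."""

    def __init__(self, ttl_s=300):
        self.ttl_s = ttl_s
        self._store = {}  # query-hash -> (expires_at, results)

    def put(self, key, results, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (now + self.ttl_s, results)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None or entry[0] <= now:
            self._store.pop(key, None)  # expired or missing: evict
            return None
        return entry[1]
```

The `now` parameter exists only to make the sketch testable; a real implementation would use the clock directly.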
41 changed files with 4803 additions and 529 deletions

README.md

@@ -1,29 +1,86 @@
# Snipe — Auction Sniping & Listing Intelligence
<!-- Logo coming soon — replace docs/snipe-logo.svg when final icon ships -->
<div align="center">
<img src="docs/snipe-logo.svg" alt="Snipe logo" width="120" />
> *Part of the Circuit Forge LLC "AI for the tasks you hate most" suite.*
# Snipe
**Status:** Active — eBay listing intelligence MVP complete (search, trust scoring, affiliate links, feedback FAB, vision task scheduling). Auction sniping engine and multi-platform support are next.
**Auction intelligence and sniping for people who don't trust the platform.**
**[Documentation](https://docs.circuitforge.tech/snipe/)** · [circuitforge.tech](https://circuitforge.tech)
[![License: MIT / BSL 1.1](https://img.shields.io/badge/license-MIT%20%2F%20BSL%201.1-blue)](LICENSE)
[![Status: Beta](https://img.shields.io/badge/status-beta-yellow)]()
[![Forgejo](https://img.shields.io/badge/primary%20repo-Forgejo-orange)](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe)
[![Docs](https://img.shields.io/badge/docs-docs.circuitforge.tech%2Fsnipe-green)](https://docs.circuitforge.tech/snipe)
## Quick install (self-hosted)
*Part of the Circuit Forge LLC suite — "AI for the tasks the system made hard on purpose."*
</div>
**Requirements:** Docker with Compose plugin, Git. No API keys needed to get started.
---
<table>
<tr>
<td><img src="docs/screenshots/hero.png" alt="Snipe search page with filter panel and feature overview"/></td>
<td><img src="docs/screenshots/results.png" alt="Search results — trust score badges, STEAL price flags, seller feedback, and market price comparison"/></td>
</tr>
</table>
---
## Why Snipe?
Auction platforms are designed to make you act fast and trust blindly. The closing countdown, the hidden price history, the new-account seller with one feedback — all of it is structured against the buyer.
Snipe inverts that. Before you place a bid, you get a trust score built from five independently sourced signals: seller account age, feedback volume, feedback ratio, price versus recent completed sales, and category history. A hard-coded red flag for new accounts or bad actors overrides the composite. Soft flags surface buried damage disclosures, duplicate photos, and listings that have been sitting unsold for weeks. When the listing is priced well below market, you see a STEAL badge — sourced from eBay Marketplace Insights, not from the seller's description.
The sniping engine — precise last-second bid submission with NTP (Network Time Protocol) synchronization and soft-close handling — is next on the roadmap. The intelligence layer is live now.
---
## Features
### Listing intelligence (live)
- **Trust scoring** — five-signal composite score (0–100) per listing: account age, feedback count, feedback ratio, price vs. market, category history
- **Red flag detection** — hard flags for new accounts and established bad actors; soft flags for damage keywords, evasive language, duplicate photos, long-on-market listings, and significant price drops
- **Price vs. market** — listing price compared against completed-sale medians via eBay Marketplace Insights API (Browse API fallback)
- **Keyword filtering** — must-include (AND / ANY / OR-groups), must-exclude, category, price range; OR-groups expand into multiple targeted queries so eBay relevance doesn't silently drop variants
- **Saved searches** — one-click re-run that restores all filter settings
- **Background enrichment** — seller account age scraped via Playwright + Xvfb (Kasada/Cloudflare-safe headed Chromium); on-demand re-score per listing without re-searching
- **LLM query builder** — describe what you want in plain language; an LLM builds the search terms (paid tier)
- **Vision photo assessment** — condition scoring from listing photos via moondream2 locally or Claude vision (paid/cloud); VRAM-aware scheduling via circuitforge-core task scheduler
- **Affiliate link builder** — eBay Partner Network wrapping with user BYOK support and per-retailer disclosure
### Platforms
| Platform | Search | Trust scoring | Completed-sale comps |
|----------|--------|---------------|----------------------|
| **eBay** | Browse API + Playwright fallback | All 5 signals | Marketplace Insights + Browse fallback |
| **Mercari** | Playwright scraper | 3/5 signals (partial) | Phase 3 |
| CT Bids, HiBid, AuctionZip, Invaluable, GovPlanet, Bidsquare, Proxibid | Planned | Planned | Planned |
### Auction sniping engine (roadmap)
- NTP-synchronized last-second bid submission
- Soft-close detection and strategy adjustment
- Proxy bid ladder with configurable max
- Human approval gate before any bid executes
- Post-win workflow: payment routing, shipping coordination, provenance documentation
---
## Quick Start
**Requirements:** Docker with Compose plugin, Git. No API keys required to get started.
```bash
# One-line install — clones to ~/snipe by default
bash <(curl -fsSL https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/raw/branch/main/install.sh)
# Or clone manually and run the script:
git clone https://git.opensourcesolarpunk.com/Circuit-Forge/snipe.git
bash snipe/install.sh
```
Then open **http://localhost:8509**.
### Manual setup
Snipe's API image builds from a parent context that includes `circuitforge-core`. Both repos must sit as siblings:
```
workspace/
@@ -36,236 +93,86 @@ mkdir snipe-workspace && cd snipe-workspace
git clone https://git.opensourcesolarpunk.com/Circuit-Forge/snipe.git
git clone https://git.opensourcesolarpunk.com/Circuit-Forge/circuitforge-core.git
cd snipe
cp .env.example .env # add eBay API credentials if you have them (optional)
./manage.sh start
```
### Optional: eBay API credentials
Snipe works without credentials using its Playwright scraper fallback. Adding credentials unlocks faster searches and inline seller account age without an extra scrape:
1. Register at [developer.ebay.com](https://developer.ebay.com/my/keys)
2. Copy your Production **App ID** and **Cert ID** into `.env`
3. `./manage.sh restart`
---
## What it does
## Tiers
Snipe has two layers that work together:
| Tier | What you get |
|------|-------------|
| **Free** | eBay + Mercari search, full trust scoring, keyword filtering, saved searches — local LLM only |
| **Paid** | LLM query builder, background saved-search monitoring with alerts, cloud LLM option |
| **Premium** | Vision photo condition assessment, fine-tuned trust models, multi-user |
| **Ultra** | Human-in-the-loop operator — handles CAPTCHAs, phone calls, anything automation can't |
**Layer 1 — Listing intelligence (MVP, implemented)**
Before you bid, Snipe tells you whether a listing is worth your time. It fetches eBay listings, scores each seller's trustworthiness across five signals, flags suspicious pricing relative to completed sales, and surfaces red flags like new accounts, cosmetic damage buried in titles, and listings that have been sitting unsold for weeks.
**Layer 2 — Auction sniping (roadmap)**
Snipe manages the bid itself: monitors listings across platforms, schedules last-second bids, handles soft-close extensions, and guides you through the post-win logistics (payment routing, shipping coordination, provenance documentation for antiques).
The name comes from the origin of the word "sniping": common snipes are notoriously elusive birds, secretive and camouflaged, that flush suddenly from cover. Shooting one required extreme patience, stillness, and a precise last-second shot. That's the auction strategy.
License key format: `CFG-SNPE-XXXX-XXXX-XXXX`
---
## Screenshots
**Landing page — no account required**
![Snipe landing hero showing search bar and three feature tiles: Seller trust score, Price vs. market, Red flag detection](docs/screenshots/01-hero.png)
**Search results with trust scores**
![Search results for vintage film camera listings, each card showing a trust score badge, seller feedback, price, and market comparison](docs/screenshots/02-results.png)
**STEAL badge — price significantly below market**
![Listing cards with STEAL badge highlighting listings priced well below completed sales median](docs/screenshots/03-steal-badge.png)
> Red flag and Triple Red screenshots coming — captured opportunistically from real scammy listings.
---
## Implemented: eBay Listing Intelligence
### Search & filtering
- Full-text eBay search via Browse API (with Playwright scraper fallback when no API credentials configured)
- Price range, must-include keywords (AND / ANY / OR-groups mode), must-exclude terms, eBay category filter
- OR-group mode expands keyword combinations into multiple targeted queries and deduplicates results — eBay relevance won't silently drop variants
- Pages-to-fetch control: each Browse API page returns up to 200 listings
- Saved searches with one-click re-run that restores all filter settings
### Seller trust scoring
Five signals, each scored 0–20, composited to 0–100:
| Signal | What it measures |
|--------|-----------------|
| `account_age` | Days since eBay account registration |
| `feedback_count` | Total feedback received |
| `feedback_ratio` | Positive feedback percentage |
| `price_vs_market` | Listing price vs. median of recent completed sales |
| `category_history` | Whether seller has history selling in this category |
Scores are marked **partial** when signals are unavailable (e.g. account age not yet enriched). Partial scores are displayed with a visual indicator rather than penalizing the seller for missing data.
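A minimal sketch of how a five-signal composite with partial handling could work (assumed shape only; signal names are from the table above, everything else is illustrative):

```python
SIGNALS = ("account_age", "feedback_count", "feedback_ratio",
           "price_vs_market", "category_history")

def composite_score(signal_scores):
    """signal_scores maps signal name -> 0-20 score, or None if unavailable.
    Missing signals mark the result partial instead of penalising the seller:
    the composite is rescaled over the signals that are present."""
    available = {k: v for k, v in signal_scores.items() if v is not None}
    partial = len(available) < len(SIGNALS)
    if not available:
        return None, True
    # Scale the available signals up to the 0-100 range
    score = round(sum(available.values()) * 100 / (20 * len(available)))
    return score, partial
```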
### Red flags
Hard filters that override the composite score:
- `new_account` — account registered within 7 days
- `established_bad_actor` — feedback ratio < 80% with 20+ reviews
Soft flags surfaced as warnings:
- `account_under_30_days` — account under 30 days old
- `low_feedback_count` — fewer than 10 reviews
- `suspicious_price` — listing price below 50% of market median *(suppressed automatically when the search returns a heterogeneous price distribution — e.g. mixed laptop generations — to prevent false positives)*
- `duplicate_photo` — same image found on another listing (perceptual hash)
- `scratch_dent_mentioned` — title keywords indicating cosmetic damage, functional problems, or evasive language (see below)
- `long_on_market` — listing has been seen 5+ times over 14+ days without selling
- `significant_price_drop` — current price more than 20% below first-seen price
### Scratch & dent title detection
Scans listing titles for signals the item may have undisclosed damage or problems:
- **Explicit damage**: scratch, scuff, dent, crack, chip, blemish, worn
- **Condition catch-alls**: as is, for parts, parts only, spares or repair
- **Evasive redirects**: "see description", "read description", "see photos for" (seller hiding damage detail in listing body)
- **Functional problems**: "not working", "stopped working", "no power", "dead on arrival", "powers on but", "faulty", "broken screen/hinge/port"
- **DIY/repair listings**: "needs repair", "needs tlc", "project laptop", "for repair", "sold as is"
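The keyword groups above lend themselves to a simple grouped substring scan, sketched here with an illustrative subset of the terms:

```python
# Illustrative subset of the title-scan keyword groups listed above
DAMAGE_TERMS = {
    "explicit_damage": ["scratch", "scuff", "dent", "crack", "chip"],
    "condition_catchall": ["as is", "for parts", "parts only"],
    "evasive_redirect": ["see description", "read description"],
    "functional_problem": ["not working", "no power", "faulty"],
}

def scan_title(title):
    """Return the sorted list of keyword groups that fire for a title."""
    lowered = title.lower()
    return sorted(group for group, terms in DAMAGE_TERMS.items()
                  if any(term in lowered for term in terms))
```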
### Seller enrichment
- **Inline (API adapter)**: account age filled from Browse API `registrationDate` field
- **Background (scraper)**: `/itm/` listing pages scraped for seller "Joined" date via Playwright + Xvfb (Kasada-safe headed Chromium)
- **On-demand**: ↻ button on any listing card triggers `POST /api/enrich` — runs enrichment and re-scores without waiting for a second search
- **Category history**: derived from the seller's accumulated listing data (Browse API `categories` field); improves with every search, no extra API calls
### Affiliate link builder
Listing cards surface eBay affiliate-wrapped URLs. Uses `circuitforge_core.affiliates.wrap_url` — resolution order: user opted out → plain URL; user has BYOK affiliate ID → their ID; CF env var set (`EBAY_AFFILIATE_ID`) → CF's ID; otherwise plain URL. Users can configure their own eBay Partner Network ID or opt out entirely in Settings.
Disclosure tooltip appears on first encounter per-session and on each wrapped link (per-retailer copy from `get_disclosure_text`).
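The resolution order above can be sketched like this (the real logic lives in `circuitforge_core.affiliates.wrap_url`; parameter names and the `campid` query parameter are illustrative assumptions here):

```python
import os

def wrap_url(url, user_opted_out=False, user_affiliate_id=None):
    """Sketch of the affiliate resolution order:
    opt-out -> plain URL; BYOK ID -> user's ID; EBAY_AFFILIATE_ID env
    var -> CF's ID; otherwise plain URL."""
    if user_opted_out:
        return url                             # opt-out always wins
    affiliate_id = user_affiliate_id or os.environ.get("EBAY_AFFILIATE_ID")
    if not affiliate_id:
        return url                             # no ID anywhere: plain URL
    sep = "&" if "?" in url else "?"
    return f"{url}{sep}campid={affiliate_id}"  # BYOK beats the CF env var
```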
### Feedback FAB
In-app feedback button (bottom-right FAB) opens a modal: title, description, optional screenshot. Posts to the CF feedback endpoint. Status probed on load; FAB hidden if endpoint unreachable.
### Vision task scheduling
Photo condition assessment tasks queued through `circuitforge_core.tasks.TaskScheduler` — VRAM-aware slot management shared with any other LLM workloads on the same host. Runs moondream2 locally (free tier) or Claude vision (paid/cloud). Results stored per-listing and update the trust score card.
### Market price comparison
Completed sales fetched via eBay Marketplace Insights API (with Browse API fallback for app tiers that don't have Insights access). Median stored per query hash, used to score `price_vs_market` across all listings in a search.
### Adapters
| Adapter | When used | Signals available |
|---------|-----------|-------------------|
| Browse API (`api`) | eBay API credentials configured | All signals; account age inline |
| Playwright scraper (`scraper`) | No credentials / forced | All signals except account age (async BTF enrichment) |
| `auto` (default) | — | API if credentials present, scraper otherwise |
---
## Stack
| Layer | Tech | Port |
|-------|------|------|
| Frontend | Vue 3 + Pinia + UnoCSS + Vite (nginx) | 8509 |
| API | FastAPI (uvicorn) | 8510 |
| Scraper | Playwright + playwright-stealth + Xvfb | — |
| DB | SQLite (`data/snipe.db`) | — |
| Core | circuitforge-core (editable install) | — |
## Running
```bash
./manage.sh start # start all services
./manage.sh stop # stop
./manage.sh restart # restart
./manage.sh logs # tail logs
./manage.sh open # open in browser
```
Cloud stack (shared DB, multi-user):
```bash
docker compose -f compose.cloud.yml -p snipe-cloud up -d
docker compose -f compose.cloud.yml -p snipe-cloud build api # after Python changes
```
---
## Stack
| Layer | Technology | Port |
|-------|-----------|------|
| Frontend | Vue 3 + Pinia + UnoCSS + Vite (served via nginx) | 8509 |
| API | FastAPI (uvicorn) | 8510 |
| Scraper | Playwright + playwright-stealth + Xvfb (Kasada/Cloudflare-safe headed Chromium) | — |
| Database | SQLite (`data/snipe.db`) | — |
| Core | circuitforge-core (editable install) | — |
The scraper stack uses headed Chromium via Xvfb (X virtual framebuffer) with playwright-stealth for all platform access. Headless and `requests`-based approaches are blocked by eBay and Mercari.
---
## Roadmap
## Documentation
### Near-term (eBay)
| Issue | Feature |
|-------|---------|
| [#1](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/1) | SSE/WebSocket live score push — enriched data appears without re-search |
| [#2](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/2) | eBay OAuth (Connect eBay Account) for full trust score access via Trading API |
| [#4](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/4) | Scammer database: community blocklist + batch eBay Trust & Safety reporting |
| [#5](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/5) | UPC/product lookup → LLM-crafted search terms (paid tier) |
| [#8](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/8) | "Triple Red" easter egg: CSS animation when all hard flags fire simultaneously |
| [#11](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/11) | Vision-based photo condition assessment — moondream2 (local) / Claude vision (cloud, paid) |
| [#12](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/12) | Background saved-search monitoring with configurable alerts |
### Cloud / infrastructure
| Issue | Feature |
|-------|---------|
| [#6](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/6) | Shared seller/scammer/comps DB across cloud users (public data, no re-scraping) |
| [#7](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/7) | Shared image hash DB — requires explicit opt-in consent (CF privacy-by-architecture) |
### Auction sniping engine
| Issue | Feature |
|-------|---------|
| [#9](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/9) | Bid scheduling + snipe execution (NTP-synchronized, soft-close handling, human approval gate) |
| [#13](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/13) | Post-win workflow: payment routing, shipping coordination, provenance documentation |
### Multi-platform expansion
| Issue | Feature |
|-------|---------|
| [#10](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe/issues/10) | CT Bids, HiBid, AuctionZip, Invaluable, GovPlanet, Bidsquare, Proxibid |
Full documentation at **[docs.circuitforge.tech/snipe](https://docs.circuitforge.tech/snipe)** — setup guide, trust scoring algorithm, platform adapter reference, API docs, and self-hosting notes.
---
## Primary platforms (full vision)
## Forgejo-primary
- **eBay** — general + collectibles *(search + trust scoring: implemented)*
- **CT Bids** — Connecticut state surplus and municipal auctions
- **GovPlanet / IronPlanet** — government surplus equipment
- **AuctionZip** — antique auction house aggregator (1,000+ houses)
- **Invaluable / LiveAuctioneers** — fine art and antiques
- **Bidsquare** — antiques and collectibles
- **HiBid** — estate auctions
- **Proxibid** — industrial and collector auctions
Snipe is developed and maintained on Forgejo at [git.opensourcesolarpunk.com/Circuit-Forge/snipe](https://git.opensourcesolarpunk.com/Circuit-Forge/snipe). GitHub and Codeberg are read-only mirrors. File issues and submit pull requests on Forgejo.
## Why auctions are hard
---
Online auctions are frustrating because:
- Winning requires being present at the exact closing moment — sometimes 2 AM
- Platforms vary wildly: some allow proxy bids, some don't; closing times extend on activity
- Scammers exploit auction urgency — new accounts, stolen photos, pressure to pay outside platform
- Price history is hidden — you don't know if an item is underpriced or a trap
- Sellers hide damage in descriptions rather than titles to avoid automated filters
- Shipping logistics for large / fragile antiques require coordination with the auction house
- Provenance documentation is inconsistent across auction houses
## Contributing
## Bidding strategy engine (planned)
## Sniping engine (planned)

- **Hard snipe**: submit bid N seconds before close (default: 8s)
- **Soft-close handling**: detect if platform extends on last-minute bids; adjust strategy
- **Proxy ladder**: set a max and let the engine bid in increments, reserving the snipe for the final window
- **Reserve detection**: identify likely reserve price from bid history patterns
- **Comparable sales**: pull recent auction results for same/similar items across platforms

## Post-win workflow (planned)

1. Payment method routing (platform-specific: CC, wire, check)
2. Shipping quote requests to approved carriers (freight / large items via uShip; parcel via FedEx/UPS)
3. Condition report request from auction house
4. Provenance packet generation (for antiques / fine art resale or insurance)
5. Add to inventory (for dealers / collectors tracking portfolio value)

## Tech notes

- Shared `circuitforge-core` scaffold (DB, LLM router, tier system, config)
- Platform adapters: currently eBay only; AuctionZip, Invaluable, HiBid, CT Bids planned (Playwright + API where available)
- Bid execution: Playwright automation with precise timing (NTP-synchronized)
- Soft-close detection: platform-specific rules engine
- Comparable sales: eBay completed listings via Marketplace Insights API + Browse API fallback
- Vision module: condition assessment from listing photos — moondream2 / Claude vision (paid tier stub in `app/trust/photo.py`)
- **Kasada bypass**: headed Chromium via Xvfb; all scraping uses this path — headless and `requests`-based approaches are blocked by eBay

## Product code (license key)

`CFG-SNPE-XXXX-XXXX-XXXX`

## License

Snipe uses a dual license:

| Component | License |
|-----------|---------|
| Discovery pipeline — scrapers, platform adapters, search, keyword filtering | [MIT](LICENSE-MIT) |
| LLM trust-scoring, query builder, vision assessment, AI features | [BSL 1.1](LICENSE-BSL) — free for personal non-commercial self-hosting; commercial use requires a paid license; converts to MIT after 4 years |

Bug reports and feature requests: open an issue on Forgejo. The discovery pipeline (scrapers, adapters, signal extraction) is MIT-licensed — pull requests welcome. AI trust-scoring features are BSL 1.1 — contributions are accepted but the license terms apply.

---

Privacy · Safety · Accessibility — co-equal, non-negotiable.

[circuitforge.tech](https://circuitforge.tech)
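The hard-snipe and soft-close behavior on the roadmap can be sketched as follows. This is a minimal illustration, not the shipped engine: `snipe_fire_time`, `wait_and_fire`, `adjusted_close`, and the `ntp_offset_s` parameter are all hypothetical names, and the soft-close rule shown is a generic "extend on last-minute bid" policy rather than any specific platform's.

```python
import time


def snipe_fire_time(close_epoch: float, lead_s: float = 8.0,
                    ntp_offset_s: float = 0.0) -> float:
    """Local-clock epoch at which to submit the bid.

    true_now = local_now + ntp_offset_s, so firing at true time
    (close - lead_s) means firing at local time
    (close - lead_s) - ntp_offset_s.
    """
    return close_epoch - lead_s - ntp_offset_s


def adjusted_close(close_epoch: float, last_bid_epoch: float,
                   extend_window_s: float, extend_by_s: float) -> float:
    """Generic soft-close rule: a bid landing inside the final
    extend_window_s pushes the close out to last_bid + extend_by_s."""
    if close_epoch - last_bid_epoch <= extend_window_s:
        return last_bid_epoch + extend_by_s
    return close_epoch


def wait_and_fire(close_epoch: float, submit, lead_s: float = 8.0,
                  ntp_offset_s: float = 0.0) -> None:
    """Sleep until the computed fire time, then call submit() once."""
    delay = snipe_fire_time(close_epoch, lead_s, ntp_offset_s) - time.time()
    if delay > 0:
        time.sleep(delay)
    submit()
```

On a soft-close platform the engine would re-run `adjusted_close` after each observed bid and reschedule the snipe against the new close time.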


@ -1,11 +1,9 @@
"""Cloud session resolution for Snipe FastAPI.
In local mode (CLOUD_MODE unset/false): all functions return a local CloudUser
with no auth checks, full tier access, and both DB paths pointing to SNIPE_DB.
In cloud mode (CLOUD_MODE=true): validates the cf_session JWT injected by Caddy
as X-CF-Session, resolves user_id, auto-provisions a free Heimdall license on
first visit, fetches the tier, and returns per-user DB paths.
Delegates JWT validation, Heimdall provisioning, tier resolution, and guest
session management to circuitforge_core.CloudSessionFactory. Snipe-specific
CloudUser (shared_db + user_db paths), SessionFeatures, and DB helpers are
kept here.
FastAPI usage:
@app.get("/api/search")
@ -18,15 +16,12 @@ from __future__ import annotations
import logging
import os
import re
import time
from dataclasses import dataclass
from pathlib import Path
from typing import Optional
import jwt as pyjwt
import requests
from fastapi import Depends, HTTPException, Request
from circuitforge_core.cloud_session import CloudSessionFactory as _CoreFactory
from fastapi import Depends, HTTPException, Request, Response
log = logging.getLogger(__name__)
@ -34,20 +29,13 @@ log = logging.getLogger(__name__)
CLOUD_MODE: bool = os.environ.get("CLOUD_MODE", "").lower() in ("1", "true", "yes")
CLOUD_DATA_ROOT: Path = Path(os.environ.get("CLOUD_DATA_ROOT", "/devl/snipe-cloud-data"))
DIRECTUS_JWT_SECRET: str = os.environ.get("DIRECTUS_JWT_SECRET", "")
CF_SERVER_SECRET: str = os.environ.get("CF_SERVER_SECRET", "")
HEIMDALL_URL: str = os.environ.get("HEIMDALL_URL", "https://license.circuitforge.tech")
HEIMDALL_ADMIN_TOKEN: str = os.environ.get("HEIMDALL_ADMIN_TOKEN", "")
# Local-mode DB paths (ignored in cloud mode)
_LOCAL_SNIPE_DB: Path = Path(os.environ.get("SNIPE_DB", "data/snipe.db"))
# Tier cache: user_id → (tier, fetched_at_epoch)
_TIER_CACHE: dict[str, tuple[str, float]] = {}
_TIER_CACHE_TTL = 300 # 5 minutes
TIERS = ["free", "paid", "premium", "ultra"]
_core = _CoreFactory(product="snipe")
# ── Domain ────────────────────────────────────────────────────────────────────
@ -90,97 +78,6 @@ def compute_features(tier: str) -> SessionFeatures:
)
# ── JWT validation ────────────────────────────────────────────────────────────
def _extract_session_token(header_value: str) -> str:
"""Extract cf_session value from a Cookie or X-CF-Session header string.
Returns the JWT token string, or "" if no valid session token is found.
Cookie strings like "snipe_guest=abc123" (no cf_session key) return ""
so the caller falls through to the guest/anonymous path rather than
passing a non-JWT string to validate_session_jwt().
"""
m = re.search(r'(?:^|;)\s*cf_session=([^;]+)', header_value)
if m:
return m.group(1).strip()
# Only treat as a raw JWT if it has exactly three base64url segments (header.payload.sig).
# Cookie strings like "snipe_guest=abc123" must NOT be forwarded to JWT validation.
stripped = header_value.strip()
if re.match(r'^[A-Za-z0-9\-_]+\.[A-Za-z0-9\-_]+\.[A-Za-z0-9\-_=]+$', stripped):
return stripped # bare JWT forwarded directly by Caddy
return "" # not a JWT and no cf_session cookie — treat as unauthenticated
def _extract_guest_token(cookie_header: str) -> str | None:
"""Extract snipe_guest UUID from the Cookie header, if present."""
m = re.search(r'(?:^|;)\s*snipe_guest=([^;]+)', cookie_header)
return m.group(1).strip() if m else None
def validate_session_jwt(token: str) -> str:
"""Validate a cf_session JWT and return the Directus user_id.
Uses HMAC-SHA256 verification against DIRECTUS_JWT_SECRET (same secret
cf-directus uses to sign session tokens). Returns user_id on success,
raises HTTPException(401) on failure.
Directus 11+ uses 'id' (not 'sub') for the user UUID in its JWT payload.
"""
try:
payload = pyjwt.decode(
token,
DIRECTUS_JWT_SECRET,
algorithms=["HS256"],
options={"require": ["id", "exp"]},
)
return payload["id"]
except Exception as exc:
log.debug("JWT validation failed: %s", exc)
raise HTTPException(status_code=401, detail="Session invalid or expired")
# ── Heimdall integration ──────────────────────────────────────────────────────
def _ensure_provisioned(user_id: str) -> None:
"""Idempotent: create a free Heimdall license for this user if none exists."""
if not HEIMDALL_ADMIN_TOKEN:
return
try:
requests.post(
f"{HEIMDALL_URL}/admin/provision",
json={"directus_user_id": user_id, "product": "snipe", "tier": "free"},
headers={"Authorization": f"Bearer {HEIMDALL_ADMIN_TOKEN}"},
timeout=5,
)
except Exception as exc:
log.warning("Heimdall provision failed for user %s: %s", user_id, exc)
def _fetch_cloud_tier(user_id: str) -> str:
"""Resolve tier from Heimdall with a 5-minute in-process cache."""
now = time.monotonic()
cached = _TIER_CACHE.get(user_id)
if cached and (now - cached[1]) < _TIER_CACHE_TTL:
return cached[0]
if not HEIMDALL_ADMIN_TOKEN:
return "free"
try:
resp = requests.post(
f"{HEIMDALL_URL}/admin/cloud/resolve",
json={"directus_user_id": user_id, "product": "snipe"},
headers={"Authorization": f"Bearer {HEIMDALL_ADMIN_TOKEN}"},
timeout=5,
)
tier = resp.json().get("tier", "free") if resp.ok else "free"
except Exception as exc:
log.warning("Heimdall tier resolve failed for user %s: %s", user_id, exc)
tier = "free"
_TIER_CACHE[user_id] = (tier, now)
return tier
# ── DB path helpers ───────────────────────────────────────────────────────────
def _shared_db_path() -> Path:
@ -209,58 +106,25 @@ def _anon_db_path() -> Path:
# ── FastAPI dependency ────────────────────────────────────────────────────────
def get_session(request: Request) -> CloudUser:
def get_session(request: Request, response: Response) -> CloudUser:
"""FastAPI dependency — resolves the current user from the request.
Local mode: returns a fully-privileged "local" user pointing at SNIPE_DB.
Delegates auth/tier resolution to cf-core CloudSessionFactory, then maps
the result to Snipe's CloudUser with shared_db + user_db paths.
Local mode: fully-privileged "local" user pointing at SNIPE_DB.
Cloud mode: validates X-CF-Session JWT, provisions Heimdall license,
resolves tier, returns per-user DB paths.
Unauthenticated cloud visitors: returns a free-tier anonymous user so
search and scoring work without an account.
Anonymous: guest session with free-tier access to shared scammer corpus.
"""
if not CLOUD_MODE:
return CloudUser(
user_id="local",
tier="local",
shared_db=_LOCAL_SNIPE_DB,
user_db=_LOCAL_SNIPE_DB,
)
core_user = _core.resolve(request, response)
uid, tier = core_user.user_id, core_user.tier
cookie_header = request.headers.get("cookie", "")
raw_header = request.headers.get("x-cf-session", "") or cookie_header
if not raw_header:
# No session at all — check for a guest UUID cookie set by /api/session
guest_uuid = _extract_guest_token(cookie_header)
user_id = f"guest:{guest_uuid}" if guest_uuid else "anonymous"
return CloudUser(
user_id=user_id,
tier="free",
shared_db=_shared_db_path(),
user_db=_anon_db_path(),
)
token = _extract_session_token(raw_header)
if not token:
guest_uuid = _extract_guest_token(cookie_header)
user_id = f"guest:{guest_uuid}" if guest_uuid else "anonymous"
return CloudUser(
user_id=user_id,
tier="free",
shared_db=_shared_db_path(),
user_db=_anon_db_path(),
)
user_id = validate_session_jwt(token)
_ensure_provisioned(user_id)
tier = _fetch_cloud_tier(user_id)
return CloudUser(
user_id=user_id,
tier=tier,
shared_db=_shared_db_path(),
user_db=_user_db_path(user_id),
)
if not CLOUD_MODE or uid in ("local", "local-dev"):
return CloudUser(user_id=uid, tier=tier, shared_db=_LOCAL_SNIPE_DB, user_db=_LOCAL_SNIPE_DB)
if uid.startswith("anon-"):
return CloudUser(user_id=uid, tier=tier, shared_db=_shared_db_path(), user_db=_anon_db_path())
return CloudUser(user_id=uid, tier=tier, shared_db=_shared_db_path(), user_db=_user_db_path(uid))
def require_tier(min_tier: str):


@ -5,12 +5,14 @@ import asyncio
import csv
import dataclasses
import hashlib
import hashlib as _hashlib
import io
import json as _json
import logging
import os
import queue as _queue
import re
import time as _time
import uuid
from concurrent.futures import ThreadPoolExecutor
from contextlib import asynccontextmanager
@ -22,7 +24,7 @@ from circuitforge_core.affiliates import wrap_url as _wrap_affiliate_url
from circuitforge_core.api import make_corrections_router as _make_corrections_router
from circuitforge_core.api import make_feedback_router as _make_feedback_router
from circuitforge_core.config import load_env
from fastapi import Depends, FastAPI, File, HTTPException, Request, Response, UploadFile
from fastapi import Depends, FastAPI, File, HTTPException, Query, Request, Response, UploadFile
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
@ -32,7 +34,7 @@ from api.ebay_webhook import router as ebay_webhook_router
from app.db.models import SavedSearch as SavedSearchModel
from app.db.models import ScammerEntry
from app.db.store import Store
from app.platforms import SearchFilters
from app.platforms import SUPPORTED_PLATFORMS, SearchFilters
from app.platforms.ebay.adapter import EbayAdapter
from app.platforms.ebay.auth import EbayTokenManager
from app.platforms.ebay.query_builder import expand_queries, parse_groups
@ -75,6 +77,61 @@ def _auth_label(user_id: str) -> str:
_update_queues: dict[str, _queue.SimpleQueue] = {}
# ── Short-TTL search result cache ────────────────────────────────────────────
# Caches raw eBay listings and market_price only — trust scores are NOT cached
# because they incorporate per-user signals (zero_feedback cap, etc.).
# On cache hit the trust scorer and seller lookups run against the local DB as
# normal; only the expensive Playwright/Browse API scrape is skipped.
#
# TTL is configurable via SEARCH_CACHE_TTL_S (default 300 s = 5 min).
# Listings are public eBay data — safe to share across all users.
_SEARCH_CACHE_TTL = int(os.environ.get("SEARCH_CACHE_TTL_S", "300"))
# key → ({"listings": [...raw dicts...], "market_price": float|None}, expiry_ts)
_search_result_cache: dict[str, tuple[dict, float]] = {}
# Throttle eviction sweeps to at most once per 60 s.
_last_eviction_ts: float = 0.0
def _cache_key(
q: str,
max_price: "float | None",
min_price: "float | None",
pages: int,
must_include: str,
must_include_mode: str,
must_exclude: str,
category_id: str,
) -> str:
"""Stable 16-char hex key for a search param set. Query is lower-cased + stripped."""
raw = (
f"{q.lower().strip()}|{max_price}|{min_price}|{pages}"
f"|{must_include.lower().strip()}|{must_include_mode}"
f"|{must_exclude.lower().strip()}|{category_id.strip()}"
)
return _hashlib.sha256(raw.encode()).hexdigest()[:16]
def _evict_expired_cache() -> None:
"""Remove stale entries from _search_result_cache.
Called opportunistically on each cache miss; rate-limited to once per 60 s
to avoid quadratic overhead when many concurrent misses arrive at once.
"""
global _last_eviction_ts
now = _time.time()
if now - _last_eviction_ts < 60.0:
return
_last_eviction_ts = now
expired = [k for k, (_, exp) in _search_result_cache.items() if exp <= now]
for k in expired:
_search_result_cache.pop(k, None)
if expired:
log.debug("cache: evicted %d expired entries", len(expired))
# ── Community DB (optional — only active when COMMUNITY_DB_URL is set) ────────
# Holds SnipeCommunityStore at module level so endpoints can publish signals
# without constructing a new connection pool on every request.
@ -97,6 +154,21 @@ def _get_query_translator():
@asynccontextmanager
async def _lifespan(app: FastAPI):
global _community_store
# Pre-warm the Chromium browser pool so the first scrape request does not
# pay the full cold-start cost (5-10s Xvfb + browser launch).
# Pool size is controlled via BROWSER_POOL_SIZE env var (default: 2).
import threading as _threading
from app.platforms.ebay.browser_pool import get_pool as _get_browser_pool
_browser_pool = _get_browser_pool()
_pool_thread = _threading.Thread(
target=_browser_pool.start, daemon=True, name="browser-pool-start"
)
_pool_thread.start()
log.info(
"BrowserPool: pre-warm started in background (BROWSER_POOL_SIZE=%s)",
os.environ.get("BROWSER_POOL_SIZE", "2"),
)
# Start vision/LLM background task scheduler.
# background_tasks queue lives in shared_db (cloud) or local_db (local)
# so the scheduler has a single stable DB path across all cloud users.
@ -204,6 +276,13 @@ async def _lifespan(app: FastAPI):
get_scheduler(sched_db).shutdown(timeout=10.0)
reset_scheduler()
log.info("Snipe task scheduler stopped.")
# Drain and close all pre-warmed browser pool slots.
try:
_browser_pool.stop()
except Exception:
log.warning("BrowserPool: error during shutdown", exc_info=True)
if _community_store is not None:
try:
_community_store._db.close()
@ -585,10 +664,10 @@ def _try_trading_api_enrichment(
return enriched
def _make_adapter(shared_store: Store, force: str = "auto"):
"""Return the appropriate adapter.
def _make_adapter(shared_store: Store, force: str = "auto", platform: str = "ebay"):
"""Return the appropriate adapter for the given platform.
force: "auto" | "api" | "scraper"
force: "auto" | "api" | "scraper" (ignored for non-eBay platforms)
auto API if creds present, else scraper
api Browse API (raises if no creds)
scraper Playwright scraper regardless of creds
@ -596,6 +675,11 @@ def _make_adapter(shared_store: Store, force: str = "auto"):
Adapters receive shared_store because they only read/write sellers and
market_comps, never listings. Listings are returned and saved by the caller.
"""
if platform == "mercari":
from app.platforms.mercari import MercariAdapter
return MercariAdapter(shared_store)
# eBay
client_id, client_secret, env = _ebay_creds()
has_creds = bool(client_id and client_secret)
@ -612,8 +696,10 @@ def _make_adapter(shared_store: Store, force: str = "auto"):
return ScrapedEbayAdapter(shared_store)
def _adapter_name(force: str = "auto") -> str:
def _adapter_name(force: str = "auto", platform: str = "ebay") -> str:
"""Return the name of the adapter that would be used — without creating it."""
if platform != "ebay":
return platform
client_id, client_secret, _ = _ebay_creds()
if force == "scraper":
return "scraper"
@ -633,8 +719,16 @@ def search(
must_exclude: str = "", # comma-separated; forwarded to eBay -term + client-side
category_id: str = "", # eBay category ID — forwarded to Browse API / scraper _sacat
adapter: str = "auto", # "auto" | "api" | "scraper" — override adapter selection
refresh: bool = False, # when True, bypass cache read (still writes fresh result)
platform: str = Query("ebay", description="Marketplace platform to search"),
session: CloudUser = Depends(get_session),
):
if platform not in SUPPORTED_PLATFORMS:
raise HTTPException(
status_code=422,
detail=f"Platform {platform!r} is not yet supported. Supported: {sorted(SUPPORTED_PLATFORMS)}",
)
# If the user pasted an eBay listing or checkout URL, extract the item ID
# and use it as the search query so the exact item surfaces in results.
ebay_item_id = _extract_ebay_item_id(q)
@ -643,7 +737,7 @@ def search(
q = ebay_item_id
if not q.strip():
return {"listings": [], "trust_scores": {}, "sellers": {}, "market_price": None, "adapter_used": _adapter_name(adapter)}
return {"listings": [], "trust_scores": {}, "sellers": {}, "market_price": None, "adapter_used": _adapter_name(adapter, platform=platform)}
# Cap pages to the tier's maximum — free cloud users get 1 page, local gets unlimited.
features = compute_features(session.tier)
@ -651,9 +745,8 @@ def search(
must_exclude_terms = _parse_terms(must_exclude)
# In Groups mode, expand OR groups into multiple targeted eBay queries to
# guarantee comprehensive result coverage — eBay relevance won't silently drop variants.
if must_include_mode == "groups" and must_include.strip():
# OR-group expansion is eBay-specific; other platforms use the base query directly.
if platform == "ebay" and must_include_mode == "groups" and must_include.strip():
or_groups = parse_groups(must_include)
ebay_queries = expand_queries(q, or_groups)
else:
@ -680,18 +773,129 @@ def search(
category_id=category_id.strip() or None,
)
adapter_used = _adapter_name(adapter)
adapter_used = _adapter_name(adapter, platform=platform)
shared_db = session.shared_db
user_db = session.user_db
# ── Cache lookup (synchronous endpoint) ──────────────────────────────────
cache_key = _cache_key(q, max_price, min_price, pages, must_include, must_include_mode, must_exclude, category_id)
cached_listings_dicts: "list | None" = None
cached_market_price: "float | None" = None
if not refresh:
cached = _search_result_cache.get(cache_key)
if cached is not None:
payload, expiry = cached
if expiry > _time.time():
log.info("cache: hit key=%s q=%r", cache_key, q)
cached_listings_dicts = payload["listings"]
cached_market_price = payload["market_price"]
if cached_listings_dicts is not None:
# Cache hit path: reconstruct listings as plain dicts (already serialised),
# re-run trust scorer against the local DB so per-user signals are fresh,
# and kick off background enrichment as normal.
import sqlite3 as _sqlite3
affiliate_active = bool(os.environ.get("EBAY_AFFILIATE_CAMPAIGN_ID", "").strip())
session_id = str(uuid.uuid4())
_update_queues[session_id] = _queue.SimpleQueue()
try:
shared_store = Store(shared_db)
user_store = Store(user_db)
# Re-hydrate Listing dataclass instances from the cached dicts so the
# scorer and DB calls receive proper typed objects.
from app.db.models import Listing as _Listing
listings = [_Listing(**d) for d in cached_listings_dicts]
# Re-save to user_store so staging fields are current for this session.
user_store.save_listings(listings)
staged = user_store.get_listings_staged("ebay", [l.platform_listing_id for l in listings])
listings = [staged.get(l.platform_listing_id, l) for l in listings]
# Fresh trust scores against local DB (not cached — user-specific).
scorer = TrustScorer(shared_store)
trust_scores_list = scorer.score_batch(listings, q)
user_store.save_trust_scores(trust_scores_list)
features = compute_features(session.tier)
if features.photo_analysis:
_enqueue_vision_tasks(listings, trust_scores_list, session)
trust_map = {
listing.platform_listing_id: dataclasses.asdict(ts)
for listing, ts in zip(listings, trust_scores_list)
if ts is not None
}
seller_map = {
listing.seller_platform_id: dataclasses.asdict(
shared_store.get_seller(platform, listing.seller_platform_id)
)
for listing in listings
if listing.seller_platform_id
and shared_store.get_seller(platform, listing.seller_platform_id)
}
_is_unauthed = session.user_id == "anonymous" or session.user_id.startswith("guest:")
_pref_store = None if _is_unauthed else user_store
def _get_pref_cached(uid: Optional[str], path: str, default=None):
return _pref_store.get_user_preference(path, default=default) # type: ignore[union-attr]
def _serialize_listing_cached(l: object) -> dict:
d = dataclasses.asdict(l)
d["url"] = _wrap_affiliate_url(
d["url"],
retailer="ebay",
user_id=None if _is_unauthed else session.user_id,
get_preference=_get_pref_cached if _pref_store is not None else None,
)
return d
# Kick off BTF enrichment so live score updates still flow.
_trigger_scraper_enrichment(
listings, shared_store, shared_db,
user_db=user_db, query=comp_query, session_id=session_id,
)
return {
"listings": [_serialize_listing_cached(l) for l in listings],
"trust_scores": trust_map,
"sellers": seller_map,
"market_price": cached_market_price,
"adapter_used": adapter_used,
"affiliate_active": affiliate_active,
"session_id": session_id,
}
except _sqlite3.OperationalError as e:
log.warning("search (cache hit) DB contention: %s", e)
_update_queues.pop(session_id, None)
return {
"listings": cached_listings_dicts,
"trust_scores": {},
"sellers": {},
"market_price": cached_market_price,
"adapter_used": adapter_used,
"affiliate_active": affiliate_active,
"session_id": None,
}
# ── Cache miss — run full scrape ─────────────────────────────────────────
_evict_expired_cache()
log.info("cache: miss key=%s q=%r", cache_key, q)
# Each thread creates its own Store — sqlite3 check_same_thread=True.
def _run_search(ebay_query: str) -> list:
return _make_adapter(Store(shared_db), adapter).search(ebay_query, base_filters)
return _make_adapter(Store(shared_db), adapter, platform=platform).search(ebay_query, base_filters)
def _run_comps() -> None:
try:
_make_adapter(Store(shared_db), adapter).get_completed_sales(comp_query, pages)
_make_adapter(Store(shared_db), adapter, platform=platform).get_completed_sales(comp_query, pages)
except Exception:
log.warning("comps: unhandled exception for %r", comp_query, exc_info=True)
@ -718,8 +922,8 @@ def search(
raise HTTPException(status_code=502, detail=f"eBay search failed: {e}")
log.info(
"search auth=%s tier=%s adapter=%s pages=%d queries=%d listings=%d q=%r",
_auth_label(session.user_id), session.tier, adapter_used,
"search platform=%s auth=%s tier=%s adapter=%s pages=%d queries=%d listings=%d q=%r",
platform, _auth_label(session.user_id), session.tier, adapter_used,
pages, len(ebay_queries), len(listings), q,
)
@ -740,25 +944,23 @@ def search(
user_store.save_listings(listings)
# Derive category_history from accumulated listing data — free for API adapter
# (category_name comes from Browse API response), no-op for scraper listings (category_name=None).
# Reads listings from user_store, writes seller categories to shared_store.
# Derive category_history from accumulated listing data — eBay only
# (category_name comes from Browse API response; other platforms return None).
seller_ids = list({l.seller_platform_id for l in listings if l.seller_platform_id})
if platform == "ebay":
n_cat = shared_store.refresh_seller_categories("ebay", seller_ids, listing_store=user_store)
if n_cat:
log.info("Category history derived for %d sellers from listing data", n_cat)
# Re-fetch to hydrate staging fields (times_seen, first_seen_at, id, price_at_first_seen)
# that are only available from the DB after the upsert.
staged = user_store.get_listings_staged("ebay", [l.platform_listing_id for l in listings])
staged = user_store.get_listings_staged(platform, [l.platform_listing_id for l in listings])
listings = [staged.get(l.platform_listing_id, l) for l in listings]
# Trading API enrichment: if the user has connected their eBay account, use
# Trading API GetUser to instantly fill account_age_days for sellers missing it.
# This is synchronous (~200ms per seller) but only runs for sellers that need
# enrichment — typically a small subset. Sellers resolved here are excluded from
# the slower BTF Playwright background pass.
_main_adapter = _make_adapter(shared_store, adapter)
# Trading API enrichment and BTF scraping are eBay-specific.
_main_adapter = _make_adapter(shared_store, adapter, platform=platform)
trading_api_enriched: set[str] = set()
if platform == "ebay":
sellers_needing_age = [
l.seller_platform_id for l in listings
if l.seller_platform_id
@ -772,9 +974,7 @@ def search(
_main_adapter, sellers_needing_age, user_db
)
# BTF enrichment: scrape /itm/ pages for sellers still missing account_age_days
# after the Trading API pass. Runs in the background so it doesn't delay the
# response. Live score updates are pushed to the pre-registered SSE queue.
# BTF enrichment: scrape /itm/ pages for sellers still missing account_age_days.
_trigger_scraper_enrichment(
listings, shared_store, shared_db,
user_db=user_db, query=comp_query, session_id=session_id,
@ -793,9 +993,17 @@ def search(
_enqueue_vision_tasks(listings, trust_scores_list, session)
query_hash = hashlib.md5(comp_query.encode()).hexdigest()
comp = shared_store.get_market_comp("ebay", query_hash)
comp = shared_store.get_market_comp(platform, query_hash)
market_price = comp.median_price if comp else None
# Store raw listings (as dicts) + market_price in cache.
# Trust scores and seller enrichment are intentionally excluded — they
# incorporate per-user signals and must be computed fresh each time.
_search_result_cache[cache_key] = (
{"listings": [dataclasses.asdict(l) for l in listings], "market_price": market_price},
_time.time() + _SEARCH_CACHE_TTL,
)
# Serialize — keyed by platform_listing_id for easy Vue lookup
trust_map = {
listing.platform_listing_id: dataclasses.asdict(ts)
@ -804,11 +1012,11 @@ def search(
}
seller_map = {
listing.seller_platform_id: dataclasses.asdict(
shared_store.get_seller("ebay", listing.seller_platform_id)
shared_store.get_seller(platform, listing.seller_platform_id)
)
for listing in listings
if listing.seller_platform_id
and shared_store.get_seller("ebay", listing.seller_platform_id)
and shared_store.get_seller(platform, listing.seller_platform_id)
}
# Build a preference reader for affiliate URL wrapping.
@ -873,6 +1081,8 @@ def search_async(
must_exclude: str = "",
category_id: str = "",
adapter: str = "auto",
refresh: bool = False, # when True, bypass cache read (still writes fresh result)
platform: str = Query("ebay", description="Marketplace platform to search"),
session: CloudUser = Depends(get_session),
):
"""Async variant of GET /api/search.
@ -888,6 +1098,12 @@ def search_async(
"seller": {...}, "market_price": ...} (enrichment updates)
None (sentinel stream finished)
"""
if platform not in SUPPORTED_PLATFORMS:
raise HTTPException(
status_code=422,
detail=f"Platform {platform!r} is not yet supported. Supported: {sorted(SUPPORTED_PLATFORMS)}",
)
# Validate / normalise params — same logic as synchronous endpoint.
ebay_item_id = _extract_ebay_item_id(q)
if ebay_item_id:
@ -904,7 +1120,7 @@ def search_async(
"trust_scores": {},
"sellers": {},
"market_price": None,
"adapter_used": _adapter_name(adapter),
"adapter_used": _adapter_name(adapter, platform=platform),
"affiliate_active": bool(os.environ.get("EBAY_AFFILIATE_CAMPAIGN_ID", "").strip()),
})
_update_queues[empty_id].put(None)
@ -923,16 +1139,18 @@ def search_async(
_tier = session.tier
_user_id = session.user_id
_affiliate_active = bool(os.environ.get("EBAY_AFFILIATE_CAMPAIGN_ID", "").strip())
_refresh = refresh # capture before the closure is dispatched
def _background_search() -> None:
"""Run the full search pipeline and push SSE events to the queue."""
import hashlib as _hashlib
import hashlib as _hashlib_local
import sqlite3 as _sqlite3
q_norm = q # captured from outer scope
must_exclude_terms = _parse_terms(must_exclude)
if must_include_mode == "groups" and must_include.strip():
# OR-group expansion is eBay-specific; other platforms use the base query directly.
if platform == "ebay" and must_include_mode == "groups" and must_include.strip():
or_groups = parse_groups(must_include)
ebay_queries = expand_queries(q_norm, or_groups)
else:
@ -954,7 +1172,7 @@ def search_async(
category_id=category_id.strip() or None,
)
adapter_used = _adapter_name(adapter)
adapter_used = _adapter_name(adapter, platform=platform)
q_ref = _update_queues.get(session_id)
if q_ref is None:
return # client disconnected before we even started
@ -965,13 +1183,107 @@ def search_async(
if sq is not None:
sq.put(event)
# ── Cache lookup (async background worker) ────────────────────────────
async_cache_key = _cache_key(
q_norm, max_price, min_price, pages,
must_include, must_include_mode, must_exclude, category_id,
)
if not _refresh:
cached = _search_result_cache.get(async_cache_key)
if cached is not None:
payload, expiry = cached
if expiry > _time.time():
log.info("cache: hit key=%s q=%r", async_cache_key, q_norm)
from app.db.models import Listing as _Listing
cached_listings_raw = payload["listings"]
cached_market_price = payload["market_price"]
try:
shared_store = Store(_shared_db)
user_store = Store(_user_db)
listings = [_Listing(**d) for d in cached_listings_raw]
user_store.save_listings(listings)
staged = user_store.get_listings_staged(
"ebay", [l.platform_listing_id for l in listings]
)
listings = [staged.get(l.platform_listing_id, l) for l in listings]
scorer = TrustScorer(shared_store)
trust_scores_list = scorer.score_batch(listings, q_norm)
user_store.save_trust_scores(trust_scores_list)
features_obj = compute_features(_tier)
if features_obj.photo_analysis:
from api.cloud_session import CloudUser as _CloudUser
_sess_stub = _CloudUser(
user_id=_user_id, tier=_tier,
shared_db=_shared_db, user_db=_user_db,
)
_enqueue_vision_tasks(listings, trust_scores_list, _sess_stub)
trust_map = {
listing.platform_listing_id: dataclasses.asdict(ts)
for listing, ts in zip(listings, trust_scores_list)
if ts is not None
}
seller_map = {
listing.seller_platform_id: dataclasses.asdict(
shared_store.get_seller("ebay", listing.seller_platform_id)
)
for listing in listings
if listing.seller_platform_id
and shared_store.get_seller("ebay", listing.seller_platform_id)
}
_is_unauthed = _user_id == "anonymous" or _user_id.startswith("guest:")
_pref_store_hit = None if _is_unauthed else user_store
def _get_pref_hit(uid: Optional[str], path: str, default=None):
return _pref_store_hit.get_user_preference(path, default=default) # type: ignore[union-attr]
def _serialize_hit(l: object) -> dict:
d = dataclasses.asdict(l)
d["url"] = _wrap_affiliate_url(
d["url"],
retailer="ebay",
user_id=None if _is_unauthed else _user_id,
get_preference=_get_pref_hit if _pref_store_hit is not None else None,
)
return d
_push({
"type": "listings",
"listings": [_serialize_hit(l) for l in listings],
"trust_scores": trust_map,
"sellers": seller_map,
"market_price": cached_market_price,
"adapter_used": adapter_used,
"affiliate_active": _affiliate_active,
"session_id": session_id,
})
# Enrichment still runs so live score updates flow.
_trigger_scraper_enrichment(
listings, shared_store, _shared_db,
user_db=_user_db, query=comp_query, session_id=session_id,
)
return # done — no scraping needed
except Exception as exc:
log.warning(
"cache hit path failed, falling through to scrape: %s", exc
)
# Fall through to full scrape below.
# ── Cache miss — evict stale entries, then scrape ─────────────────────
_evict_expired_cache()
log.info("cache: miss key=%s q=%r", async_cache_key, q_norm)
try:
def _run_search(ebay_query: str) -> list:
return _make_adapter(Store(_shared_db), adapter).search(ebay_query, base_filters)
return _make_adapter(Store(_shared_db), adapter, platform=platform).search(ebay_query, base_filters)
def _run_comps() -> None:
try:
_make_adapter(Store(_shared_db), adapter).get_completed_sales(comp_query, pages)
_make_adapter(Store(_shared_db), adapter, platform=platform).get_completed_sales(comp_query, pages)
except Exception:
log.warning("async comps: unhandled exception for %r", comp_query, exc_info=True)
@ -990,8 +1302,8 @@ def search_async(
comps_future.result()
log.info(
"async_search auth=%s tier=%s adapter=%s pages=%d listings=%d q=%r",
_auth_label(_user_id), _tier, adapter_used, pages, len(listings), q_norm,
"async_search platform=%s auth=%s tier=%s adapter=%s pages=%d listings=%d q=%r",
platform, _auth_label(_user_id), _tier, adapter_used, pages, len(listings), q_norm,
)
shared_store = Store(_shared_db)
@ -1000,14 +1312,17 @@ def search_async(
user_store.save_listings(listings)
seller_ids = list({l.seller_platform_id for l in listings if l.seller_platform_id})
if platform == "ebay":
n_cat = shared_store.refresh_seller_categories("ebay", seller_ids, listing_store=user_store)
if n_cat:
log.info("async_search: category history derived for %d sellers", n_cat)
staged = user_store.get_listings_staged("ebay", [l.platform_listing_id for l in listings])
staged = user_store.get_listings_staged(platform, [l.platform_listing_id for l in listings])
listings = [staged.get(l.platform_listing_id, l) for l in listings]
_main_adapter = _make_adapter(shared_store, adapter)
_main_adapter = _make_adapter(shared_store, adapter, platform=platform)
sellers_needing_age: list[str] = []
if platform == "ebay":
sellers_needing_age = [
l.seller_platform_id for l in listings
if l.seller_platform_id
@@ -1017,7 +1332,7 @@ def search_async(
seen_set: set[str] = set()
sellers_needing_age = [s for s in sellers_needing_age if not (s in seen_set or seen_set.add(s))] # type: ignore[func-returns-value]
# Use a temporary CloudUser-like object for Trading API enrichment
# Use a temporary CloudUser-like object for Trading API enrichment (eBay only)
from api.cloud_session import CloudUser as _CloudUser
_session_stub = _CloudUser(
user_id=_user_id,
@@ -1025,6 +1340,8 @@
shared_db=_shared_db,
user_db=_user_db,
)
trading_api_enriched: set[str] = set()
if platform == "ebay":
trading_api_enriched = _try_trading_api_enrichment(
_main_adapter, sellers_needing_age, _user_db
)
@@ -1038,10 +1355,16 @@
if features_obj.photo_analysis:
_enqueue_vision_tasks(listings, trust_scores_list, _session_stub)
query_hash = _hashlib.md5(comp_query.encode()).hexdigest()
comp = shared_store.get_market_comp("ebay", query_hash)
query_hash = _hashlib_local.md5(comp_query.encode()).hexdigest()
comp = shared_store.get_market_comp(platform, query_hash)
market_price = comp.median_price if comp else None
# Store raw listings + market_price in cache (trust scores excluded).
_search_result_cache[async_cache_key] = (
{"listings": [dataclasses.asdict(l) for l in listings], "market_price": market_price},
_time.time() + _SEARCH_CACHE_TTL,
)
trust_map = {
listing.platform_listing_id: dataclasses.asdict(ts)
for listing, ts in zip(listings, trust_scores_list)
@@ -1049,11 +1372,11 @@
}
seller_map = {
listing.seller_platform_id: dataclasses.asdict(
shared_store.get_seller("ebay", listing.seller_platform_id)
shared_store.get_seller(platform, listing.seller_platform_id)
)
for listing in listings
if listing.seller_platform_id
and shared_store.get_seller("ebay", listing.seller_platform_id)
and shared_store.get_seller(platform, listing.seller_platform_id)
}
_is_unauthed = _user_id == "anonymous" or _user_id.startswith("guest:")
@@ -1084,12 +1407,17 @@
"session_id": session_id,
})
# Kick off background enrichment — it pushes "update" events and the sentinel.
# BTF background enrichment is eBay-specific.
if platform == "ebay":
_trigger_scraper_enrichment(
listings, shared_store, _shared_db,
user_db=_user_db, query=comp_query, session_id=session_id,
skip_seller_ids=trading_api_enriched,
)
else:
# For non-eBay platforms, push the sentinel directly since there's no
# background enrichment pass.
_push(None)
except _sqlite3.OperationalError as e:
log.warning("async_search DB contention: %s", e)

@@ -0,0 +1,20 @@
-- Migration 013: eBay user OAuth tokens
--
-- Stores per-user eBay Authorization Code tokens so the app can call
-- Trading API GetUser for instant account_age_days + category feedback
-- without Playwright scraping.
--
-- Stored in the per-user DB (user.db), never the shared DB.
-- access_token is short-lived (2h); refresh_token is valid for 18 months.
-- The API layer refreshes access_token automatically before expiry.
CREATE TABLE IF NOT EXISTS ebay_user_tokens (
id INTEGER PRIMARY KEY,
-- Single row per user DB — upsert on reconnect
access_token TEXT NOT NULL,
refresh_token TEXT NOT NULL,
expires_at REAL NOT NULL, -- epoch seconds; access token expiry
scopes TEXT NOT NULL DEFAULT '',
connected_at TEXT NOT NULL DEFAULT (datetime('now')),
last_refreshed TEXT
);
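The comment above says the API layer refreshes the access token automatically before expiry. A minimal sketch of that decision, assuming the `expires_at` epoch-seconds column defined in this migration (the helper name and margin are illustrative, not from this diff):

```python
import time
from typing import Optional

# Hypothetical helper: refresh the 2-hour access token a little before
# expires_at instead of waiting for an auth failure.
REFRESH_MARGIN_SECS = 300  # refresh five minutes early (illustrative value)

def needs_refresh(expires_at: float, now: Optional[float] = None) -> bool:
    """True when the stored access token should be refreshed proactively."""
    now = time.time() if now is None else now
    return now >= expires_at - REFRESH_MARGIN_SECS
```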

@@ -0,0 +1,24 @@
-- Migration 014: background monitor settings on saved_searches + watch_alerts table
ALTER TABLE saved_searches ADD COLUMN monitor_enabled INTEGER NOT NULL DEFAULT 0;
ALTER TABLE saved_searches ADD COLUMN poll_interval_min INTEGER NOT NULL DEFAULT 60;
ALTER TABLE saved_searches ADD COLUMN min_trust_score INTEGER NOT NULL DEFAULT 60;
ALTER TABLE saved_searches ADD COLUMN last_checked_at TEXT;
CREATE TABLE IF NOT EXISTS watch_alerts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
saved_search_id INTEGER NOT NULL REFERENCES saved_searches(id) ON DELETE CASCADE,
platform_listing_id TEXT NOT NULL,
title TEXT NOT NULL,
price REAL NOT NULL,
currency TEXT NOT NULL DEFAULT 'USD',
trust_score INTEGER NOT NULL,
url TEXT,
first_alerted_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
dismissed_at TEXT,
UNIQUE(saved_search_id, platform_listing_id)
);
CREATE INDEX IF NOT EXISTS idx_watch_alerts_undismissed
ON watch_alerts(saved_search_id)
WHERE dismissed_at IS NULL;

@@ -0,0 +1,20 @@
-- Migration 015: cross-user monitor registry for the background polling loop
--
-- In cloud mode this table lives in shared.db — the polling loop queries it
-- to find all due monitors without scanning per-user DB files.
-- In local mode it lives in the single local DB (same result, one user).
--
-- user_db_path references the per-user snipe user.db so the poller knows
-- which DB to open for the full SavedSearch config and to write alerts.
CREATE TABLE IF NOT EXISTS active_monitors (
id INTEGER PRIMARY KEY AUTOINCREMENT,
user_db_path TEXT NOT NULL,
saved_search_id INTEGER NOT NULL,
poll_interval_min INTEGER NOT NULL DEFAULT 60,
last_checked_at TEXT,
UNIQUE(user_db_path, saved_search_id)
);
CREATE INDEX IF NOT EXISTS idx_active_monitors_due
ON active_monitors(last_checked_at);

@@ -81,6 +81,26 @@ class SavedSearch:
id: Optional[int] = None
created_at: Optional[str] = None
last_run_at: Optional[str] = None
# Monitor settings (migration 014)
monitor_enabled: bool = False
poll_interval_min: int = 60
min_trust_score: int = 60
last_checked_at: Optional[str] = None
@dataclass
class WatchAlert:
"""A new listing surfaced by the background monitor for a saved search."""
saved_search_id: int
platform_listing_id: str
title: str
price: float
trust_score: int
currency: str = "USD"
url: Optional[str] = None
id: Optional[int] = None
first_alerted_at: Optional[str] = None
dismissed_at: Optional[str] = None
@dataclass

@@ -8,7 +8,7 @@ from typing import Optional
from circuitforge_core.db import get_connection, run_migrations
from .models import Listing, MarketComp, SavedSearch, ScammerEntry, Seller, TrustScore
from .models import Listing, MarketComp, SavedSearch, ScammerEntry, Seller, TrustScore, WatchAlert
MIGRATIONS_DIR = Path(__file__).parent / "migrations"
@@ -310,15 +310,66 @@ class Store:
def list_saved_searches(self) -> list[SavedSearch]:
rows = self._conn.execute(
"SELECT name, query, platform, filters_json, id, created_at, last_run_at "
"SELECT name, query, platform, filters_json, id, created_at, last_run_at, "
"monitor_enabled, poll_interval_min, min_trust_score, last_checked_at "
"FROM saved_searches ORDER BY created_at DESC"
).fetchall()
return [
SavedSearch(name=r[0], query=r[1], platform=r[2], filters_json=r[3],
id=r[4], created_at=r[5], last_run_at=r[6])
SavedSearch(
name=r[0], query=r[1], platform=r[2], filters_json=r[3],
id=r[4], created_at=r[5], last_run_at=r[6],
monitor_enabled=bool(r[7]), poll_interval_min=r[8],
min_trust_score=r[9], last_checked_at=r[10],
)
for r in rows
]
def update_monitor_settings(
self,
saved_id: int,
*,
monitor_enabled: bool,
poll_interval_min: int,
min_trust_score: int,
) -> None:
self._conn.execute(
"UPDATE saved_searches "
"SET monitor_enabled=?, poll_interval_min=?, min_trust_score=? "
"WHERE id=?",
(int(monitor_enabled), poll_interval_min, min_trust_score, saved_id),
)
self._conn.commit()
def list_monitored_searches(self) -> list[SavedSearch]:
"""Return all saved searches with monitoring enabled (used by background poller)."""
rows = self._conn.execute(
"SELECT name, query, platform, filters_json, id, created_at, last_run_at, "
"monitor_enabled, poll_interval_min, min_trust_score, last_checked_at "
"FROM saved_searches WHERE monitor_enabled=1"
).fetchall()
return [
SavedSearch(
name=r[0], query=r[1], platform=r[2], filters_json=r[3],
id=r[4], created_at=r[5], last_run_at=r[6],
monitor_enabled=True, poll_interval_min=r[8],
min_trust_score=r[9], last_checked_at=r[10],
)
for r in rows
]
def mark_search_checked(self, saved_id: int) -> None:
self._conn.execute(
"UPDATE saved_searches SET last_checked_at=? WHERE id=?",
(datetime.now(timezone.utc).isoformat(), saved_id),
)
self._conn.commit()
def count_active_monitors(self) -> int:
row = self._conn.execute(
"SELECT COUNT(*) FROM saved_searches WHERE monitor_enabled=1"
).fetchone()
return row[0] if row else 0
def delete_saved_search(self, saved_id: int) -> None:
self._conn.execute("DELETE FROM saved_searches WHERE id=?", (saved_id,))
self._conn.commit()
@@ -330,6 +381,112 @@ class Store:
)
self._conn.commit()
# --- WatchAlerts ---
def upsert_alert(self, alert: WatchAlert) -> tuple[int, bool]:
"""Insert alert if not already present. Returns (id, is_new)."""
existing = self._conn.execute(
"SELECT id FROM watch_alerts WHERE saved_search_id=? AND platform_listing_id=?",
(alert.saved_search_id, alert.platform_listing_id),
).fetchone()
if existing:
return existing[0], False
cur = self._conn.execute(
"INSERT INTO watch_alerts "
"(saved_search_id, platform_listing_id, title, price, currency, trust_score, url) "
"VALUES (?,?,?,?,?,?,?)",
(alert.saved_search_id, alert.platform_listing_id, alert.title,
alert.price, alert.currency, alert.trust_score, alert.url),
)
self._conn.commit()
return cur.lastrowid, True
def list_alerts(self, *, include_dismissed: bool = False) -> list[WatchAlert]:
where = "" if include_dismissed else "WHERE dismissed_at IS NULL"
rows = self._conn.execute(
f"SELECT id, saved_search_id, platform_listing_id, title, price, currency, "
f"trust_score, url, first_alerted_at, dismissed_at "
f"FROM watch_alerts {where} ORDER BY first_alerted_at DESC"
).fetchall()
return [
WatchAlert(
id=r[0], saved_search_id=r[1], platform_listing_id=r[2],
title=r[3], price=r[4], currency=r[5], trust_score=r[6],
url=r[7], first_alerted_at=r[8], dismissed_at=r[9],
)
for r in rows
]
def count_undismissed_alerts(self) -> int:
row = self._conn.execute(
"SELECT COUNT(*) FROM watch_alerts WHERE dismissed_at IS NULL"
).fetchone()
return row[0] if row else 0
def dismiss_alert(self, alert_id: int) -> None:
self._conn.execute(
"UPDATE watch_alerts SET dismissed_at=? WHERE id=?",
(datetime.now(timezone.utc).isoformat(), alert_id),
)
self._conn.commit()
def dismiss_all_alerts(self) -> int:
"""Dismiss all undismissed alerts. Returns count dismissed."""
cur = self._conn.execute(
"UPDATE watch_alerts SET dismissed_at=? WHERE dismissed_at IS NULL",
(datetime.now(timezone.utc).isoformat(),),
)
self._conn.commit()
return cur.rowcount
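The insert-once behaviour of `upsert_alert()` above can be reproduced in an in-memory database: the `UNIQUE(saved_search_id, platform_listing_id)` constraint from migration 014 makes the second call for the same listing return the existing row with `is_new=False`. A simplified sketch (only the two key columns, not the full schema):

```python
import sqlite3

# In-memory check of the (id, is_new) dedupe contract.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE watch_alerts ("
    " id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " saved_search_id INTEGER NOT NULL,"
    " platform_listing_id TEXT NOT NULL,"
    " UNIQUE(saved_search_id, platform_listing_id))"
)

def upsert(saved_id: int, listing_id: str):
    """Return (row id, is_new) — insert only when the pair is unseen."""
    row = conn.execute(
        "SELECT id FROM watch_alerts WHERE saved_search_id=? AND platform_listing_id=?",
        (saved_id, listing_id),
    ).fetchone()
    if row:
        return row[0], False
    cur = conn.execute(
        "INSERT INTO watch_alerts (saved_search_id, platform_listing_id) VALUES (?,?)",
        (saved_id, listing_id),
    )
    return cur.lastrowid, True

first = upsert(1, "m123")
second = upsert(1, "m123")
```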
# --- ActiveMonitors (sched_db / shared_db) ---
def upsert_active_monitor(
self,
user_db_path: str,
saved_search_id: int,
poll_interval_min: int,
) -> None:
"""Register or update a monitor in the cross-user registry (sched_db)."""
self._conn.execute(
"INSERT INTO active_monitors (user_db_path, saved_search_id, poll_interval_min) "
"VALUES (?,?,?) "
"ON CONFLICT(user_db_path, saved_search_id) DO UPDATE SET "
" poll_interval_min=excluded.poll_interval_min",
(user_db_path, saved_search_id, poll_interval_min),
)
self._conn.commit()
def remove_active_monitor(self, user_db_path: str, saved_search_id: int) -> None:
self._conn.execute(
"DELETE FROM active_monitors WHERE user_db_path=? AND saved_search_id=?",
(user_db_path, saved_search_id),
)
self._conn.commit()
def list_due_active_monitors(self) -> list[tuple[str, int, int]]:
"""Return (user_db_path, saved_search_id, poll_interval_min) for monitors that are due.
Due = never checked OR last_checked_at is old enough given poll_interval_min.
Uses SQLite's strftime('%s') for epoch arithmetic without Python datetime overhead.
"""
rows = self._conn.execute(
"SELECT user_db_path, saved_search_id, poll_interval_min "
"FROM active_monitors "
"WHERE last_checked_at IS NULL "
" OR (strftime('%s','now') - strftime('%s', last_checked_at)) "
" >= poll_interval_min * 60"
).fetchall()
return [(r[0], r[1], r[2]) for r in rows]
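The strftime('%s') epoch arithmetic in the due-query above can be verified standalone against an in-memory table, assuming `last_checked_at` holds ISO-8601 UTC timestamps (as `mark_active_monitor_checked` writes):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE active_monitors ("
    " user_db_path TEXT, saved_search_id INTEGER,"
    " poll_interval_min INTEGER, last_checked_at TEXT)"
)
now = datetime.now(timezone.utc)
conn.executemany(
    "INSERT INTO active_monitors VALUES (?,?,?,?)",
    [
        ("a.db", 1, 60, None),  # never checked -> due
        ("b.db", 2, 60, (now - timedelta(minutes=90)).isoformat(timespec="seconds")),  # overdue -> due
        ("c.db", 3, 60, (now - timedelta(minutes=10)).isoformat(timespec="seconds")),  # fresh -> not due
    ],
)
# Same predicate as list_due_active_monitors().
due = conn.execute(
    "SELECT saved_search_id FROM active_monitors "
    "WHERE last_checked_at IS NULL "
    "   OR (strftime('%s','now') - strftime('%s', last_checked_at)) "
    "      >= poll_interval_min * 60"
).fetchall()
```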
def mark_active_monitor_checked(self, user_db_path: str, saved_search_id: int) -> None:
self._conn.execute(
"UPDATE active_monitors SET last_checked_at=? "
"WHERE user_db_path=? AND saved_search_id=?",
(datetime.now(timezone.utc).isoformat(), user_db_path, saved_search_id),
)
self._conn.commit()
# --- ScammerBlocklist ---
def add_to_blocklist(self, entry: ScammerEntry) -> ScammerEntry:

@@ -7,6 +7,10 @@ from typing import Optional
from app.db.models import Listing, Seller
# Single source of truth for platform validation.
# Phase 2 will extend this set as new adapters are implemented.
SUPPORTED_PLATFORMS: frozenset[str] = frozenset({"ebay", "mercari"})
@dataclass
class SearchFilters:
@@ -18,6 +22,8 @@ class SearchFilters:
must_include: list[str] = field(default_factory=list) # client-side title filter
must_exclude: list[str] = field(default_factory=list) # forwarded to eBay -term AND client-side
category_id: Optional[str] = None # eBay category ID (e.g. "27386" = GPUs)
must_include_mode: str = "all" # "all" | "any" | "groups"
adapter: str = "auto" # "auto" | "api" | "scraper"
class PlatformAdapter(ABC):

@@ -1,8 +1,9 @@
"""eBay Browse API adapter."""
"""eBay Browse + Trading API adapter."""
from __future__ import annotations
import hashlib
import logging
import xml.etree.ElementTree as ET
from dataclasses import replace
from datetime import datetime, timedelta, timezone
from typing import Optional
@@ -210,6 +211,70 @@ class EbayAdapter(PlatformAdapter):
except Exception as e:
log.debug("Shopping API enrich failed for %s: %s", username, e)
# ── Trading API GetUser (requires user OAuth token) ───────────────────────
_TRADING_API_URL = "https://api.ebay.com/ws/api.dll"
_TRADING_API_COMPATIBILITY = "1283"
def enrich_seller_trading_api(self, username: str, user_access_token: str) -> bool:
"""Enrich a seller's account_age_days using Trading API GetUser.
Uses the connected user's OAuth access token (Authorization Code flow),
which bypasses Shopping API rate limits and works even when the Shopping
API GetUserProfile call is throttled.
Unlike BTF scraping, this is a clean API call (~200ms, no Playwright).
Called from the search endpoint when the requesting user has connected
their eBay account.
Returns True if enrichment succeeded, False on any failure.
"""
xml_body = (
'<?xml version="1.0" encoding="utf-8"?>'
'<GetUserRequest xmlns="urn:ebay:apis:eBLBaseComponents">'
f'<UserID>{username}</UserID>'
'</GetUserRequest>'
)
try:
resp = requests.post(
self._TRADING_API_URL,
headers={
"X-EBAY-API-CALL-NAME": "GetUser",
"X-EBAY-API-SITEID": "0",
"X-EBAY-API-COMPATIBILITY-LEVEL": self._TRADING_API_COMPATIBILITY,
"X-EBAY-API-IAF-TOKEN": f"Bearer {user_access_token}",
"Content-Type": "text/xml",
},
data=xml_body.encode("utf-8"),
timeout=10,
)
resp.raise_for_status()
root = ET.fromstring(resp.text)
ns = {"e": "urn:ebay:apis:eBLBaseComponents"}
ack = root.findtext("e:Ack", namespaces=ns)
if ack not in ("Success", "Warning"):
errors = [e.findtext("e:LongMessage", namespaces=ns, default="")
for e in root.findall("e:Errors", namespaces=ns)]
log.debug("Trading API GetUser failed for %s: %s", username, errors)
return False
reg_date = root.findtext("e:User/e:RegistrationDate", namespaces=ns)
if not reg_date:
return False
dt = datetime.fromisoformat(reg_date.replace("Z", "+00:00"))
age_days = (datetime.now(timezone.utc) - dt).days
seller = self._store.get_seller("ebay", username)
if seller:
self._store.save_seller(replace(seller, account_age_days=age_days))
log.debug("Trading API GetUser: %s registered %d days ago", username, age_days)
return True
except Exception as exc:
log.debug("Trading API GetUser failed for %s: %s", username, exc)
return False
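The namespace-aware parse used by `enrich_seller_trading_api` above can be exercised on its own with a canned response (sample XML, not real API output), showing the `Ack` check, the `RegistrationDate` lookup, and the age computation:

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

SAMPLE = (
    '<GetUserResponse xmlns="urn:ebay:apis:eBLBaseComponents">'
    '<Ack>Success</Ack>'
    '<User><RegistrationDate>2020-01-01T00:00:00.000Z</RegistrationDate></User>'
    '</GetUserResponse>'
)
# All elements live in the eBLBaseComponents namespace, so every findtext
# path needs the prefix mapping.
ns = {"e": "urn:ebay:apis:eBLBaseComponents"}
root = ET.fromstring(SAMPLE)
ack = root.findtext("e:Ack", namespaces=ns)
reg = root.findtext("e:User/e:RegistrationDate", namespaces=ns)
dt = datetime.fromisoformat(reg.replace("Z", "+00:00"))
age_days = (datetime.now(timezone.utc) - dt).days
```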
def get_seller(self, seller_platform_id: str) -> Optional[Seller]:
cached = self._store.get_seller("ebay", seller_platform_id)
if cached:

@@ -0,0 +1,400 @@
"""Thread-local Playwright browser manager for the eBay scraper.
Each uvicorn worker thread that calls fetch_html() gets its own Playwright
instance, browser, and context created lazily on first use. This avoids
the "cannot switch to a different thread" error that arises when Playwright
sync API instances are shared across threads (they bind their greenlet event
loop to the creating thread).
Key design:
- Thread-local: _thread_local.slot holds the _PooledBrowser for the current
thread. No slot is ever handed to another thread.
- Lazy creation: slots are created on first fetch_html() call per thread, not
at startup. start() is a lightweight lifecycle marker only.
- Registry: _slot_registry (keyed by thread-id) lets stop() close every active
slot across all threads without walking thread-local storage.
- Replenishment: after each use the dirty context is closed and a fresh one
opened on the same browser. Browser launch overhead is paid at most once
per worker thread lifetime.
- Graceful degradation: if Playwright / Xvfb is unavailable, fetch_html falls
back to _fetch_fresh (identical behavior to before this module existed).
Pool size is read from BROWSER_POOL_SIZE env var (default: 2) but is now a
soft limit used only for documentation; actual concurrency is bounded by
uvicorn's thread count.
"""
from __future__ import annotations
import itertools
import logging
import os
import subprocess
import threading
import time
from dataclasses import dataclass, field
from typing import Optional
log = logging.getLogger(__name__)
_pool_display_counter = itertools.cycle(range(200, 400))
_CHROMIUM_ARGS = ["--no-sandbox", "--disable-dev-shm-usage"]
_XVFB_ARGS = ["-screen", "0", "1280x800x24", "-ac"]
_USER_AGENT = (
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
"(KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
)
_VIEWPORT = {"width": 1280, "height": 800}
# Thread-local storage: each thread gets its own _PooledBrowser slot.
_thread_local = threading.local()
@dataclass
class _PooledBrowser:
"""One browser slot, bound to a single thread."""
xvfb: subprocess.Popen
pw: object # playwright instance (sync_playwright().__enter__())
browser: object # playwright Browser
ctx: object # playwright BrowserContext (fresh per use)
display_num: int
last_used_ts: float = field(default_factory=time.time)
def _launch_slot() -> _PooledBrowser:
"""Launch a new Xvfb display + headed Chromium browser + fresh context.
Must be called from the thread that will use the slot.
"""
from playwright.sync_api import sync_playwright
from playwright_stealth import Stealth # noqa: F401
display_num = next(_pool_display_counter)
display = f":{display_num}"
env = os.environ.copy()
env["DISPLAY"] = display
xvfb = subprocess.Popen(
["Xvfb", display] + _XVFB_ARGS,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
time.sleep(0.3)
pw = sync_playwright().start()
try:
browser = pw.chromium.launch(
headless=False,
env=env,
args=_CHROMIUM_ARGS,
)
ctx = browser.new_context(
user_agent=_USER_AGENT,
viewport=_VIEWPORT,
)
except Exception:
pw.stop()
xvfb.terminate()
xvfb.wait()
raise
return _PooledBrowser(
xvfb=xvfb,
pw=pw,
browser=browser,
ctx=ctx,
display_num=display_num,
last_used_ts=time.time(),
)
def _close_slot(slot: _PooledBrowser) -> None:
"""Cleanly close a slot: context → browser → Playwright → Xvfb."""
try:
slot.ctx.close()
except Exception:
pass
try:
slot.browser.close()
except Exception:
pass
try:
slot.pw.stop()
except Exception:
pass
try:
slot.xvfb.terminate()
slot.xvfb.wait(timeout=5)
except Exception:
pass
def _replenish_slot(slot: _PooledBrowser) -> _PooledBrowser:
"""Close the used context and open a fresh one on the same browser."""
try:
slot.ctx.close()
except Exception:
pass
new_ctx = slot.browser.new_context(
user_agent=_USER_AGENT,
viewport=_VIEWPORT,
)
return _PooledBrowser(
xvfb=slot.xvfb,
pw=slot.pw,
browser=slot.browser,
ctx=new_ctx,
display_num=slot.display_num,
last_used_ts=time.time(),
)
class BrowserPool:
"""Thread-local Playwright browser manager.
Each thread that calls fetch_html() owns its own browser instance.
No slots are shared between threads.
"""
def __init__(self, size: int = 2) -> None:
self._size = size
self._lock = threading.Lock()
self._started = False
self._stopped = False
self._playwright_available: Optional[bool] = None
# Registry of all active slots keyed by thread id — used only by stop().
self._slot_registry: dict[int, _PooledBrowser] = {}
# ------------------------------------------------------------------
# Lifecycle
# ------------------------------------------------------------------
def start(self) -> None:
"""Mark the pool as started. Slots are created lazily per thread."""
with self._lock:
if self._started:
return
self._started = True
if not self._check_playwright():
log.warning(
"BrowserPool: Playwright / Xvfb not available — "
"pool disabled, falling back to per-call fresh browser."
)
return
log.info("BrowserPool: started (thread-local mode, size hint=%d)", self._size)
def stop(self) -> None:
"""Close all active slots across all threads."""
with self._lock:
self._stopped = True
registry_snapshot = dict(self._slot_registry)
closed = 0
for slot in registry_snapshot.values():
_close_slot(slot)
closed += 1
self._slot_registry.clear()
log.info("BrowserPool: stopped, closed %d slot(s)", closed)
# ------------------------------------------------------------------
# Core fetch
# ------------------------------------------------------------------
def fetch_html(
self,
url: str,
delay: float = 1.0,
wait_for_selector: Optional[str] = None,
wait_for_timeout_ms: int = 2000,
) -> str:
"""Navigate to *url* and return the rendered HTML.
Uses the calling thread's browser slot (creates one if needed).
Falls back to a fresh browser if Playwright is unavailable or the
slot fails.
"""
time.sleep(delay)
slot = self._get_or_create_thread_slot()
if slot is not None:
try:
html = self._fetch_with_slot(
slot, url,
wait_for_selector=wait_for_selector,
wait_for_timeout_ms=wait_for_timeout_ms,
)
try:
fresh_slot = _replenish_slot(slot)
self._register_slot(fresh_slot)
except Exception as exc:
log.warning("BrowserPool: replenish failed, slot discarded: %s", exc)
_close_slot(slot)
self._unregister_slot()
return html
except Exception as exc:
log.warning("BrowserPool: pooled fetch failed (%s) — closing slot", exc)
_close_slot(slot)
self._unregister_slot()
return self._fetch_fresh(
url,
wait_for_selector=wait_for_selector,
wait_for_timeout_ms=wait_for_timeout_ms,
)
# ------------------------------------------------------------------
# Thread-local slot management
# ------------------------------------------------------------------
def _get_or_create_thread_slot(self) -> Optional[_PooledBrowser]:
"""Return the calling thread's slot, creating it if absent."""
if not self._check_playwright():
return None
slot: Optional[_PooledBrowser] = getattr(_thread_local, "slot", None)
if slot is not None:
return slot
try:
slot = _launch_slot()
self._register_slot(slot)
log.debug("BrowserPool: launched slot :%d for thread %d",
slot.display_num, threading.get_ident())
return slot
except Exception as exc:
log.warning("BrowserPool: slot launch failed: %s", exc)
return None
def _register_slot(self, slot: _PooledBrowser) -> None:
"""Bind slot to the calling thread (both thread-local and registry)."""
_thread_local.slot = slot
with self._lock:
self._slot_registry[threading.get_ident()] = slot
def _unregister_slot(self) -> None:
"""Remove the calling thread's slot from thread-local and registry."""
_thread_local.slot = None
with self._lock:
self._slot_registry.pop(threading.get_ident(), None)
# ------------------------------------------------------------------
# Internal helpers
# ------------------------------------------------------------------
def _check_playwright(self) -> bool:
if self._playwright_available is not None:
return self._playwright_available
try:
import playwright # noqa: F401
from playwright_stealth import Stealth # noqa: F401
self._playwright_available = True
except ImportError:
self._playwright_available = False
return self._playwright_available
def _fetch_with_slot(
self,
slot: _PooledBrowser,
url: str,
wait_for_selector: Optional[str] = None,
wait_for_timeout_ms: int = 2000,
) -> str:
from playwright_stealth import Stealth
page = slot.ctx.new_page()
try:
Stealth().apply_stealth_sync(page)
page.goto(url, wait_until="domcontentloaded", timeout=30_000)
if wait_for_selector:
try:
page.wait_for_selector(wait_for_selector, timeout=15_000)
except Exception:
pass
else:
page.wait_for_timeout(wait_for_timeout_ms)
return page.content()
finally:
try:
page.close()
except Exception:
pass
def _fetch_fresh(
self,
url: str,
wait_for_selector: Optional[str] = None,
wait_for_timeout_ms: int = 2000,
) -> str:
import subprocess as _subprocess
try:
from playwright.sync_api import sync_playwright
from playwright_stealth import Stealth
except ImportError as exc:
raise RuntimeError(
"Playwright not installed — cannot fetch pages. "
"Install playwright and playwright-stealth in the Docker image."
) from exc
display_num = next(_pool_display_counter)
display = f":{display_num}"
env = os.environ.copy()
env["DISPLAY"] = display
xvfb = _subprocess.Popen(
["Xvfb", display] + _XVFB_ARGS,
stdout=_subprocess.DEVNULL,
stderr=_subprocess.DEVNULL,
)
time.sleep(0.3)
try:
with sync_playwright() as pw:
browser = pw.chromium.launch(
headless=False,
env=env,
args=_CHROMIUM_ARGS,
)
ctx = browser.new_context(
user_agent=_USER_AGENT,
viewport=_VIEWPORT,
)
page = ctx.new_page()
Stealth().apply_stealth_sync(page)
page.goto(url, wait_until="domcontentloaded", timeout=30_000)
if wait_for_selector:
try:
page.wait_for_selector(wait_for_selector, timeout=15_000)
except Exception:
pass
else:
page.wait_for_timeout(wait_for_timeout_ms)
html = page.content()
browser.close()
finally:
xvfb.terminate()
xvfb.wait()
return html
# ---------------------------------------------------------------------------
# Module-level singleton
# ---------------------------------------------------------------------------
_pool: Optional[BrowserPool] = None
_pool_lock = threading.Lock()
def get_pool() -> BrowserPool:
"""Return the module-level BrowserPool singleton (creates it if needed)."""
global _pool
if _pool is None:
with _pool_lock:
if _pool is None:
size = int(os.environ.get("BROWSER_POOL_SIZE", "2"))
_pool = BrowserPool(size)
return _pool

@@ -291,7 +291,7 @@ class ScrapedEbayAdapter(PlatformAdapter):
self._delay = delay
def _fetch_url(self, url: str) -> str:
"""Core Playwright fetch — stealthed headed Chromium via Xvfb.
"""Core Playwright fetch — stealthed headed Chromium via pre-warmed browser pool.
Shared by both search (_get) and BTF item-page enrichment (_fetch_item_html).
Results cached for _HTML_CACHE_TTL seconds.
@@ -300,44 +300,8 @@
if cached and time.time() < cached[1]:
return cached[0]
time.sleep(self._delay)
import os
import subprocess
display_num = next(_display_counter)
display = f":{display_num}"
xvfb = subprocess.Popen(
["Xvfb", display, "-screen", "0", "1280x800x24"],
stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
)
env = os.environ.copy()
env["DISPLAY"] = display
try:
from playwright.sync_api import (
sync_playwright, # noqa: PLC0415 — lazy: only needed in Docker
)
from playwright_stealth import Stealth # noqa: PLC0415
with sync_playwright() as pw:
browser = pw.chromium.launch(
headless=False,
env=env,
args=["--no-sandbox", "--disable-dev-shm-usage"],
)
ctx = browser.new_context(
user_agent=_HEADERS["User-Agent"],
viewport={"width": 1280, "height": 800},
)
page = ctx.new_page()
Stealth().apply_stealth_sync(page)
page.goto(url, wait_until="domcontentloaded", timeout=30_000)
page.wait_for_timeout(2000) # let any JS challenges resolve
html = page.content()
browser.close()
finally:
xvfb.terminate()
xvfb.wait()
from app.platforms.ebay.browser_pool import get_pool # noqa: PLC0415 — lazy import
html = get_pool().fetch_html(url, delay=self._delay)
_html_cache[url] = (html, time.time() + _HTML_CACHE_TTL)
return html

@@ -0,0 +1,4 @@
"""Mercari platform adapter."""
from app.platforms.mercari.adapter import MercariAdapter
__all__ = ["MercariAdapter"]

@@ -0,0 +1,173 @@
"""MercariAdapter — scraper-based Mercari platform adapter.
Trust signal coverage vs eBay:
feedback_count (NumSales from listing page)
feedback_ratio (ReviewStarsWrapper data-stars / 5)
account_age_days (requires seller profile page; future work)
category_history (not exposed in HTML; future work)
price_vs_market (computed by trust scorer from comps, same as eBay)
Because account_age and category_history are always None, TrustScore.score_is_partial
will be True for all Mercari results. The aggregator handles this correctly
by scoring only from available signals.
seller_platform_id on Listing objects holds the product_id (e.g. "m86032668393")
rather than the seller username, because search results don't expose seller identity.
get_seller() resolves the seller behind a product_id by fetching the listing page.
The DB lookup key is (platform="mercari", platform_seller_id=product_id).
"""
from __future__ import annotations
import json
import logging
import time
from typing import Optional
from app.db.models import Listing, MarketComp, Seller
from app.db.store import Store
from app.platforms import PlatformAdapter, SearchFilters
from app.platforms.mercari.scraper import (
build_search_url,
parse_listing_html,
parse_search_html,
)
log = logging.getLogger(__name__)
_SELLER_CACHE_TTL_HOURS = 6
_BETWEEN_LISTING_FETCH_SECS = 1.5
class MercariAdapter(PlatformAdapter):
def __init__(self, store: Store) -> None:
self._store = store
def search(self, query: str, filters: SearchFilters) -> list[Listing]:
from app.platforms.ebay.browser_pool import get_pool
url = build_search_url(query, filters.max_price, filters.min_price)
log.info("mercari: fetching search URL: %s", url)
html = get_pool().fetch_html(
url,
delay=1.0,
wait_for_timeout_ms=8000,
)
raw_listings = parse_search_html(html)
listings: list[Listing] = []
seen: set[str] = set()
for raw in raw_listings:
pid = raw["product_id"]
if pid in seen:
continue
seen.add(pid)
listings.append(_normalise_listing(raw, query))
log.info("mercari: parsed %d listings for %r", len(listings), query)
# Client-side keyword filter (mirrors eBay scraper behaviour).
if filters.must_include:
listings = _apply_keyword_filter(listings, filters.must_include, filters.must_include_mode)
if filters.must_exclude:
listings = _apply_exclude_filter(listings, filters.must_exclude)
return listings
def get_seller(self, seller_platform_id: str) -> Optional[Seller]:
"""Fetch seller data from the listing page identified by seller_platform_id.
For Mercari, seller_platform_id is the product_id (e.g. "m86032668393")
because seller usernames aren't available from search results HTML.
"""
cached = self._store.get_seller("mercari", seller_platform_id)
if cached:
return cached
from app.platforms.ebay.browser_pool import get_pool
url = f"https://www.mercari.com/us/item/{seller_platform_id}/"
try:
time.sleep(_BETWEEN_LISTING_FETCH_SECS)
html = get_pool().fetch_html(
url,
delay=0.5,
wait_for_timeout_ms=6000,
)
raw = parse_listing_html(html, seller_platform_id)
seller = _normalise_seller(raw)
self._store.save_seller(seller)
return seller
except Exception as exc:
log.warning("mercari: get_seller failed for %s: %s", seller_platform_id, exc)
return None
def get_completed_sales(self, query: str, pages: int = 1) -> list[Listing]:
"""Mercari sold-listing comps — stubbed for Phase 3.
Mercari exposes sold listings via ?status=ITEM_STATUS_TRADING but the
data is sparse. Phase 3 will implement comp extraction here; for now
the trust scorer falls back to price_vs_market=None (partial score).
"""
return []
# ---------------------------------------------------------------------------
# Normalisation helpers
# ---------------------------------------------------------------------------
def _normalise_listing(raw: dict, query: str) -> Listing:
return Listing(
platform="mercari",
platform_listing_id=raw["product_id"],
title=raw["title"],
price=raw["price"],
currency="USD",
condition="", # not available from search results; get_seller() populates this
seller_platform_id=raw["product_id"], # see module docstring
url=raw["url"],
photo_urls=[raw["photo_url"]] if raw.get("photo_url") else [],
listing_age_days=0,
buying_format="fixed_price",
category_name=None,
)
def _normalise_seller(raw: dict) -> Seller:
stars = raw.get("stars", 0.0)
feedback_ratio = min(stars / 5.0, 1.0) if stars > 0 else 0.0
return Seller(
platform="mercari",
platform_seller_id=raw["product_id"],
username=raw.get("username", ""),
account_age_days=None, # not available without seller profile page
feedback_count=raw.get("num_sales", 0),
feedback_ratio=feedback_ratio,
category_history_json=json.dumps({}),
)
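The stars-to-ratio mapping above is the bridge between Mercari's 5-star reviews and the eBay-style feedback_ratio the trust scorer expects; a standalone restatement for clarity:

```python
def stars_to_ratio(stars: float) -> float:
    # Mirrors _normalise_seller above: a 0-5 star average maps onto the
    # 0.0-1.0 feedback_ratio scale shared with eBay sellers. Zero stars
    # (no reviews yet) maps to 0.0, i.e. treated as "no history".
    return min(stars / 5.0, 1.0) if stars > 0 else 0.0
```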
def _apply_keyword_filter(listings: list[Listing], must_include: list[str], mode: str) -> list[Listing]:
if not must_include:
return listings
def _matches(listing: Listing) -> bool:
title = listing.title.lower()
if mode == "any":
return any(kw.lower() in title for kw in must_include)
# "all" (default) and "groups" both require all terms present
return all(kw.lower() in title for kw in must_include)
return [l for l in listings if _matches(l)]
def _apply_exclude_filter(listings: list[Listing], must_exclude: list[str]) -> list[Listing]:
if not must_exclude:
return listings
def _clean(listing: Listing) -> bool:
title = listing.title.lower()
return not any(term.lower() in title for term in must_exclude)
return [l for l in listings if _clean(l)]


@@ -0,0 +1,165 @@
"""Mercari search + listing page scraper.
Uses the shared eBay browser pool (headed Chromium + Xvfb + playwright-stealth)
which already bypasses Cloudflare Turnstile. Import the pool singleton from
ebay.browser_pool so both platforms share the same warm Chromium instances.
Seller data is NOT available from search results HTML; it only appears on
individual listing pages. The adapter lazily fetches listing pages in get_seller().

"""
from __future__ import annotations
import logging
import re
from typing import Optional
from urllib.parse import urlencode
from bs4 import BeautifulSoup, NavigableString
log = logging.getLogger(__name__)
_BASE = "https://www.mercari.com"
_SEARCH_PATH = "/search/"
_ITEM_PATH = "/us/item/"
_PRICE_RE = re.compile(r"[\d,]+\.?\d*")
_POSTED_RE = re.compile(r"(\d{2})/(\d{2})/(\d{2,4})") # MM/DD/YY or MM/DD/YYYY
def build_search_url(query: str, max_price: Optional[float] = None, min_price: Optional[float] = None) -> str:
# No explicit sortBy — Mercari's default (relevance) is the most useful order.
# "sortBy=SORT_SCORE" was a deprecated value that returns an empty results page.
params: dict = {"keyword": query}
# Mercari accepts priceMin/priceMax as whole dollar strings (not cents)
if min_price is not None and min_price > 0:
params["priceMin"] = str(int(min_price))
if max_price is not None and max_price > 0:
params["priceMax"] = str(int(max_price))
return f"{_BASE}{_SEARCH_PATH}?{urlencode(params)}"
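The builder above can be exercised directly; a condensed, self-contained copy showing the URL shape it produces (`urlencode` plus-escapes the keyword and appends whole-dollar bounds):

```python
from urllib.parse import urlencode

def build_search_url(query, max_price=None, min_price=None):
    # Condensed copy of build_search_url above, for illustration.
    params = {"keyword": query}
    if min_price is not None and min_price > 0:
        params["priceMin"] = str(int(min_price))
    if max_price is not None and max_price > 0:
        params["priceMax"] = str(int(max_price))
    return f"https://www.mercari.com/search/?{urlencode(params)}"

# build_search_url("rtx 4090", max_price=800)
#   -> https://www.mercari.com/search/?keyword=rtx+4090&priceMax=800
```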
def parse_search_html(html: str) -> list[dict]:
"""Parse Mercari search results HTML into a list of raw listing dicts."""
soup = BeautifulSoup(html, "html.parser")
results: list[dict] = []
for item in soup.find_all(attrs={"data-testid": "ItemContainer"}):
pid = item.get("data-productid", "")
if not pid:
continue
parent = item.parent
href = parent.get("href") if parent and parent.name == "a" else None
url = f"{_BASE}{href}" if href else f"{_BASE}{_ITEM_PATH}{pid}/"
name_el = item.find(attrs={"data-testid": "ItemName"})
title = name_el.get_text(strip=True) if name_el else ""
price = _extract_current_price(item)
img_el = item.find("img")
photo_url = img_el.get("src", "") if img_el else ""
results.append({
"product_id": pid,
"url": url,
"title": title,
"price": price,
"photo_url": photo_url,
"brand": item.get("data-brand", ""),
"is_on_sale": item.get("data-is-on-sale") == "true",
})
return results
def _extract_current_price(item: BeautifulSoup) -> float:
"""Return the current (non-strikethrough) price from an ItemContainer."""
price_el = item.find(attrs={"data-testid": "ProductThumbItemPrice"})
if not price_el:
return 0.0
# Direct text nodes are the current price; the nested span is the original.
price_text = "".join(
str(c) for c in price_el.children if isinstance(c, NavigableString)
).strip()
m = _PRICE_RE.search(price_text)
if m:
try:
return float(m.group().replace(",", ""))
except ValueError:
pass
return 0.0
def parse_listing_html(html: str, product_id: str) -> dict:
"""Parse a Mercari listing page into a raw seller dict."""
soup = BeautifulSoup(html, "html.parser")
def _text(testid: str) -> str:
el = soup.find(attrs={"data-testid": testid})
return el.get_text(strip=True) if el else ""
username_raw = _text("ItemDetailsSellerUserName")
username = username_raw.lstrip("@")
num_sales = _safe_int(_text("NumSales"))
rating_count = _safe_int(_text("SellerRatingCount"))
stars = 0.0
rw = soup.find(attrs={"data-testid": "ReviewStarsWrapper"})
if rw:
try:
stars = float(rw.get("data-stars", 0))
except (ValueError, TypeError):
pass
condition = _text("ItemDetailsCondition").lower()
posted_text = _text("ItemDetailsPosted")
listing_age_days = _parse_listing_age(posted_text)
price_text = _text("ItemPrice")
price = 0.0
m = _PRICE_RE.search(price_text.replace(",", ""))
if m:
try:
price = float(m.group())
except ValueError:
pass
return {
"product_id": product_id,
"username": username,
"num_sales": num_sales, # completed sales → maps to feedback_count
"rating_count": rating_count, # number of reviews (additional signal)
"stars": stars, # 0.0-5.0 → divide by 5 = feedback_ratio
"condition": condition,
"listing_age_days": listing_age_days,
"price": price,
}
def _safe_int(text: str) -> int:
m = _PRICE_RE.search(text.replace(",", ""))
if m:
try:
return int(float(m.group()))
except ValueError:
pass
return 0
def _parse_listing_age(posted_text: str) -> int:
"""Convert a posted date like '04/10/26' to days since posted."""
from datetime import datetime, timezone
m = _POSTED_RE.search(posted_text)
if not m:
return 0
try:
month, day, year = int(m.group(1)), int(m.group(2)), int(m.group(3))
if year < 100:
year += 2000
posted = datetime(year, month, day, tzinfo=timezone.utc)
return (datetime.now(timezone.utc) - posted).days
except (ValueError, OverflowError):
return 0

app/tasks/monitor.py (new file, 145 lines)

@@ -0,0 +1,145 @@
# app/tasks/monitor.py
"""Background saved-search monitor — polls eBay and writes WatchAlerts for new listings.
Design notes:
- Runs synchronously inside an asyncio.to_thread() call from the polling loop.
- Uses the same eBay adapter + trust scoring pipeline as the live search endpoint.
- Dedup via watch_alerts (saved_search_id, platform_listing_id) UNIQUE constraint.
- Never takes any transactional action; alert only.
"""
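The design notes above imply a polling loop that pushes each blocking `run_monitor_search()` call onto a worker thread via `asyncio.to_thread()`. A minimal sketch of one poll iteration; the `poll_once` shape and the stub are hypothetical, only the to_thread hand-off reflects the stated design:

```python
import asyncio

def run_monitor_search_stub(search_id: int) -> int:
    # Stand-in for the blocking run_monitor_search() below (sync scraping
    # + scoring); returns the number of new alerts written.
    return 0

async def poll_once(search_ids: list[int]) -> int:
    # One iteration of a hypothetical polling loop: each saved search runs
    # on a worker thread so the event loop is never blocked by Playwright
    # or sqlite work.
    total = 0
    for sid in search_ids:
        total += await asyncio.to_thread(run_monitor_search_stub, sid)
    return total
```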
from __future__ import annotations
import json
import logging
from pathlib import Path
from app.db.models import SavedSearch, WatchAlert
from app.db.store import Store
log = logging.getLogger(__name__)
_AUCTION_ALERT_WINDOW_HOURS = 24 # alert on auctions ending within this window
def should_alert(
*,
trust_score: int,
score_is_partial: bool,
price: float,
buying_format: str,
min_trust_score: int,
ends_at: "str | None" = None,
) -> bool:
"""Return True if a listing qualifies for a watch alert.
BIN (fixed_price / best_offer): alert immediately; these sell on a first-come
basis, so speed matters. Require a higher trust bar on partial scores to reduce
false positives while BTF scraping is still in flight.
Auction: only alert when the auction is within _AUCTION_ALERT_WINDOW_HOURS of
ending. Alerting on a 7-day auction 6 days early is noise; the user can't act
usefully until the end window anyway. Bid scheduling (paid+) and sniping algo
(premium) are separate features built on top of this alert layer.
"""
from datetime import datetime, timezone
# Partial scores: apply a +10 buffer so we don't surface unreliable signals.
effective_min = min_trust_score + 10 if score_is_partial else min_trust_score
if trust_score < effective_min:
return False
if buying_format in ("fixed_price", "best_offer"):
# BIN: alert immediately — inventory can disappear any time.
return True
if buying_format == "auction":
if not ends_at:
# No end time recorded — alert anyway rather than silently skip.
return True
try:
end = datetime.fromisoformat(ends_at.replace("Z", "+00:00"))
hours_remaining = (end - datetime.now(timezone.utc)).total_seconds() / 3600
return 0 < hours_remaining <= _AUCTION_ALERT_WINDOW_HOURS
except (ValueError, TypeError):
log.debug("should_alert: could not parse ends_at=%r, alerting anyway", ends_at)
return True
# Unknown format — alert and let the user decide.
return True
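The gating rules above can be condensed into a small table-like function; this restatement expresses the auction end time as hours-remaining instead of an ISO string, purely for brevity:

```python
WINDOW_H = 24  # same role as _AUCTION_ALERT_WINDOW_HOURS above

def should_alert(trust_score, score_is_partial, buying_format,
                 min_trust_score, ends_at_hours=None):
    # Condensed restatement of the gating above.
    effective_min = min_trust_score + 10 if score_is_partial else min_trust_score
    if trust_score < effective_min:
        return False  # partial scores need a +10 buffer to qualify
    if buying_format in ("fixed_price", "best_offer"):
        return True   # BIN: first-come, alert immediately
    if buying_format == "auction" and ends_at_hours is not None:
        return 0 < ends_at_hours <= WINDOW_H
    return True       # unknown format or missing end time: alert anyway
```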
def run_monitor_search(
search: SavedSearch,
*,
user_db: Path,
shared_db: Path,
) -> int:
"""Execute one background monitor run for a saved search.
Fetches current listings, scores them, writes new high-trust finds
to watch_alerts. Returns the count of new alerts written.
Called from the async polling loop via asyncio.to_thread().
"""
from app.platforms.ebay.adapter import EbayAdapter
from app.trust import TrustScorer
log.info("Monitor: checking saved search %d (%r)", search.id, search.name)
filters = json.loads(search.filters_json or "{}")
query = filters.pop("query_raw", search.query)
try:
adapter = EbayAdapter()
raw_listings = adapter.search(query, **filters)
except Exception as exc:
log.warning("Monitor: eBay search failed for search %d: %s", search.id, exc)
return 0
shared_store = Store(shared_db)
user_store = Store(user_db)
scorer = TrustScorer(shared_store)
try:
trust_scores = scorer.score_batch(raw_listings, query)
except Exception as exc:
log.warning("Monitor: trust scoring failed for search %d: %s", search.id, exc)
return 0
new_alert_count = 0
for listing, trust in zip(raw_listings, trust_scores):
qualifies = should_alert(
trust_score=trust.composite_score,
score_is_partial=trust.score_is_partial,
price=listing.price,
buying_format=listing.buying_format,
min_trust_score=search.min_trust_score,
ends_at=listing.ends_at,
)
if not qualifies:
continue
alert = WatchAlert(
saved_search_id=search.id,
platform_listing_id=listing.platform_listing_id,
title=listing.title,
price=listing.price,
currency=listing.currency,
trust_score=trust.composite_score,
url=listing.url,
)
_, is_new = user_store.upsert_alert(alert)
if is_new:
new_alert_count += 1
log.info(
"Monitor: new alert — search %d, listing %s, score=%d",
search.id, listing.platform_listing_id, trust.composite_score,
)
user_store.mark_search_checked(search.id)
log.info(
"Monitor: search %d done — %d new alerts from %d listings",
search.id, new_alert_count, len(raw_listings),
)
return new_alert_count


@@ -11,6 +11,15 @@ HARD_FILTER_AGE_DAYS = 7
HARD_FILTER_BAD_RATIO_MIN_COUNT = 20
HARD_FILTER_BAD_RATIO_THRESHOLD = 0.80
# Above this lifetime count the 12-month ratio may cover only a tiny recent sample,
# making a hard bad-actor flag disproportionate. Instead we emit the softer
# "declining_ratio" flag and let the composite score carry the penalty.
# Note: buyer-feedback-only accounts (e.g. longtime buyers who start selling) are a
# related edge case that requires profile-page scraping to detect properly — tracked
# in snipe#52 as a medium-term fix.
HARD_FILTER_BAD_RATIO_MAX_COUNT = 500
HARD_FILTER_BAD_RATIO_HIGH_THRESHOLD = 0.60 # catastrophically bad even for high-volume
# Sellers above this feedback count are treated as established retailers.
# Stock photo reuse (duplicate_photo) is suppressed for them — large retailers
# legitimately share manufacturer images across many listings.
@@ -117,11 +126,23 @@ class Aggregator:
# Hard filters
if seller and seller.account_age_days is not None and seller.account_age_days < HARD_FILTER_AGE_DAYS:
red_flags.append("new_account")
if seller and (
seller.feedback_ratio < HARD_FILTER_BAD_RATIO_THRESHOLD
and seller.feedback_count > HARD_FILTER_BAD_RATIO_MIN_COUNT
):
if seller and seller.feedback_ratio == 0.0 and seller.feedback_count > 0:
# 12-month ratio missing from page — returning seller or buyer-only account.
# Score will be partial (metadata._feedback_ratio returns None). Soft flag
# only: do NOT fire established_bad_actor on what is likely missing data.
red_flags.append("no_recent_seller_data")
elif seller and seller.feedback_ratio < HARD_FILTER_BAD_RATIO_THRESHOLD:
if HARD_FILTER_BAD_RATIO_MIN_COUNT < seller.feedback_count <= HARD_FILTER_BAD_RATIO_MAX_COUNT:
# Moderate-volume account with consistently bad ratio → hard flag.
red_flags.append("established_bad_actor")
elif seller.feedback_count > HARD_FILTER_BAD_RATIO_MAX_COUNT:
if seller.feedback_ratio < HARD_FILTER_BAD_RATIO_HIGH_THRESHOLD:
# High-volume seller with catastrophic ratio → still hard flag.
red_flags.append("established_bad_actor")
else:
# High-volume seller with declining but not catastrophic ratio.
# 12-month window may cover only a small recent sample — soft flag only.
red_flags.append("declining_ratio")
if seller and seller.feedback_count == 0:
red_flags.append("zero_feedback")
# Zero feedback is a deliberate signal, not missing data — cap composite score
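The tiered branch above maps (ratio, count) pairs onto at most one flag. A condensed restatement using the constants defined earlier in the file (MIN_COUNT=20, MAX_COUNT=500, THRESHOLD=0.80, HIGH_THRESHOLD=0.60), with the zero_feedback case omitted:

```python
def ratio_red_flag(ratio: float, count: int) -> "str | None":
    # Condensed restatement of the aggregator branch above.
    if ratio == 0.0 and count > 0:
        return "no_recent_seller_data"  # missing 12-month data: soft flag
    if ratio < 0.80:
        if 20 < count <= 500:
            return "established_bad_actor"  # moderate volume, bad ratio
        if count > 500:
            # high volume: hard flag only if catastrophic, else soft flag
            return "established_bad_actor" if ratio < 0.60 else "declining_ratio"
    return None  # low-count sellers fall through with no ratio flag
```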


@@ -44,7 +44,13 @@ class MetadataScorer:
if count < 200: return 15
return 20
def _feedback_ratio(self, ratio: float, count: int) -> int:
def _feedback_ratio(self, ratio: float, count: int) -> Optional[int]:
# ratio=0.0 with count>0 means the 12-month percentage wasn't on the page —
# eBay omits the ratio for returning/buyer-only sellers with no recent sales.
# Treat as missing rather than "literally 0% positive" (which eBay doesn't allow
# on active accounts — those get suspended long before reaching 0%).
if ratio == 0.0 and count > 0:
return None
if ratio < 0.80 and count > 20: return 0
if ratio < 0.90: return 5
if ratio < 0.95: return 10
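A standalone restatement of the new `_feedback_ratio` behaviour. The diff hunk cuts off before the >= 0.95 band, so the final return value here (15) is an assumed placeholder, not the real scorer's value:

```python
from typing import Optional

def feedback_ratio_points(ratio: float, count: int) -> Optional[int]:
    # Condensed restatement of _feedback_ratio above.
    if ratio == 0.0 and count > 0:
        return None  # missing 12-month data, not literally 0% positive
    if ratio < 0.80 and count > 20:
        return 0
    if ratio < 0.90:
        return 5
    if ratio < 0.95:
        return 10
    return 15  # assumption: top band (cut off in the hunk above)
```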


@@ -23,6 +23,7 @@ services:
# CF_ORCH_URL routes LLM query builder through cf-orch for VRAM-aware scheduling.
# Override in .env to use a different coordinator URL.
CF_ORCH_URL: "http://host.docker.internal:7700"
CF_APP_NAME: snipe
extra_hosts:
- "host.docker.internal:host-gateway"
# No network_mode: host — isolated on snipe-cloud-net; nginx reaches it via 'api:8510'


@@ -16,6 +16,10 @@ server {
# Forward the session header injected by Caddy from the cf_session cookie.
# Caddy adds: header_up X-CF-Session {http.request.cookie.cf_session}
proxy_set_header X-CF-Session $http_x_cf_session;
# eBay search + comps can take 60-90s (Marketplace Insights 404 → Browse fallback).
# Default 60s proxy_read_timeout drops slow searches with a NetworkError on the client.
proxy_read_timeout 120s;
proxy_send_timeout 120s;
}
# index.html — never cache; ensures clients always get the latest entry point

Binary file not shown.
Before: 118 KiB → After: 122 KiB

Binary file not shown.
After: 164 KiB (new file)


@@ -0,0 +1,64 @@
"""Reproduce the exact FastAPI code path: pool warmup → slot close → _fetch_fresh.
Run inside the container:
docker exec -it snipe-api-1 python /app/snipe/scripts/debug_fetch_fresh.py
"""
import sys, time, threading
sys.path.insert(0, '/app/snipe')
from bs4 import BeautifulSoup
from app.platforms.ebay.browser_pool import BrowserPool, _close_slot
URL = "https://www.mercari.com/search/?keyword=rtx+4090&sortBy=SORT_SCORE&priceMax=800"
print("=== Test 1: _fetch_fresh with no pool (baseline) ===", flush=True)
pool0 = BrowserPool(size=0)
t0 = time.time()
html = pool0._fetch_fresh(URL, wait_for_timeout_ms=8000)
items = BeautifulSoup(html, "html.parser").find_all(attrs={"data-testid": "ItemContainer"})
print(f"Items: {len(items)}, HTML: {len(html)}b, elapsed: {time.time()-t0:.1f}s", flush=True)
print("\n=== Test 2: pool warmup (size=2), grab slot, close it, then _fetch_fresh ===", flush=True)
pool2 = BrowserPool(size=2)
# Run warmup in a background thread, then block until it completes
warm_done = threading.Event()
def do_warmup():
pool2.start()
warm_done.set()
t = threading.Thread(target=do_warmup, daemon=True)
t.start()
warm_done.wait(timeout=30)
print(f"Pool size after warmup: {pool2._q.qsize()}", flush=True)
# Grab a slot and close it (simulating the thread-error path)
import queue
try:
slot = pool2._q.get(timeout=3.0)
print(f"Got slot on display :{slot.display_num}", flush=True)
_close_slot(slot)
print("Slot closed", flush=True)
except queue.Empty:
print("Pool empty — no slot to simulate", flush=True)
# Now call _fetch_fresh in this thread (same as FastAPI handler thread)
print("Calling _fetch_fresh from warmup-thread context...", flush=True)
t0 = time.time()
html2 = pool2._fetch_fresh(URL, wait_for_timeout_ms=8000)
items2 = BeautifulSoup(html2, "html.parser").find_all(attrs={"data-testid": "ItemContainer"})
print(f"Items: {len(items2)}, HTML: {len(html2)}b, elapsed: {time.time()-t0:.1f}s", flush=True)
# Save HTML for inspection if empty
if len(items2) == 0:
with open("/tmp/debug_mercari.html", "w") as f:
f.write(html2)
print("Saved HTML to /tmp/debug_mercari.html", flush=True)
title = BeautifulSoup(html2, "html.parser").find("title")
print("Page title:", title.get_text() if title else "(none)", flush=True)
if "Just a moment" in html2 or "turnstile" in html2.lower():
print("BLOCKED: Cloudflare challenge", flush=True)
else:
body = BeautifulSoup(html2, "html.parser").find("body")
if body:
print("Body snippet:", body.get_text(separator=" ", strip=True)[:300], flush=True)

scripts/probe_mercari.py (new file, 113 lines)

@@ -0,0 +1,113 @@
"""One-shot Mercari probe using the same headed Chromium + Xvfb + stealth stack
as the eBay scraper. Run inside the snipe-api container:
docker exec -it snipe-api-1 python /app/scripts/probe_mercari.py
"""
from __future__ import annotations
import itertools
import os
import subprocess
import sys
import time
_display_counter = itertools.count(200)
_CHROMIUM_ARGS = ["--no-sandbox", "--disable-dev-shm-usage"]
_USER_AGENT = (
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
"(KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36"
)
SEARCH_URL = "https://www.mercari.com/search/?keyword=rtx+4090"
# Give Cloudflare challenge time to resolve (if it does)
WAIT_MS = 8_000
def probe(url: str) -> str:
from playwright.sync_api import sync_playwright
from playwright_stealth import Stealth
display_num = next(_display_counter)
display = f":{display_num}"
env = os.environ.copy()
env["DISPLAY"] = display
xvfb = subprocess.Popen(
["Xvfb", display, "-screen", "0", "1280x800x24"],
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
time.sleep(0.5)
try:
with sync_playwright() as pw:
browser = pw.chromium.launch(
headless=False,
env=env,
args=_CHROMIUM_ARGS,
)
ctx = browser.new_context(
user_agent=_USER_AGENT,
viewport={"width": 1280, "height": 800},
)
page = ctx.new_page()
Stealth().apply_stealth_sync(page)
print(f"[probe] Navigating to {url}", flush=True)
response = page.goto(url, wait_until="domcontentloaded", timeout=40_000)
print(f"[probe] HTTP status: {response.status if response else 'unknown'}", flush=True)
print(f"[probe] Waiting {WAIT_MS}ms for JS / Turnstile …", flush=True)
page.wait_for_timeout(WAIT_MS)
html = page.content()
title = page.title()
print(f"[probe] Page title: {title!r}", flush=True)
browser.close()
finally:
xvfb.terminate()
xvfb.wait()
return html
def analyse(html: str) -> None:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, "html.parser")
# Cloudflare challenge indicators
if "Just a moment" in html or "cf-challenge" in html or "turnstile" in html.lower():
print("[result] BLOCKED — Cloudflare Turnstile still active")
return
print("[result] Cloudflare challenge NOT detected — page appears to have loaded")
# Try to find listing cards
# Mercari US marks item cards with data-testid attributes or ItemCell-style classes
candidates = [
soup.select("[data-testid='ItemCell']"),
soup.select("[data-testid='item-cell']"),
soup.select("li[data-testid]"),
soup.select(".merList .merListItem"),
soup.select("[class*='ItemCell']"),
soup.select("[class*='item-cell']"),
]
for sel_result in candidates:
if sel_result:
print(f"[result] Found {len(sel_result)} listing card(s) via selector")
card = sel_result[0]
print(f"[result] First card snippet:\n{card.prettify()[:800]}")
return
# Fallback: show body text summary
body = soup.find("body")
text = body.get_text(separator=" ", strip=True)[:500] if body else html[:500]
print(f"[result] No listing cards found. Body text preview:\n{text}")
# Save full HTML for manual inspection
out = "/tmp/mercari_probe.html"
with open(out, "w") as fh:
fh.write(html)
print(f"[result] Full HTML saved to {out}")
if __name__ == "__main__":
html = probe(SEARCH_URL)
analyse(html)


@@ -0,0 +1,456 @@
"""Tests for app.platforms.ebay.browser_pool (thread-local design).
All tests run without real Chromium / Xvfb / Playwright.
Playwright, Xvfb subprocess calls, and Stealth are mocked throughout.
"""
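The thread-local design these tests exercise (per #53: each worker thread lazily owns one Playwright slot, and a registry keyed by thread id lets stop() close everything) can be sketched without Playwright at all. Names here are simplified and do not match the real module's API:

```python
import threading

_thread_local = threading.local()

class SlotRegistry:
    # Minimal sketch of the thread-local slot pattern: each thread lazily
    # creates its own "slot" (in production, a Playwright+Chromium pair
    # bound to that thread's greenlet loop); a dict keyed by thread id
    # lets shutdown find and close every slot.
    def __init__(self):
        self._registry: "dict[int, object]" = {}
        self._lock = threading.Lock()

    def get_or_create(self, factory):
        slot = getattr(_thread_local, "slot", None)
        if slot is None:
            slot = factory()  # created by, and used only on, this thread
            _thread_local.slot = slot
            with self._lock:
                self._registry[threading.get_ident()] = slot
        return slot

    def stop(self, closer):
        with self._lock:
            for slot in self._registry.values():
                closer(slot)  # close slots from the shutdown thread
            self._registry.clear()
```

Because a slot is only ever *used* by the thread that created it, the "cannot switch to a different thread" greenlet panic cannot occur; stop() merely tears processes down.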
from __future__ import annotations
import subprocess
import threading
import time
from typing import Any
from unittest.mock import MagicMock, patch
import pytest
# ---------------------------------------------------------------------------
# Helpers to reset the module-level singleton between tests
# ---------------------------------------------------------------------------
def _reset_pool_singleton():
import app.platforms.ebay.browser_pool as _mod
_mod._pool = None
def _reset_thread_local():
import app.platforms.ebay.browser_pool as _mod
_mod._thread_local.slot = None
@pytest.fixture(autouse=True)
def reset_pool():
_reset_pool_singleton()
_reset_thread_local()
yield
_reset_pool_singleton()
_reset_thread_local()
def _make_fake_slot():
from app.platforms.ebay.browser_pool import _PooledBrowser
xvfb = MagicMock(spec=subprocess.Popen)
pw = MagicMock()
browser = MagicMock()
ctx = MagicMock()
return _PooledBrowser(
xvfb=xvfb, pw=pw, browser=browser, ctx=ctx,
display_num=100, last_used_ts=time.time(),
)
# ---------------------------------------------------------------------------
# Singleton tests
# ---------------------------------------------------------------------------
class TestGetPoolSingleton:
def test_returns_same_instance(self):
from app.platforms.ebay.browser_pool import get_pool, BrowserPool
assert get_pool() is get_pool()
def test_returns_browser_pool_instance(self):
from app.platforms.ebay.browser_pool import get_pool, BrowserPool
assert isinstance(get_pool(), BrowserPool)
def test_default_size_is_two(self):
from app.platforms.ebay.browser_pool import get_pool
assert get_pool()._size == 2
def test_custom_size_from_env(self, monkeypatch):
monkeypatch.setenv("BROWSER_POOL_SIZE", "5")
from app.platforms.ebay.browser_pool import get_pool
assert get_pool()._size == 5
# ---------------------------------------------------------------------------
# start() / stop() lifecycle tests
# ---------------------------------------------------------------------------
class TestLifecycle:
def test_start_is_noop_when_playwright_unavailable(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=2)
with patch.object(pool, "_check_playwright", return_value=False):
pool.start()
assert pool._started is True
assert pool._slot_registry == {}
def test_start_only_runs_once(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
with patch.object(pool, "_check_playwright", return_value=False):
pool.start()
pool.start()
assert pool._started is True
def test_stop_closes_all_registry_slots(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=2)
slot1 = _make_fake_slot()
slot2 = _make_fake_slot()
pool._slot_registry[1001] = slot1
pool._slot_registry[1002] = slot2
with patch("app.platforms.ebay.browser_pool._close_slot") as mock_close:
pool.stop()
assert mock_close.call_count == 2
assert pool._slot_registry == {}
assert pool._stopped is True
def test_stop_on_empty_registry_is_safe(self):
from app.platforms.ebay.browser_pool import BrowserPool
BrowserPool(size=2).stop()
# ---------------------------------------------------------------------------
# fetch_html — thread-local slot hit path
# ---------------------------------------------------------------------------
class TestFetchHtmlSlotHit:
def test_uses_existing_slot_and_replenishes(self):
from app.platforms.ebay.browser_pool import BrowserPool
import app.platforms.ebay.browser_pool as _mod
pool = BrowserPool(size=1)
slot = _make_fake_slot()
_mod._thread_local.slot = slot
fresh_slot = _make_fake_slot()
with (
patch.object(pool, "_fetch_with_slot", return_value="<html>ok</html>") as mock_fetch,
patch("app.platforms.ebay.browser_pool._replenish_slot", return_value=fresh_slot),
patch.object(pool, "_register_slot") as mock_register,
patch("time.sleep"),
):
html = pool.fetch_html("https://www.ebay.com/sch/i.html?_nkw=test", delay=0)
assert html == "<html>ok</html>"
mock_fetch.assert_called_once_with(
slot, "https://www.ebay.com/sch/i.html?_nkw=test",
wait_for_selector=None, wait_for_timeout_ms=2000,
)
mock_register.assert_called_once_with(fresh_slot)
def test_delay_is_respected(self):
from app.platforms.ebay.browser_pool import BrowserPool
import app.platforms.ebay.browser_pool as _mod
pool = BrowserPool(size=1)
_mod._thread_local.slot = _make_fake_slot()
with (
patch.object(pool, "_fetch_with_slot", return_value="<html/>"),
patch("app.platforms.ebay.browser_pool._replenish_slot", return_value=_make_fake_slot()),
patch.object(pool, "_register_slot"),
patch("app.platforms.ebay.browser_pool.time") as mock_time,
):
pool.fetch_html("https://example.com", delay=1.5)
mock_time.sleep.assert_called_once_with(1.5)
# ---------------------------------------------------------------------------
# fetch_html — no slot / fallback path
# ---------------------------------------------------------------------------
class TestFetchHtmlFallback:
def test_falls_back_when_no_slot_and_playwright_unavailable(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
# No thread-local slot; playwright unavailable → _get_or_create returns None.
with (
patch.object(pool, "_get_or_create_thread_slot", return_value=None),
patch.object(pool, "_fetch_fresh", return_value="<html>fresh</html>") as mock_fresh,
patch("time.sleep"),
):
html = pool.fetch_html("https://www.ebay.com/sch/i.html?_nkw=widget", delay=0)
assert html == "<html>fresh</html>"
mock_fresh.assert_called_once_with(
"https://www.ebay.com/sch/i.html?_nkw=widget",
wait_for_selector=None, wait_for_timeout_ms=2000,
)
def test_falls_back_when_pooled_fetch_raises(self):
from app.platforms.ebay.browser_pool import BrowserPool
import app.platforms.ebay.browser_pool as _mod
pool = BrowserPool(size=1)
slot = _make_fake_slot()
_mod._thread_local.slot = slot
with (
patch.object(pool, "_fetch_with_slot", side_effect=RuntimeError("Chromium crashed")),
patch.object(pool, "_fetch_fresh", return_value="<html>recovered</html>") as mock_fresh,
patch("app.platforms.ebay.browser_pool._close_slot") as mock_close,
patch.object(pool, "_unregister_slot"),
patch("time.sleep"),
):
html = pool.fetch_html("https://www.ebay.com/", delay=0)
assert html == "<html>recovered</html>"
mock_close.assert_called_once_with(slot)
mock_fresh.assert_called_once()
# ---------------------------------------------------------------------------
# Thread-local slot management
# ---------------------------------------------------------------------------
class TestThreadLocalSlotManagement:
def test_get_or_create_returns_existing_slot(self):
import app.platforms.ebay.browser_pool as _mod
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
pool._playwright_available = True
existing = _make_fake_slot()
_mod._thread_local.slot = existing
result = pool._get_or_create_thread_slot()
assert result is existing
def test_get_or_create_launches_new_slot_when_absent(self):
import app.platforms.ebay.browser_pool as _mod
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
pool._playwright_available = True
_mod._thread_local.slot = None
new_slot = _make_fake_slot()
with (
patch("app.platforms.ebay.browser_pool._launch_slot", return_value=new_slot),
patch.object(pool, "_register_slot") as mock_register,
):
result = pool._get_or_create_thread_slot()
assert result is new_slot
mock_register.assert_called_once_with(new_slot)
def test_get_or_create_returns_none_when_playwright_unavailable(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
pool._playwright_available = False
assert pool._get_or_create_thread_slot() is None
def test_register_slot_sets_thread_local_and_registry(self):
import app.platforms.ebay.browser_pool as _mod
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
slot = _make_fake_slot()
pool._register_slot(slot)
assert _mod._thread_local.slot is slot
assert threading.get_ident() in pool._slot_registry
def test_unregister_slot_clears_thread_local_and_registry(self):
import app.platforms.ebay.browser_pool as _mod
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
slot = _make_fake_slot()
pool._register_slot(slot)
pool._unregister_slot()
assert getattr(_mod._thread_local, "slot", None) is None
assert threading.get_ident() not in pool._slot_registry
def test_different_threads_get_independent_slots(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=2)
pool._playwright_available = True
slots_seen: list = []
errors: list = []
def worker():
new_slot = _make_fake_slot()
with patch("app.platforms.ebay.browser_pool._launch_slot", return_value=new_slot):
s = pool._get_or_create_thread_slot()
slots_seen.append(s)
t1 = threading.Thread(target=worker)
t2 = threading.Thread(target=worker)
t1.start(); t2.start()
t1.join(); t2.join()
assert len(slots_seen) == 2
# Each thread created and received its own slot; what matters is that
# both threads succeeded without interfering with each other.
assert all(s is not None for s in slots_seen)
# ---------------------------------------------------------------------------
# ImportError graceful fallback
# ---------------------------------------------------------------------------
class TestImportErrorHandling:
def test_check_playwright_returns_false_on_import_error(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=2)
with patch.dict("sys.modules", {"playwright": None, "playwright_stealth": None}):
pool._playwright_available = None
result = pool._check_playwright()
assert result is False
assert pool._playwright_available is False
def test_start_logs_warning_when_playwright_missing(self, caplog):
import logging
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
pool._playwright_available = False
with patch.object(pool, "_check_playwright", return_value=False):
with caplog.at_level(logging.WARNING, logger="app.platforms.ebay.browser_pool"):
pool.start()
assert any("not available" in r.message for r in caplog.records)
def test_fetch_fresh_raises_runtime_error_when_playwright_missing(self):
from app.platforms.ebay.browser_pool import BrowserPool
pool = BrowserPool(size=1)
with patch.dict("sys.modules", {"playwright": None, "playwright.sync_api": None}):
with pytest.raises(RuntimeError, match="Playwright not installed"):
pool._fetch_fresh("https://www.ebay.com/")
# ---------------------------------------------------------------------------
# _replenish_slot helper
# ---------------------------------------------------------------------------
class TestReplenishSlot:
def test_replenish_closes_old_context_and_opens_new(self):
from app.platforms.ebay.browser_pool import _replenish_slot, _PooledBrowser
old_ctx = MagicMock()
new_ctx = MagicMock()
browser = MagicMock()
browser.new_context.return_value = new_ctx
slot = _PooledBrowser(
xvfb=MagicMock(), pw=MagicMock(), browser=browser,
ctx=old_ctx, display_num=101, last_used_ts=time.time() - 10,
)
result = _replenish_slot(slot)
old_ctx.close.assert_called_once()
browser.new_context.assert_called_once()
assert result.ctx is new_ctx
assert result.browser is browser
assert result.xvfb is slot.xvfb
assert result.last_used_ts > slot.last_used_ts
# ---------------------------------------------------------------------------
# _close_slot helper
# ---------------------------------------------------------------------------
class TestCloseSlot:
def test_close_slot_closes_all_components(self):
from app.platforms.ebay.browser_pool import _close_slot, _PooledBrowser
xvfb = MagicMock(spec=subprocess.Popen)
pw = MagicMock()
browser = MagicMock()
ctx = MagicMock()
slot = _PooledBrowser(
xvfb=xvfb, pw=pw, browser=browser, ctx=ctx,
display_num=102, last_used_ts=time.time(),
)
_close_slot(slot)
ctx.close.assert_called_once()
browser.close.assert_called_once()
pw.stop.assert_called_once()
xvfb.terminate.assert_called_once()
xvfb.wait.assert_called_once()
def test_close_slot_ignores_exceptions(self):
from app.platforms.ebay.browser_pool import _close_slot, _PooledBrowser
xvfb = MagicMock(spec=subprocess.Popen)
xvfb.terminate.side_effect = OSError("already dead")
xvfb.wait.side_effect = OSError("already dead")
pw = MagicMock()
pw.stop.side_effect = RuntimeError("stopped")
browser = MagicMock()
browser.close.side_effect = RuntimeError("gone")
ctx = MagicMock()
ctx.close.side_effect = RuntimeError("gone")
slot = _PooledBrowser(
xvfb=xvfb, pw=pw, browser=browser, ctx=ctx,
display_num=103, last_used_ts=time.time(),
)
_close_slot(slot) # must not raise
# ---------------------------------------------------------------------------
# Scraper integration — _fetch_url uses pool
# ---------------------------------------------------------------------------
class TestScraperUsesPool:
def test_fetch_url_delegates_to_pool(self):
from app.platforms.ebay.browser_pool import BrowserPool
from app.platforms.ebay.scraper import ScrapedEbayAdapter
from app.db.store import Store
store = MagicMock(spec=Store)
adapter = ScrapedEbayAdapter(store, delay=0)
fake_pool = MagicMock(spec=BrowserPool)
fake_pool.fetch_html.return_value = "<html>pooled</html>"
with patch("app.platforms.ebay.browser_pool.get_pool", return_value=fake_pool):
import app.platforms.ebay.scraper as scraper_mod
scraper_mod._html_cache.clear()
html = adapter._fetch_url("https://www.ebay.com/sch/i.html?_nkw=test")
assert html == "<html>pooled</html>"
fake_pool.fetch_html.assert_called_once_with(
"https://www.ebay.com/sch/i.html?_nkw=test", delay=0
)
def test_fetch_url_uses_cache_before_pool(self):
from app.platforms.ebay.scraper import ScrapedEbayAdapter, _html_cache, _HTML_CACHE_TTL
from app.db.store import Store
store = MagicMock(spec=Store)
adapter = ScrapedEbayAdapter(store, delay=0)
url = "https://www.ebay.com/sch/i.html?_nkw=cached"
_html_cache[url] = ("<html>cached</html>", time.time() + _HTML_CACHE_TTL)
fake_pool = MagicMock()
with patch("app.platforms.ebay.browser_pool.get_pool", return_value=fake_pool):
html = adapter._fetch_url(url)
assert html == "<html>cached</html>"
fake_pool.fetch_html.assert_not_called()
_html_cache.pop(url, None)

tests/test_search_cache.py (new file, 402 lines)

@@ -0,0 +1,402 @@
"""Tests for the short-TTL search result cache in api/main.py.
Covers:
- _cache_key stability (same inputs produce the same key)
- _cache_key uniqueness (different inputs produce different keys)
- cache hit path returns early without scraping (async worker)
- cache miss path stores result in _search_result_cache
- refresh=True bypasses cache read (still writes fresh result)
- TTL expiry: expired entries are not returned as hits
- _evict_expired_cache removes expired entries
"""
from __future__ import annotations
import os
import queue as _queue
import time
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
# ── Helpers ───────────────────────────────────────────────────────────────────
def _clear_cache():
"""Reset module-level cache state between tests."""
import api.main as _main
_main._search_result_cache.clear()
_main._last_eviction_ts = 0.0
@pytest.fixture(autouse=True)
def isolated_cache():
"""Ensure each test starts with an empty cache."""
_clear_cache()
yield
_clear_cache()
@pytest.fixture
def client(tmp_path):
"""TestClient backed by a fresh tmp DB."""
os.environ["SNIPE_DB"] = str(tmp_path / "snipe.db")
from api.main import app
from fastapi.testclient import TestClient
return TestClient(app, raise_server_exceptions=False)
def _make_mock_listing(listing_id: str = "123456789", seller_id: str = "test_seller"):
"""Return a MagicMock listing (for use where asdict() is NOT called on it)."""
m = MagicMock()
m.platform_listing_id = listing_id
m.seller_platform_id = seller_id
m.title = "Test GPU"
m.price = 100.0
m.currency = "USD"
m.condition = "Used"
m.url = f"https://www.ebay.com/itm/{listing_id}"
m.photo_urls = []
m.listing_age_days = 5
m.buying_format = "fixed_price"
m.ends_at = None
m.fetched_at = None
m.trust_score_id = None
m.id = 1
m.category_name = None
return m
def _make_real_listing(listing_id: str = "123456789", seller_id: str = "test_seller"):
"""Return a real Listing dataclass instance (for use where asdict() is called)."""
from app.db.models import Listing
return Listing(
platform="ebay",
platform_listing_id=listing_id,
title="Test GPU",
price=100.0,
currency="USD",
condition="Used",
seller_platform_id=seller_id,
url=f"https://www.ebay.com/itm/{listing_id}",
photo_urls=[],
listing_age_days=5,
buying_format="fixed_price",
id=None,
)
# ── _cache_key unit tests ─────────────────────────────────────────────────────
def test_cache_key_stable_for_same_inputs():
"""The same parameter set always produces the same key."""
from api.main import _cache_key
k1 = _cache_key("rtx 3080", 400.0, 100.0, 2, "rtx,3080", "all", "mining", "27386")
k2 = _cache_key("rtx 3080", 400.0, 100.0, 2, "rtx,3080", "all", "mining", "27386")
assert k1 == k2
def test_cache_key_case_normalised():
"""Query is normalised to lower-case + stripped before hashing."""
from api.main import _cache_key
k1 = _cache_key("RTX 3080", None, None, 1, "", "all", "", "")
k2 = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
assert k1 == k2
def test_cache_key_differs_on_query_change():
"""Different query strings must produce different keys."""
from api.main import _cache_key
k1 = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
k2 = _cache_key("gtx 1080", None, None, 1, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_price_filter():
"""Different max_price must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", 400.0, None, 1, "", "all", "", "")
k2 = _cache_key("gpu", 500.0, None, 1, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_min_price():
"""Different min_price must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, 50.0, 1, "", "all", "", "")
k2 = _cache_key("gpu", None, 100.0, 1, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_pages():
"""Different page count must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "", "all", "", "")
k2 = _cache_key("gpu", None, None, 2, "", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_must_include():
"""Different must_include terms must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "rtx", "all", "", "")
k2 = _cache_key("gpu", None, None, 1, "gtx", "all", "", "")
assert k1 != k2
def test_cache_key_differs_on_must_exclude():
"""Different must_exclude terms must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "", "all", "mining", "")
k2 = _cache_key("gpu", None, None, 1, "", "all", "defective", "")
assert k1 != k2
def test_cache_key_differs_on_category_id():
"""Different category_id must produce a different key."""
from api.main import _cache_key
k1 = _cache_key("gpu", None, None, 1, "", "all", "", "27386")
k2 = _cache_key("gpu", None, None, 1, "", "all", "", "12345")
assert k1 != k2
def test_cache_key_is_16_chars():
"""Key must be exactly 16 hex characters."""
from api.main import _cache_key
k = _cache_key("gpu", None, None, 1, "", "all", "", "")
assert len(k) == 16
assert all(c in "0123456789abcdef" for c in k)
# ── TTL / eviction unit tests ─────────────────────────────────────────────────
def test_expired_entry_is_not_returned_as_hit():
"""An entry past its TTL must not be treated as a cache hit."""
import api.main as _main
from api.main import _cache_key
key = _cache_key("gpu", None, None, 1, "", "all", "", "")
# Write an already-expired entry.
_main._search_result_cache[key] = (
{"listings": [], "market_price": None},
time.time() - 1.0, # expired 1 second ago
)
cached = _main._search_result_cache.get(key)
assert cached is not None
payload, expiry = cached
# Simulate the hit-check used in main.py
assert expiry <= time.time(), "Entry should be expired"
def test_evict_expired_cache_removes_stale_entries():
"""_evict_expired_cache must remove entries whose expiry has passed."""
import api.main as _main
from api.main import _cache_key, _evict_expired_cache
key_expired = _cache_key("old query", None, None, 1, "", "all", "", "")
key_valid = _cache_key("new query", None, None, 1, "", "all", "", "")
_main._search_result_cache[key_expired] = (
{"listings": [], "market_price": None},
time.time() - 10.0, # already expired
)
_main._search_result_cache[key_valid] = (
{"listings": [], "market_price": 99.0},
time.time() + 300.0, # valid for 5 min
)
# Reset throttle so eviction runs immediately.
_main._last_eviction_ts = 0.0
_evict_expired_cache()
assert key_expired not in _main._search_result_cache
assert key_valid in _main._search_result_cache
def test_evict_is_rate_limited():
"""_evict_expired_cache should skip eviction if called within 60 s."""
import api.main as _main
from api.main import _cache_key, _evict_expired_cache
key_expired = _cache_key("stale", None, None, 1, "", "all", "", "")
_main._search_result_cache[key_expired] = (
{"listings": [], "market_price": None},
time.time() - 5.0,
)
# Pretend eviction just ran.
_main._last_eviction_ts = time.time()
_evict_expired_cache()
# Entry should still be present because eviction was throttled.
assert key_expired in _main._search_result_cache
# ── Integration tests — async endpoint cache hit ──────────────────────────────
def test_async_cache_hit_skips_scraper(client, tmp_path):
"""On a warm cache hit the scraper adapter must not be called."""
import threading
import api.main as _main
from api.main import _cache_key
# Pre-seed a valid cache entry.
key = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
_main._search_result_cache[key] = (
{"listings": [], "market_price": 250.0},
time.time() + 300.0,
)
scraper_called = threading.Event()
def _fake_search(query, filters):
scraper_called.set()
return []
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
patch("api.main.Store") as mock_store_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.side_effect = _fake_search
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
mock_store = MagicMock()
mock_store.get_listings_staged.return_value = {}
mock_store.refresh_seller_categories.return_value = 0
mock_store.save_listings.return_value = None
mock_store.save_trust_scores.return_value = None
mock_store.get_market_comp.return_value = None
mock_store.get_seller.return_value = None
mock_store.get_user_preference.return_value = None
mock_store_cls.return_value = mock_store
resp = client.get("/api/search/async?q=rtx+3080")
assert resp.status_code == 202
# Give the background worker a moment to run.
scraper_called.wait(timeout=3.0)
# Scraper must NOT have been called on a cache hit.
assert not scraper_called.is_set(), "Scraper was called despite a warm cache hit"
def test_async_cache_miss_stores_result(client, tmp_path):
"""After a cache miss the result must be stored in _search_result_cache."""
import threading
import api.main as _main
from api.main import _cache_key
search_done = threading.Event()
real_listing = _make_real_listing()
def _fake_search(query, filters):
return [real_listing]
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment") as mock_enrich,
patch("api.main.TrustScorer") as mock_scorer_cls,
patch("api.main.Store") as mock_store_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.side_effect = _fake_search
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
mock_store = MagicMock()
mock_store.get_listings_staged.return_value = {
real_listing.platform_listing_id: real_listing
}
mock_store.refresh_seller_categories.return_value = 0
mock_store.save_listings.return_value = None
mock_store.save_trust_scores.return_value = None
mock_store.get_market_comp.return_value = None
mock_store.get_seller.return_value = None
mock_store.get_user_preference.return_value = None
mock_store_cls.return_value = mock_store
def _enrich_side_effect(*args, **kwargs):
search_done.set()
mock_enrich.side_effect = _enrich_side_effect
resp = client.get("/api/search/async?q=rtx+3080")
assert resp.status_code == 202
# Wait until the background worker reaches _trigger_scraper_enrichment.
search_done.wait(timeout=5.0)
assert search_done.is_set(), "Background search worker never completed"
key = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
assert key in _main._search_result_cache, "Result was not stored in cache after miss"
payload, expiry = _main._search_result_cache[key]
assert expiry > time.time(), "Cache entry has already expired"
assert "listings" in payload
# ── Integration tests — async endpoint refresh=True ──────────────────────────
def test_async_refresh_bypasses_cache_read(client, tmp_path):
"""refresh=True must bypass cache read and invoke the scraper."""
import threading
import api.main as _main
from api.main import _cache_key
# Seed a valid cache entry so we can confirm it is bypassed.
key = _cache_key("rtx 3080", None, None, 1, "", "all", "", "")
_main._search_result_cache[key] = (
{"listings": [], "market_price": 100.0},
time.time() + 300.0,
)
scraper_called = threading.Event()
def _fake_search(query, filters):
scraper_called.set()
return []
with (
patch("api.main._make_adapter") as mock_adapter_factory,
patch("api.main._trigger_scraper_enrichment"),
patch("api.main.TrustScorer") as mock_scorer_cls,
patch("api.main.Store") as mock_store_cls,
):
mock_adapter = MagicMock()
mock_adapter.search.side_effect = _fake_search
mock_adapter.get_completed_sales.return_value = None
mock_adapter_factory.return_value = mock_adapter
mock_scorer = MagicMock()
mock_scorer.score_batch.return_value = []
mock_scorer_cls.return_value = mock_scorer
mock_store = MagicMock()
mock_store.get_listings_staged.return_value = {}
mock_store.refresh_seller_categories.return_value = 0
mock_store.save_listings.return_value = None
mock_store.save_trust_scores.return_value = None
mock_store.get_market_comp.return_value = None
mock_store.get_seller.return_value = None
mock_store.get_user_preference.return_value = None
mock_store_cls.return_value = mock_store
resp = client.get("/api/search/async?q=rtx+3080&refresh=true")
assert resp.status_code == 202
scraper_called.wait(timeout=5.0)
assert scraper_called.is_set(), "Scraper was not called even though refresh=True"


@@ -0,0 +1,372 @@
"""Tests for the background monitor: should_alert logic, store alert methods, and run_monitor_search."""
from __future__ import annotations
import sqlite3
from datetime import datetime, timedelta, timezone
from pathlib import Path
from unittest.mock import MagicMock, patch
import pytest
from app.tasks.monitor import _AUCTION_ALERT_WINDOW_HOURS, should_alert
# ---------------------------------------------------------------------------
# should_alert — pure function, no I/O
# ---------------------------------------------------------------------------
class TestShouldAlert:
def test_bin_above_threshold_alerts(self):
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="fixed_price",
min_trust_score=60,
) is True
def test_bin_below_threshold_no_alert(self):
assert should_alert(
trust_score=55, score_is_partial=False,
price=100.0, buying_format="fixed_price",
min_trust_score=60,
) is False
def test_partial_score_applies_buffer(self):
# Score 65 with min 60 passes normally but fails with the +10 partial buffer.
assert should_alert(
trust_score=65, score_is_partial=True,
price=100.0, buying_format="fixed_price",
min_trust_score=60,
) is False
def test_partial_score_above_buffered_threshold_alerts(self):
assert should_alert(
trust_score=75, score_is_partial=True,
price=100.0, buying_format="fixed_price",
min_trust_score=60,
) is True
def test_best_offer_treated_like_bin(self):
assert should_alert(
trust_score=80, score_is_partial=False,
price=200.0, buying_format="best_offer",
min_trust_score=60,
) is True
def test_auction_within_window_alerts(self):
soon = (datetime.now(timezone.utc) + timedelta(hours=12)).isoformat()
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="auction",
min_trust_score=60, ends_at=soon,
) is True
def test_auction_outside_window_no_alert(self):
far = (datetime.now(timezone.utc) + timedelta(hours=48)).isoformat()
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="auction",
min_trust_score=60, ends_at=far,
) is False
def test_auction_no_ends_at_alerts_anyway(self):
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="auction",
min_trust_score=60, ends_at=None,
) is True
def test_auction_bad_ends_at_alerts_anyway(self):
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="auction",
min_trust_score=60, ends_at="not-a-date",
) is True
def test_auction_expired_no_alert(self):
past = (datetime.now(timezone.utc) - timedelta(hours=1)).isoformat()
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="auction",
min_trust_score=60, ends_at=past,
) is False
def test_unknown_format_alerts(self):
# Fail-open: unknown buying_format should not silently suppress.
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="mystery_format",
min_trust_score=60,
) is True
def test_score_exactly_at_threshold_passes(self):
assert should_alert(
trust_score=60, score_is_partial=False,
price=100.0, buying_format="fixed_price",
min_trust_score=60,
) is True
def test_auction_exactly_at_window_boundary_alerts(self):
# 0.1 h inside the window, so clock drift at the exact boundary cannot make this flaky.
boundary = (datetime.now(timezone.utc) + timedelta(hours=_AUCTION_ALERT_WINDOW_HOURS - 0.1)).isoformat()
assert should_alert(
trust_score=70, score_is_partial=False,
price=100.0, buying_format="auction",
min_trust_score=60, ends_at=boundary,
) is True
# ---------------------------------------------------------------------------
# Store alert methods — integration against real SQLite
# ---------------------------------------------------------------------------
def _create_monitor_db(path: Path) -> None:
conn = sqlite3.connect(path)
conn.executescript("""
CREATE TABLE IF NOT EXISTS saved_searches (
id INTEGER PRIMARY KEY AUTOINCREMENT,
name TEXT NOT NULL,
query TEXT NOT NULL,
platform TEXT NOT NULL DEFAULT 'ebay',
filters_json TEXT,
created_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
last_run_at TEXT,
monitor_enabled INTEGER NOT NULL DEFAULT 0,
poll_interval_min INTEGER NOT NULL DEFAULT 60,
min_trust_score INTEGER NOT NULL DEFAULT 60,
last_checked_at TEXT
);
CREATE TABLE IF NOT EXISTS watch_alerts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
saved_search_id INTEGER NOT NULL REFERENCES saved_searches(id) ON DELETE CASCADE,
platform_listing_id TEXT NOT NULL,
title TEXT NOT NULL,
price REAL NOT NULL,
currency TEXT NOT NULL DEFAULT 'USD',
trust_score INTEGER NOT NULL,
url TEXT,
first_alerted_at TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
dismissed_at TEXT,
UNIQUE(saved_search_id, platform_listing_id)
);
INSERT INTO saved_searches (name, query, monitor_enabled) VALUES ('RTX 4090', 'rtx 4090', 1);
""")
conn.commit()
conn.close()
@pytest.fixture
def monitor_db(tmp_path: Path) -> Path:
db = tmp_path / "snipe.db"
_create_monitor_db(db)
return db
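The `UNIQUE(saved_search_id, platform_listing_id)` constraint in the schema above is what makes `upsert_alert` idempotent. A minimal `sqlite3` sketch of the dedup pattern the store tests rely on (the real `Store` method may use different SQL; `upsert_alert_sketch` is an illustrative name, not the shipped API):

```python
import sqlite3


def upsert_alert_sketch(conn: sqlite3.Connection, search_id: int,
                        listing_id: str, title: str, price: float,
                        trust_score: int) -> tuple[int, bool]:
    """Insert if new; return (row_id, is_new). Re-inserts hit the UNIQUE index."""
    cur = conn.execute(
        """INSERT INTO watch_alerts
               (saved_search_id, platform_listing_id, title, price, trust_score)
           VALUES (?, ?, ?, ?, ?)
           ON CONFLICT(saved_search_id, platform_listing_id) DO NOTHING""",
        (search_id, listing_id, title, price, trust_score),
    )
    if cur.rowcount == 1:
        return cur.lastrowid, True  # fresh row inserted
    # Conflict: fetch the id of the existing alert instead.
    row = conn.execute(
        "SELECT id FROM watch_alerts WHERE saved_search_id=? AND platform_listing_id=?",
        (search_id, listing_id),
    ).fetchone()
    return row[0], False
```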
class TestStoreAlertMethods:
def test_upsert_alert_new(self, monitor_db: Path):
from app.db.models import WatchAlert
from app.db.store import Store
store = Store(monitor_db)
alert = WatchAlert(
saved_search_id=1, platform_listing_id="ebay-001",
title="RTX 4090", price=750.0, trust_score=72, currency="USD",
url="https://ebay.com/itm/001",
)
alert_id, is_new = store.upsert_alert(alert)
assert is_new is True
assert alert_id > 0
def test_upsert_alert_dedup(self, monitor_db: Path):
from app.db.models import WatchAlert
from app.db.store import Store
store = Store(monitor_db)
alert = WatchAlert(
saved_search_id=1, platform_listing_id="ebay-002",
title="RTX 4090 FE", price=800.0, trust_score=68,
)
id1, new1 = store.upsert_alert(alert)
id2, new2 = store.upsert_alert(alert)
assert id1 == id2
assert new1 is True
assert new2 is False
def test_list_alerts_returns_undismissed(self, monitor_db: Path):
from app.db.models import WatchAlert
from app.db.store import Store
store = Store(monitor_db)
alert = WatchAlert(
saved_search_id=1, platform_listing_id="ebay-003",
title="Test listing", price=500.0, trust_score=75,
)
store.upsert_alert(alert)
alerts = store.list_alerts(include_dismissed=False)
assert len(alerts) == 1
assert alerts[0].platform_listing_id == "ebay-003"
def test_count_undismissed_alerts(self, monitor_db: Path):
from app.db.models import WatchAlert
from app.db.store import Store
store = Store(monitor_db)
for i in range(3):
store.upsert_alert(WatchAlert(
saved_search_id=1, platform_listing_id=f"ebay-{i:03d}",
title=f"Listing {i}", price=float(100 + i), trust_score=70,
))
assert store.count_undismissed_alerts() == 3
def test_dismiss_alert(self, monitor_db: Path):
from app.db.models import WatchAlert
from app.db.store import Store
store = Store(monitor_db)
alert = WatchAlert(
saved_search_id=1, platform_listing_id="ebay-dismiss",
title="To dismiss", price=400.0, trust_score=65,
)
alert_id, _ = store.upsert_alert(alert)
store.dismiss_alert(alert_id)
alerts = store.list_alerts(include_dismissed=False)
assert all(a.id != alert_id for a in alerts)
def test_dismiss_all_alerts(self, monitor_db: Path):
from app.db.models import WatchAlert
from app.db.store import Store
store = Store(monitor_db)
for i in range(3):
store.upsert_alert(WatchAlert(
saved_search_id=1, platform_listing_id=f"all-{i}",
title=f"All {i}", price=float(100 * i), trust_score=70,
))
count = store.dismiss_all_alerts()
assert count == 3
assert store.count_undismissed_alerts() == 0
def test_mark_search_checked_updates_timestamp(self, monitor_db: Path):
from app.db.store import Store
store = Store(monitor_db)
store.mark_search_checked(1)
searches = store.list_monitored_searches()
assert searches[0].last_checked_at is not None
# ---------------------------------------------------------------------------
# run_monitor_search — mocked adapter + trust aggregator
# ---------------------------------------------------------------------------
class TestRunMonitorSearch:
def test_new_qualifying_listing_creates_alert(self, monitor_db: Path):
from app.db.models import Listing, SavedSearch, TrustScore
from app.db.store import Store
from app.tasks.monitor import run_monitor_search
search = SavedSearch(
id=1, name="RTX 4090", query="rtx 4090",
platform="ebay", monitor_enabled=True,
min_trust_score=60,
)
mock_listing = Listing(
platform="ebay", platform_listing_id="ebay-new",
title="ASUS RTX 4090", price=750.0, currency="USD",
condition="used", url="https://ebay.com/itm/new",
buying_format="fixed_price", seller_platform_id="seller123",
)
mock_trust = TrustScore(
listing_id=0, composite_score=72, score_is_partial=False,
account_age_score=0, feedback_count_score=0, feedback_ratio_score=0,
price_vs_market_score=0, category_history_score=0,
)
with patch("app.platforms.ebay.adapter.EbayAdapter") as MockAdapter, \
patch("app.trust.TrustScorer") as MockAgg:
MockAdapter.return_value.search.return_value = [mock_listing]
MockAgg.return_value.score_batch.return_value = [mock_trust]
count = run_monitor_search(search, user_db=monitor_db, shared_db=monitor_db)
assert count == 1
alerts = Store(monitor_db).list_alerts()
assert len(alerts) == 1
assert alerts[0].platform_listing_id == "ebay-new"
def test_below_threshold_listing_not_alerted(self, monitor_db: Path):
from app.db.models import Listing, SavedSearch, TrustScore
from app.tasks.monitor import run_monitor_search
search = SavedSearch(
id=1, name="RTX 4090", query="rtx 4090",
platform="ebay", monitor_enabled=True,
min_trust_score=70,
)
mock_listing = Listing(
platform="ebay", platform_listing_id="ebay-low",
title="Sketchy RTX 4090", price=500.0, currency="USD",
condition="used", url="https://ebay.com/itm/low",
buying_format="fixed_price", seller_platform_id="s1",
)
mock_trust = TrustScore(
listing_id=0, composite_score=55, score_is_partial=False,
account_age_score=0, feedback_count_score=0, feedback_ratio_score=0,
price_vs_market_score=0, category_history_score=0,
)
with patch("app.platforms.ebay.adapter.EbayAdapter") as MockAdapter, \
patch("app.trust.TrustScorer") as MockAgg:
MockAdapter.return_value.search.return_value = [mock_listing]
MockAgg.return_value.score_batch.return_value = [mock_trust]
count = run_monitor_search(search, user_db=monitor_db, shared_db=monitor_db)
assert count == 0
def test_duplicate_listing_not_double_alerted(self, monitor_db: Path):
from app.db.models import Listing, SavedSearch, TrustScore
from app.tasks.monitor import run_monitor_search
search = SavedSearch(
id=1, name="RTX 4090", query="rtx 4090",
platform="ebay", monitor_enabled=True, min_trust_score=60,
)
mock_listing = Listing(
platform="ebay", platform_listing_id="ebay-dupe",
title="RTX 4090", price=700.0, currency="USD",
condition="used", url="https://ebay.com/itm/dupe",
buying_format="fixed_price", seller_platform_id="s1",
)
mock_trust = TrustScore(
listing_id=0, composite_score=75, score_is_partial=False,
account_age_score=0, feedback_count_score=0, feedback_ratio_score=0,
price_vs_market_score=0, category_history_score=0,
)
with patch("app.platforms.ebay.adapter.EbayAdapter") as MockAdapter, \
patch("app.trust.TrustScorer") as MockAgg:
MockAdapter.return_value.search.return_value = [mock_listing]
MockAgg.return_value.score_batch.return_value = [mock_trust]
count1 = run_monitor_search(search, user_db=monitor_db, shared_db=monitor_db)
count2 = run_monitor_search(search, user_db=monitor_db, shared_db=monitor_db)
assert count1 == 1
assert count2 == 0 # deduped by UNIQUE constraint
def test_adapter_failure_returns_zero(self, monitor_db: Path):
from app.db.models import SavedSearch
from app.tasks.monitor import run_monitor_search
search = SavedSearch(
id=1, name="RTX 4090", query="rtx 4090",
platform="ebay", monitor_enabled=True, min_trust_score=60,
)
with patch("app.platforms.ebay.adapter.EbayAdapter") as MockAdapter:
MockAdapter.return_value.search.side_effect = RuntimeError("eBay down")
count = run_monitor_search(search, user_db=monitor_db, shared_db=monitor_db)
assert count == 0


@@ -232,6 +232,46 @@ def test_significant_price_drop_not_flagged_when_no_prior_price():
assert "significant_price_drop" not in result.red_flags_json
# ── declining_ratio (high-volume seller edge case, snipe#52) ─────────────────
def test_declining_ratio_soft_flag_for_high_volume_seller():
"""High-volume seller (count > 500) with declining but not catastrophic ratio
gets declining_ratio soft flag, NOT the hard established_bad_actor flag.
Edge case: 12-month ratio may reflect only a small recent sample for sellers
with large lifetime feedback counts hard-flagging is disproportionate.
"""
agg = Aggregator()
scores = {k: 10 for k in ["account_age", "feedback_count",
"feedback_ratio", "price_vs_market", "category_history"]}
high_vol = Seller(
platform="ebay", platform_seller_id="u", username="u",
account_age_days=2000, feedback_count=800, # count > 500
feedback_ratio=0.75, # < 0.80 but > 0.60
category_history_json="{}",
)
result = agg.aggregate(scores, photo_hash_duplicate=False, seller=high_vol)
assert "declining_ratio" in result.red_flags_json
assert "established_bad_actor" not in result.red_flags_json
def test_established_bad_actor_still_fires_for_catastrophic_high_volume_ratio():
"""High-volume seller (count > 500) with catastrophically bad ratio (< 60%)
still gets the hard established_bad_actor flag not just declining_ratio."""
agg = Aggregator()
scores = {k: 10 for k in ["account_age", "feedback_count",
"feedback_ratio", "price_vs_market", "category_history"]}
bad_high_vol = Seller(
platform="ebay", platform_seller_id="u", username="u",
account_age_days=2000, feedback_count=800,
feedback_ratio=0.50, # < 0.60 threshold → still hard flag
category_history_json="{}",
)
result = agg.aggregate(scores, photo_hash_duplicate=False, seller=bad_high_vol)
assert "established_bad_actor" in result.red_flags_json
assert "declining_ratio" not in result.red_flags_json
# ── established retailer ──────────────────────────────────────────────────────
def test_established_retailer_suppresses_duplicate_photo():
@@ -256,3 +296,37 @@ def test_non_retailer_does_not_suppress_duplicate_photo():
)
result = agg.aggregate(_ALL_20.copy(), photo_hash_duplicate=True, seller=seller)
assert "duplicate_photo" in result.red_flags_json
# ── #52: buyer-only / returning seller (ratio=0.0, count>0) ──────────────────
def test_zero_ratio_with_count_gives_no_recent_seller_data_flag():
"""Seller with 117 lifetime feedbacks (buyer-only) has ratio=0.0 parsed from page.
Must get no_recent_seller_data soft flag, NOT established_bad_actor."""
agg = Aggregator()
scores = {k: 10 for k in ["account_age", "feedback_count",
"feedback_ratio", "price_vs_market", "category_history"]}
buyer_only = Seller(
platform="ebay", platform_seller_id="u", username="jjcpryz",
account_age_days=1200, feedback_count=117, feedback_ratio=0.0,
category_history_json="{}",
)
result = agg.aggregate(scores, photo_hash_duplicate=False, seller=buyer_only)
assert "no_recent_seller_data" in result.red_flags_json
assert "established_bad_actor" not in result.red_flags_json
def test_established_bad_actor_still_fires_for_genuinely_bad_ratio():
"""ratio=0.75 (not zero) with moderate count → established_bad_actor still fires."""
agg = Aggregator()
scores = {k: 10 for k in ["account_age", "feedback_count",
"feedback_ratio", "price_vs_market", "category_history"]}
bad = Seller(
platform="ebay", platform_seller_id="u", username="u",
account_age_days=500, feedback_count=100, feedback_ratio=0.75,
category_history_json="{}",
)
result = agg.aggregate(scores, photo_hash_duplicate=False, seller=bad)
assert "established_bad_actor" in result.red_flags_json
assert "no_recent_seller_data" not in result.red_flags_json


@@ -43,3 +43,26 @@ def test_no_market_data_returns_none():
scores = scorer.score(_seller(), market_median=None, listing_price=950.0)
# None signals "data unavailable" — aggregator will set score_is_partial=True
assert scores["price_vs_market"] is None
def test_zero_ratio_with_nonzero_count_returns_none():
"""ratio=0.0 with count>0 means eBay didn't show a 12-month percentage.
Must return None (missing data) not 0 (catastrophically bad)."""
scorer = MetadataScorer()
scores = scorer.score(
_seller(feedback_ratio=0.0, feedback_count=117),
market_median=None, listing_price=500.0,
)
assert scores["feedback_ratio"] is None
def test_zero_ratio_with_zero_count_scores_low():
"""feedback_ratio=0.0 with count=0 is a real 'no data at all' case, not missing."""
scorer = MetadataScorer()
scores = scorer.score(
_seller(feedback_ratio=0.0, feedback_count=0),
market_median=None, listing_price=500.0,
)
# count=0 means zero_feedback; ratio=0 with count=0 is the standard no-history path
# (not the "missing 12-month window" path)
assert scores["feedback_ratio"] == 5 # ratio < 0.90 → 5


@@ -21,6 +21,7 @@ import { useMotion } from './composables/useMotion'
import { useSnipeMode } from './composables/useSnipeMode'
import { useTheme } from './composables/useTheme'
import { useKonamiCode } from './composables/useKonamiCode'
import { useCandycoreMode } from './composables/useCandycoreMode'
import { useSessionStore } from './stores/session'
import { useBlocklistStore } from './stores/blocklist'
import { usePreferencesStore } from './stores/preferences'
@@ -31,6 +32,8 @@ import FeedbackButton from './components/FeedbackButton.vue'
const motion = useMotion()
const { activate, restore } = useSnipeMode()
const { restore: restoreTheme } = useTheme()
const { restore: restoreCandy, useWordTrigger } = useCandycoreMode()
useWordTrigger()
const session = useSessionStore()
const blocklistStore = useBlocklistStore()
const preferencesStore = usePreferencesStore()
@@ -42,6 +45,7 @@ useKonamiCode(activate)
onMounted(async () => {
restore() // re-apply snipe mode from localStorage on hard reload
restoreTheme() // re-apply explicit theme override on hard reload
restoreCandy() // re-apply candycore mode from localStorage on hard reload
await session.bootstrap() // fetch tier + feature flags from API
blocklistStore.fetchBlocklist() // pre-load so card block buttons reflect state immediately
preferencesStore.load() // load user preferences after session resolves
@@ -57,6 +61,12 @@ onMounted(async () => {
padding: 0;
}
/* Global keyboard focus indicator — safety net so no stylesheet can silently remove focus rings */
:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
html {
font-family: var(--font-body, sans-serif);
color: var(--color-text, #e6edf3);

@@ -87,7 +87,7 @@
Snipe Mode data attribute overrides this via higher specificity.
*/
/* Explicit dark override — beats OS preference when user forces dark in Settings */
[data-theme="dark"]:not([data-snipe-mode="active"]) {
[data-theme="dark"]:not([data-snipe-mode="active"]):not([data-candycore="active"]) {
--color-surface: #0d1117;
--color-surface-2: #161b22;
--color-surface-raised: #1c2129;
@@ -113,7 +113,7 @@
}
@media (prefers-color-scheme: light) {
:root:not([data-theme="dark"]):not([data-snipe-mode="active"]) {
:root:not([data-theme="dark"]):not([data-snipe-mode="active"]):not([data-candycore="active"]) {
/* Surfaces — warm cream, like a tactical field notebook */
--color-surface: #f8f5ee;
--color-surface-2: #f0ece3;
@@ -153,7 +153,7 @@
}
/* Explicit light override — beats OS preference when user forces light in Settings */
[data-theme="light"]:not([data-snipe-mode="active"]) {
[data-theme="light"]:not([data-snipe-mode="active"]):not([data-candycore="active"]) {
--color-surface: #f8f5ee;
--color-surface-2: #f0ece3;
--color-surface-raised: #e8e3d8;
@@ -178,6 +178,56 @@
--shadow-lg: 0 10px 30px rgba(60,45,20,0.2), 0 4px 8px rgba(60,45,20,0.1);
}
/* Candycore easter egg theme
Activated by typing "neon" outside a form field (tribute to artist Neon).
Palette sourced from snipe_v0_Neon_IPad_Paint.jpeg:
purple-black sky + lavender primary + cyan glow + yellow crown + pink text.
Stored as 'cf-candycore' in localStorage.
Applied: document.documentElement.dataset.candycore = 'active'
NOTE: Snipe Mode is declared last and overrides this when both are active.
*/
[data-candycore="active"] {
--app-primary: #c77dff;
--app-primary-hover: #a855f7;
--app-primary-light: rgba(199, 125, 255, 0.15);
/* Purple-black night sky */
--color-surface: #08051a;
--color-surface-2: #100d28;
--color-surface-raised: #1a1248;
/* Purple glow borders */
--color-border: rgba(199, 125, 255, 0.20);
--color-border-light: rgba(199, 125, 255, 0.10);
/* Candy-floss text — pink-white, muted bubblegum */
--color-text: #ffd6f5;
--color-text-muted: #f09099;
--color-text-inverse: #08051a;
/* Trust signals — straight from the painting */
--trust-high: #00c8e0; /* cyan (outline glow) = good */
--trust-mid: #ffe520; /* yellow (crown stripe) = caution */
--trust-low: #ff6eb4; /* hot pink = danger */
/* Semantic */
--color-success: #00c8e0;
--color-error: #ff6eb4;
--color-warning: #ffe520;
--color-info: #c77dff;
--color-accent: #00c8e0; /* cyan accent */
/* Purple glow shadows */
--shadow-sm: 0 1px 3px rgba(199, 125, 255, 0.12);
--shadow-md: 0 4px 12px rgba(199, 125, 255, 0.20);
--shadow-lg: 0 10px 30px rgba(199, 125, 255, 0.28);
/* Glow helpers (used in scoped styles if needed) */
--candy-glow-xs: rgba(199, 125, 255, 0.08);
--candy-glow-sm: rgba(199, 125, 255, 0.18);
--candy-glow-md: rgba(199, 125, 255, 0.45);
}
/* ── Snipe Mode easter egg theme ─────────────────── */
/* Activated by Konami code; stored as 'cf-snipe-mode' in localStorage */
/* Applied: document.documentElement.dataset.snipeMode = 'active' */

@@ -0,0 +1,398 @@
<template>
<div class="alert-bell-wrap" ref="wrapRef">
<!-- Bell trigger button -->
<button
ref="bellRef"
class="alert-bell"
:class="{ 'alert-bell--active': panelOpen }"
:aria-label="unreadCount > 0 ? `${unreadCount} new watch alert${unreadCount === 1 ? '' : 's'}` : 'Watch alerts'"
:aria-expanded="panelOpen"
aria-haspopup="true"
@click="togglePanel"
>
<BellIcon class="alert-bell__icon" aria-hidden="true" />
<span
v-if="unreadCount > 0"
class="alert-badge"
aria-hidden="true"
>{{ unreadCount > 99 ? '99+' : unreadCount }}</span>
</button>
<!-- Polite live region announces count changes without moving focus -->
<div aria-live="polite" aria-atomic="true" class="sr-only">
{{ liveAnnouncement }}
</div>
<!-- Alert panel -->
<Transition name="panel">
<div
v-if="panelOpen"
class="alert-panel"
role="dialog"
aria-label="Watch alerts"
aria-modal="false"
>
<div class="alert-panel__header">
<span class="alert-panel__title">Watch Alerts</span>
<button
v-if="store.alerts.length > 0"
class="alert-panel__clear"
@click="onDismissAll"
>
Clear all
</button>
<button
class="alert-panel__close"
aria-label="Close alerts panel"
@click="closePanel"
>
✕
</button>
</div>
<div v-if="store.loading" class="alert-panel__state">
Loading…
</div>
<div v-else-if="store.alerts.length === 0" class="alert-panel__state">
<span aria-hidden="true">🔔</span>
<p>No new alerts. Enable monitoring on a saved search to get notified.</p>
</div>
<ul v-else class="alert-list" role="list">
<li
v-for="alert in store.alerts"
:key="alert.id"
class="alert-card"
>
<div class="alert-card__body">
<p class="alert-card__title">{{ alert.title }}</p>
<div class="alert-card__meta">
<span class="alert-card__price">${{ alert.price.toFixed(2) }}</span>
<span class="alert-card__score" :class="scoreClass(alert.trust_score)">
Trust {{ alert.trust_score }}
</span>
</div>
</div>
<div class="alert-card__actions">
<a
v-if="alert.url"
:href="alert.url"
target="_blank"
rel="noopener noreferrer"
class="alert-card__view"
:aria-label="`View listing: ${alert.title}`"
>
View on eBay
</a>
<button
class="alert-card__dismiss"
:aria-label="`Dismiss alert: ${alert.title}`"
@click="onDismiss(alert.id)"
@keydown.delete.prevent="onDismiss(alert.id)"
>
✕
</button>
</div>
</li>
</ul>
</div>
</Transition>
</div>
</template>
<script setup lang="ts">
import { computed, onBeforeUnmount, onMounted, ref, watch } from 'vue'
import { BellIcon } from '@heroicons/vue/24/outline'
import { useAlertsStore } from '../stores/alerts'
const store = useAlertsStore()
const panelOpen = ref(false)
const bellRef = ref<HTMLButtonElement | null>(null)
const wrapRef = ref<HTMLDivElement | null>(null)
const liveAnnouncement = ref('')
const unreadCount = computed(() => store.unreadCount)
// Announce count changes to screen readers via the polite live region.
watch(unreadCount, (count, prev) => {
if (count > prev) {
liveAnnouncement.value = `${count} new watch alert${count === 1 ? '' : 's'}`
// Reset after announcement so repeat counts still fire.
setTimeout(() => { liveAnnouncement.value = '' }, 1500)
}
})
function togglePanel() {
panelOpen.value = !panelOpen.value
if (panelOpen.value) store.fetchAlerts()
}
function closePanel() {
panelOpen.value = false
bellRef.value?.focus()
}
async function onDismiss(id: number) {
await store.dismiss(id)
if (store.alerts.length === 0) {
// Return focus to bell when last alert is dismissed.
panelOpen.value = false
bellRef.value?.focus()
}
}
async function onDismissAll() {
await store.dismissAll()
panelOpen.value = false
bellRef.value?.focus()
}
function scoreClass(score: number) {
if (score >= 75) return 'score--high'
if (score >= 50) return 'score--medium'
return 'score--low'
}
// Close on outside click.
function handleOutsideClick(e: MouseEvent) {
if (wrapRef.value && !wrapRef.value.contains(e.target as Node)) {
panelOpen.value = false
}
}
onMounted(() => {
store.fetchAlerts()
// Poll for new alerts every 2 minutes while the app is open.
const interval = setInterval(() => store.fetchAlerts(), 120_000)
document.addEventListener('click', handleOutsideClick)
onBeforeUnmount(() => {
clearInterval(interval)
document.removeEventListener('click', handleOutsideClick)
})
})
</script>
<style scoped>
.sr-only {
position: absolute;
width: 1px;
height: 1px;
padding: 0;
margin: -1px;
overflow: hidden;
clip: rect(0, 0, 0, 0);
white-space: nowrap;
border: 0;
}
.alert-bell-wrap {
position: relative;
}
.alert-bell {
position: relative;
display: flex;
align-items: center;
justify-content: center;
width: 40px;
height: 40px;
background: transparent;
border: 1px solid var(--color-border);
border-radius: var(--radius-md);
color: var(--color-text-muted);
cursor: pointer;
transition: border-color 150ms ease, color 150ms ease, background 150ms ease;
}
.alert-bell:hover,
.alert-bell--active {
border-color: var(--app-primary);
color: var(--app-primary);
background: var(--app-primary-light);
}
.alert-bell__icon {
width: 1.25rem;
height: 1.25rem;
}
.alert-badge {
position: absolute;
top: -6px;
right: -6px;
min-width: 18px;
height: 18px;
padding: 0 4px;
background: var(--color-error, #ef4444);
color: #fff;
font-size: 0.625rem;
font-weight: 700;
font-family: var(--font-mono);
border-radius: 9px;
display: flex;
align-items: center;
justify-content: center;
line-height: 1;
}
/* Panel */
.alert-panel {
position: absolute;
top: calc(100% + 8px);
right: 0;
width: min(360px, 92vw);
background: var(--color-surface-2);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
box-shadow: 0 8px 24px rgba(0, 0, 0, 0.4);
z-index: 200;
overflow: hidden;
}
.alert-panel__header {
display: flex;
align-items: center;
gap: var(--space-2);
padding: var(--space-3) var(--space-4);
border-bottom: 1px solid var(--color-border);
}
.alert-panel__title {
flex: 1;
font-weight: 600;
font-size: 0.875rem;
color: var(--color-text);
}
.alert-panel__clear {
font-size: 0.75rem;
color: var(--color-text-muted);
background: none;
border: none;
cursor: pointer;
padding: var(--space-1) var(--space-2);
border-radius: var(--radius-sm);
transition: color 150ms ease;
}
.alert-panel__clear:hover { color: var(--color-error); }
.alert-panel__close {
background: none;
border: none;
color: var(--color-text-muted);
font-size: 0.75rem;
cursor: pointer;
padding: var(--space-1);
border-radius: var(--radius-sm);
line-height: 1;
transition: color 150ms ease;
min-width: 24px;
min-height: 24px;
}
.alert-panel__close:hover { color: var(--color-error); }
.alert-panel__state {
padding: var(--space-8) var(--space-4);
text-align: center;
color: var(--color-text-muted);
font-size: 0.875rem;
display: flex;
flex-direction: column;
align-items: center;
gap: var(--space-2);
}
/* Alert list */
.alert-list {
list-style: none;
max-height: 360px;
overflow-y: auto;
}
.alert-card {
display: flex;
align-items: center;
gap: var(--space-3);
padding: var(--space-3) var(--space-4);
border-bottom: 1px solid var(--color-border-light);
transition: background 150ms ease;
}
.alert-card:hover { background: var(--color-surface); }
.alert-card:last-child { border-bottom: none; }
.alert-card__body { flex: 1; min-width: 0; }
.alert-card__title {
font-size: 0.8125rem;
color: var(--color-text);
margin: 0 0 var(--space-1);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.alert-card__meta {
display: flex;
gap: var(--space-2);
align-items: center;
}
.alert-card__price {
font-family: var(--font-mono);
font-size: 0.75rem;
color: var(--app-primary);
}
.alert-card__score {
font-size: 0.6875rem;
font-weight: 600;
padding: 1px var(--space-2);
border-radius: var(--radius-sm);
}
.score--high { background: rgba(34,197,94,0.15); color: #22c55e; }
.score--medium { background: rgba(234,179,8,0.15); color: #eab308; }
.score--low { background: rgba(239,68,68,0.15); color: #ef4444; }
.alert-card__actions {
display: flex;
align-items: center;
gap: var(--space-2);
flex-shrink: 0;
}
.alert-card__view {
font-size: 0.75rem;
color: var(--app-primary);
text-decoration: none;
white-space: nowrap;
padding: var(--space-1) var(--space-2);
border-radius: var(--radius-sm);
transition: background 150ms ease;
}
.alert-card__view:hover { background: var(--app-primary-light); }
.alert-card__dismiss {
background: none;
border: 1px solid var(--color-border);
border-radius: var(--radius-sm);
color: var(--color-text-muted);
font-size: 0.625rem;
cursor: pointer;
min-width: 24px;
min-height: 24px;
transition: border-color 150ms ease, color 150ms ease;
}
.alert-card__dismiss:hover { border-color: var(--color-error); color: var(--color-error); }
/* Transition */
.panel-enter-active,
.panel-leave-active { transition: opacity 120ms ease, transform 120ms ease; }
.panel-enter-from,
.panel-leave-to { opacity: 0; transform: translateY(-6px); }
@media (prefers-reduced-motion: reduce) {
.panel-enter-active,
.panel-leave-active { transition: none; }
}
</style>

@@ -1,7 +1,7 @@
<template>
<!-- Desktop: persistent sidebar (1024px) -->
<!-- Mobile: bottom tab bar (<1024px) -->
<nav class="app-sidebar" role="navigation" aria-label="Main navigation">
<nav class="app-sidebar" role="navigation" aria-label="Sidebar">
<!-- Brand -->
<div class="sidebar__brand">
<RouterLink to="/" class="sidebar__logo">
@@ -32,17 +32,20 @@
</button>
</div>
<!-- Settings at bottom -->
<!-- Settings + alert bell at bottom -->
<div class="sidebar__footer">
<div class="sidebar__footer-row">
<RouterLink to="/settings" class="sidebar__link sidebar__link--footer" active-class="sidebar__link--active">
<Cog6ToothIcon class="sidebar__icon" aria-hidden="true" />
<span class="sidebar__label">Settings</span>
</RouterLink>
<AlertBell v-if="session.isLoggedIn || session.tier === 'local'" class="sidebar__bell" />
</div>
</div>
</nav>
<!-- Mobile bottom tab bar -->
<nav class="app-tabbar" role="navigation" aria-label="Main navigation">
<nav class="app-tabbar" role="navigation" aria-label="Tab bar">
<ul class="tabbar__links" role="list">
<li v-for="link in mobileLinks" :key="link.to">
<RouterLink
@@ -69,8 +72,11 @@ import {
ShieldExclamationIcon,
} from '@heroicons/vue/24/outline'
import { useSnipeMode } from '../composables/useSnipeMode'
import { useSessionStore } from '../stores/session'
import AlertBell from './AlertBell.vue'
const { active: isSnipeMode, deactivate } = useSnipeMode()
const session = useSessionStore()
const navLinks = computed(() => [
{ to: '/', icon: MagnifyingGlassIcon, label: 'Search' },
@@ -81,7 +87,7 @@ const navLinks = computed(() => [
const mobileLinks = [
{ to: '/', icon: MagnifyingGlassIcon, label: 'Search' },
{ to: '/saved', icon: BookmarkIcon, label: 'Saved' },
{ to: '/blocklist', icon: ShieldExclamationIcon, label: 'Block' },
{ to: '/blocklist', icon: ShieldExclamationIcon, label: 'Blocklist' },
{ to: '/settings', icon: Cog6ToothIcon, label: 'Settings' },
]
</script>
@@ -202,6 +208,20 @@ const mobileLinks = [
border-top: 1px solid var(--color-border-light);
}
.sidebar__footer-row {
display: flex;
align-items: center;
gap: var(--space-2);
}
.sidebar__footer-row .sidebar__link {
flex: 1;
}
.sidebar__bell {
flex-shrink: 0;
}
/* ── Mobile tab bar (<1024px) ───────────────────────── */
.app-tabbar {
display: none;

@@ -0,0 +1,181 @@
<template>
<div class="search-progress" role="status" aria-label="Search in progress" aria-live="polite">
<!-- Indeterminate progress bar -->
<div class="progress-track" aria-hidden="true">
<div class="progress-bar"></div>
</div>
<!-- Status line -->
<p class="progress-label">
Searching <strong>{{ platformLabel }}</strong> for <strong>{{ query }}</strong>
</p>
<!-- Skeleton listing cards -->
<div class="skeleton-list" aria-hidden="true">
<div v-for="n in 4" :key="n" class="skeleton-card">
<div class="skeleton-thumb"></div>
<div class="skeleton-body">
<div class="skeleton-line skeleton-line--title"></div>
<div class="skeleton-line skeleton-line--meta"></div>
<div class="skeleton-footer">
<div class="skeleton-chip"></div>
<div class="skeleton-chip skeleton-chip--price"></div>
</div>
</div>
</div>
</div>
</div>
</template>
<script setup lang="ts">
import { computed } from 'vue'
const props = defineProps<{ query: string; platform?: string }>()
const PLATFORM_LABELS: Record<string, string> = {
ebay: 'eBay',
mercari: 'Mercari',
poshmark: 'Poshmark',
}
const platformLabel = computed(() =>
PLATFORM_LABELS[props.platform ?? 'ebay'] ?? props.platform ?? 'eBay'
)
</script>
<style scoped>
.search-progress {
padding: var(--space-6);
display: flex;
flex-direction: column;
gap: var(--space-5);
}
/* ── Indeterminate progress bar ───────────────── */
.progress-track {
height: 3px;
background: var(--color-surface-raised);
border-radius: var(--radius-full);
overflow: hidden;
}
.progress-bar {
height: 100%;
width: 40%;
background: var(--app-primary);
border-radius: var(--radius-full);
animation: progress-slide 1.6s ease-in-out infinite;
}
@keyframes progress-slide {
0% { transform: translateX(-100%); }
100% { transform: translateX(300%); }
}
/* ── Status label ─────────────────────────────── */
.progress-label {
font-size: 0.875rem;
color: var(--color-text-muted);
margin: 0;
}
.progress-label strong {
color: var(--color-text);
font-weight: 600;
}
/* ── Skeleton cards ───────────────────────────── */
.skeleton-list {
display: flex;
flex-direction: column;
gap: var(--space-3);
}
.skeleton-card {
display: flex;
gap: var(--space-4);
padding: var(--space-4);
background: var(--color-surface-2);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
}
.skeleton-thumb {
width: 100px;
height: 80px;
flex-shrink: 0;
background: var(--color-surface-raised);
border-radius: var(--radius-md);
animation: shimmer 1.8s ease-in-out infinite;
}
.skeleton-body {
flex: 1;
min-width: 0;
display: flex;
flex-direction: column;
gap: var(--space-3);
justify-content: center;
}
.skeleton-line {
height: 12px;
background: var(--color-surface-raised);
border-radius: var(--radius-sm);
animation: shimmer 1.8s ease-in-out infinite;
}
.skeleton-line--title {
width: 70%;
height: 14px;
}
.skeleton-line--meta {
width: 45%;
}
.skeleton-footer {
display: flex;
gap: var(--space-2);
margin-top: var(--space-1);
}
.skeleton-chip {
height: 22px;
width: 64px;
background: var(--color-surface-raised);
border-radius: var(--radius-full);
animation: shimmer 1.8s ease-in-out infinite;
}
.skeleton-chip--price {
width: 80px;
}
/* Stagger shimmer so cards don't all pulse in sync */
.skeleton-card:nth-child(2) .skeleton-line,
.skeleton-card:nth-child(2) .skeleton-thumb,
.skeleton-card:nth-child(2) .skeleton-chip { animation-delay: 0.15s; }
.skeleton-card:nth-child(3) .skeleton-line,
.skeleton-card:nth-child(3) .skeleton-thumb,
.skeleton-card:nth-child(3) .skeleton-chip { animation-delay: 0.3s; }
.skeleton-card:nth-child(4) .skeleton-line,
.skeleton-card:nth-child(4) .skeleton-thumb,
.skeleton-card:nth-child(4) .skeleton-chip { animation-delay: 0.45s; }
@keyframes shimmer {
0%, 100% { opacity: 1; }
50% { opacity: 0.4; }
}
@media (max-width: 480px) {
.skeleton-thumb {
width: 72px;
height: 60px;
}
.skeleton-line--title { width: 85%; }
}
</style>

@@ -0,0 +1,98 @@
import { ref, onMounted, onUnmounted } from 'vue'
import { useSnipeMode } from './useSnipeMode'
const LS_KEY = 'cf-candycore'
const DATA_ATTR = 'candycore'
// Module-level ref — shared across all callers
const active = ref(false)
/**
* Candycore easter egg theme activated by typing "neon" outside a form field.
* Tribute to artist Neon, whose iPad painting (snipe_v0_Neon_IPad_Paint.jpeg)
* defined the candy palette: lavender primary, cyan glow, yellow crown, bubblegum pink.
*
* Mutually exclusive with Snipe Mode (each deactivates the other).
* Stores state in localStorage under 'cf-candycore'.
*/
export function useCandycoreMode() {
const snipe = useSnipeMode(false /* no sound on deactivate */)
function _playCandySound() {
try {
const ctx = new AudioContext()
// Ascending arpeggio: C5 → E5 → G5 → C6
const notes = [523.25, 659.25, 783.99, 1046.50]
const step = 0.08
notes.forEach((freq, i) => {
const t = ctx.currentTime + i * step
const osc = ctx.createOscillator()
const gain = ctx.createGain()
osc.type = 'sine'
osc.frequency.setValueAtTime(freq, t)
gain.gain.setValueAtTime(0, t)
gain.gain.linearRampToValueAtTime(0.22, t + 0.01)
gain.gain.exponentialRampToValueAtTime(0.001, t + step * 1.4)
osc.connect(gain)
gain.connect(ctx.destination)
osc.start(t)
osc.stop(t + step * 1.5)
})
setTimeout(() => ctx.close(), (notes.length * step + 0.3) * 1000)
} catch {
// Web Audio API unavailable
}
}
function activate() {
// Deactivate Snipe Mode if it's running — can't have both
if (snipe.active.value) snipe.deactivate()
active.value = true
document.documentElement.dataset[DATA_ATTR] = 'active'
localStorage.setItem(LS_KEY, 'active')
_playCandySound()
}
function deactivate() {
active.value = false
delete document.documentElement.dataset[DATA_ATTR]
localStorage.removeItem(LS_KEY)
}
function restore() {
if (localStorage.getItem(LS_KEY) === 'active') {
active.value = true
document.documentElement.dataset[DATA_ATTR] = 'active'
}
}
/**
* Registers a document keydown listener that fires activate() when the user
* types "neon" outside of any form field. Call from component setup().
* The listener is automatically removed when the calling component unmounts.
*/
function useWordTrigger() {
const TARGET = 'neon'
let buffer = ''
function handleKey(e: KeyboardEvent) {
const tag = (e.target as HTMLElement | null)?.tagName ?? ''
if (tag === 'INPUT' || tag === 'TEXTAREA' || tag === 'SELECT') return
if (e.key.length !== 1) return // skip modifier/arrow keys
buffer = (buffer + e.key.toLowerCase()).slice(-TARGET.length)
if (buffer === TARGET) {
buffer = ''
if (active.value) deactivate()
else activate()
}
}
onMounted(() => document.addEventListener('keydown', handleKey))
onUnmounted(() => document.removeEventListener('keydown', handleKey))
}
return { active, activate, deactivate, restore, useWordTrigger }
}

@@ -58,6 +58,9 @@ export function useSnipeMode(audioEnabled = true) {
}
function activate() {
// Clear candycore if it's on — can't have both
delete document.documentElement.dataset.candycore
localStorage.removeItem('cf-candycore')
active.value = true
document.documentElement.dataset[DATA_ATTR] = 'active'
localStorage.setItem(LS_KEY, 'active')

web/src/stores/alerts.ts (new file, 63 lines)
@@ -0,0 +1,63 @@
import { defineStore } from 'pinia'
import { ref } from 'vue'
export interface WatchAlert {
id: number
saved_search_id: number
platform_listing_id: string
title: string
price: number
currency: string
trust_score: number
url: string | null
first_alerted_at: string
dismissed_at: string | null
}
const BASE = import.meta.env.VITE_API_BASE ?? ''
export const useAlertsStore = defineStore('alerts', () => {
const alerts = ref<WatchAlert[]>([])
const unreadCount = ref(0)
const loading = ref(false)
const error = ref<string | null>(null)
async function fetchAlerts(includeDismissed = false) {
loading.value = true
error.value = null
try {
const res = await fetch(
`${BASE}/api/alerts${includeDismissed ? '?include_dismissed=true' : ''}`,
{ credentials: 'include' },
)
if (!res.ok) throw new Error(`${res.status}`)
const data = await res.json()
alerts.value = data.alerts
unreadCount.value = data.unread_count
} catch (e) {
error.value = e instanceof Error ? e.message : 'Failed to load alerts'
} finally {
loading.value = false
}
}
async function dismiss(alertId: number) {
await fetch(`${BASE}/api/alerts/${alertId}/dismiss`, {
method: 'POST',
credentials: 'include',
})
alerts.value = alerts.value.filter((a) => a.id !== alertId)
unreadCount.value = Math.max(0, unreadCount.value - 1)
}
async function dismissAll() {
await fetch(`${BASE}/api/alerts/dismiss-all`, {
method: 'POST',
credentials: 'include',
})
alerts.value = []
unreadCount.value = 0
}
return { alerts, unreadCount, loading, error, fetchAlerts, dismiss, dismissAll }
})

@@ -59,6 +59,11 @@ export interface SavedSearch {
filters_json: string // JSON blob of SearchFilters subset
created_at: string | null
last_run_at: string | null
// Monitor settings (migration 014)
monitor_enabled: boolean
poll_interval_min: number
min_trust_score: number
last_checked_at: string | null
}
export interface SearchParamsResult {
@@ -93,6 +98,7 @@ export interface SearchFilters {
mustExclude?: string // comma-separated; forwarded to eBay -term AND client-side
categoryId?: string // eBay category ID (e.g. "27386" = Graphics/Video Cards)
adapter?: 'auto' | 'api' | 'scraper' // override adapter selection
platform?: string // target platform; defaults to 'ebay' when omitted
}
// ── Session cache ─────────────────────────────────────────────────────────────
@@ -173,6 +179,7 @@ export const useSearchStore = defineStore('search', () => {
if (filters.mustExclude?.trim()) params.set('must_exclude', filters.mustExclude.trim())
if (filters.categoryId?.trim()) params.set('category_id', filters.categoryId.trim())
if (filters.adapter && filters.adapter !== 'auto') params.set('adapter', filters.adapter)
if (filters.platform && filters.platform !== 'ebay') params.set('platform', filters.platform)
// Use the async endpoint: returns 202 immediately with a session_id, then
// streams listings + trust scores via SSE as the scrape completes.
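The comment above describes the async search flow: the POST returns 202 with a session id, and results arrive over SSE. A minimal client sketch of that pattern might look like the following — the `/api/search` paths, the `session_id` field, and the `data:`-line parsing are assumptions for illustration, not the project's verified API:

```typescript
// Hypothetical consumer of a "202 + SSE" search endpoint. Endpoint paths and
// payload shapes are illustrative; the real store builds its own params/URL.
async function streamSearch(query: string, onListing: (l: unknown) => void): Promise<void> {
  // Kick off the scrape: server replies 202 Accepted with a session handle.
  const res = await fetch(`/api/search?q=${encodeURIComponent(query)}`, { method: 'POST' })
  if (res.status !== 202) throw new Error(`expected 202, got ${res.status}`)
  const { session_id } = await res.json()

  // Read the SSE stream incrementally; each "data:" line is one scored listing.
  const stream = await fetch(`/api/search/${session_id}/events`)
  const reader = stream.body!.getReader()
  const decoder = new TextDecoder()
  let buf = ''
  for (;;) {
    const { done, value } = await reader.read()
    if (done) break
    buf += decoder.decode(value, { stream: true })
    const lines = buf.split('\n')
    buf = lines.pop() ?? '' // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.startsWith('data:')) onListing(JSON.parse(line.slice(5)))
    }
  }
}
```

Using `fetch` plus a stream reader (rather than `EventSource`) keeps the sketch dependency-free, at the cost of hand-parsing SSE framing.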

@@ -31,8 +31,63 @@
<span v-if="item.last_run_at">Last run {{ formatDate(item.last_run_at) }}</span>
<span v-else>Never run</span>
· Saved {{ formatDate(item.created_at) }}
<span v-if="item.last_checked_at" class="saved-card-checked">
· Monitored {{ formatDate(item.last_checked_at) }}
</span>
</p>
</div>
<div class="saved-card-right">
<!-- Monitor toggle only shown to paid+ users -->
<div v-if="session.isPaid || session.tier === 'local'" class="monitor-section">
<label class="monitor-toggle-label">
<input
type="checkbox"
class="monitor-toggle-input"
:checked="item.monitor_enabled"
:aria-label="`Monitor ${item.name}`"
@change="onToggleMonitor(item, ($event.target as HTMLInputElement).checked)"
/>
<span class="monitor-toggle-track" aria-hidden="true" />
<span class="monitor-toggle-text">Monitor</span>
</label>
<!-- Inline settings only when enabled -->
<Transition name="slide">
<div v-if="item.monitor_enabled" class="monitor-settings">
<label class="monitor-setting-label">
Check every
<input
type="number"
class="monitor-setting-input"
:value="item.poll_interval_min"
min="15"
max="1440"
step="15"
:aria-label="`Poll interval for ${item.name} in minutes`"
@change="onIntervalChange(item, ($event.target as HTMLInputElement).valueAsNumber)"
/>
min
<span class="monitor-hint">Min 15. 60 = hourly.</span>
</label>
<label class="monitor-setting-label">
Trust
<input
type="number"
class="monitor-setting-input"
:value="item.min_trust_score"
min="0"
max="100"
step="5"
:aria-label="`Minimum trust score for ${item.name}`"
@change="onThresholdChange(item, ($event.target as HTMLInputElement).valueAsNumber)"
/>
<span class="monitor-hint">0–100. 60 = medium confidence.</span>
</label>
</div>
</Transition>
</div>
<div class="saved-card-actions">
<button class="saved-run-btn" type="button" @click="onRun(item)">
Run
@@ -41,31 +96,47 @@
class="saved-delete-btn"
type="button"
:aria-label="`Delete saved search: ${item.name}`"
@click="onDelete(item.id)"
@click="onDelete(item)"
>
✕
</button>
</div>
</div>
</li>
</ul>
<!-- Undo toast for delete -->
<Transition name="toast">
<div v-if="pendingDelete" class="undo-toast" role="status" aria-live="polite">
<span>Deleted "{{ pendingDelete.name }}"</span>
<button class="undo-btn" @click="onUndoDelete">Undo</button>
</div>
</Transition>
</div>
</template>
<script setup lang="ts">
import { onMounted } from 'vue'
import { onMounted, ref } from 'vue'
import { useRouter, RouterLink } from 'vue-router'
import { useSavedSearchesStore } from '../stores/savedSearches'
import { useSessionStore } from '../stores/session'
import type { SavedSearch } from '../stores/savedSearches'
const store = useSavedSearchesStore()
const session = useSessionStore()
const router = useRouter()
const BASE = import.meta.env.VITE_API_BASE ?? ''
// Soft-delete state holds for 3 seconds before committing
const pendingDelete = ref<SavedSearch | null>(null)
let deleteTimer: ReturnType<typeof setTimeout> | null = null
onMounted(() => store.fetchAll())
function formatDate(iso: string | null): string {
if (!iso) return '—'
const d = new Date(iso)
return d.toLocaleDateString(undefined, { month: 'short', day: 'numeric', year: 'numeric' })
return new Date(iso).toLocaleDateString(undefined, { month: 'short', day: 'numeric', year: 'numeric' })
}
async function onRun(item: SavedSearch) {
@@ -75,8 +146,65 @@ async function onRun(item: SavedSearch) {
router.push({ path: '/', query })
}
async function onDelete(id: number) {
await store.remove(id)
function onDelete(item: SavedSearch) {
// Soft-delete: show undo toast, commit after 3s.
if (deleteTimer) clearTimeout(deleteTimer)
pendingDelete.value = item
deleteTimer = setTimeout(async () => {
if (pendingDelete.value?.id === item.id) {
await store.remove(item.id)
pendingDelete.value = null
}
}, 3000)
}
function onUndoDelete() {
if (deleteTimer) clearTimeout(deleteTimer)
pendingDelete.value = null
}
async function onToggleMonitor(item: SavedSearch, enabled: boolean) {
await fetch(`${BASE}/api/saved-searches/${item.id}/monitor`, {
method: 'PATCH',
credentials: 'include',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
monitor_enabled: enabled,
poll_interval_min: item.poll_interval_min,
min_trust_score: item.min_trust_score,
}),
})
await store.fetchAll()
}
async function onIntervalChange(item: SavedSearch, minutes: number) {
if (isNaN(minutes) || minutes < 15) return
await fetch(`${BASE}/api/saved-searches/${item.id}/monitor`, {
method: 'PATCH',
credentials: 'include',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
monitor_enabled: item.monitor_enabled,
poll_interval_min: minutes,
min_trust_score: item.min_trust_score,
}),
})
await store.fetchAll()
}
async function onThresholdChange(item: SavedSearch, score: number) {
if (isNaN(score)) return
await fetch(`${BASE}/api/saved-searches/${item.id}/monitor`, {
method: 'PATCH',
credentials: 'include',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
monitor_enabled: item.monitor_enabled,
poll_interval_min: item.poll_interval_min,
min_trust_score: score,
}),
})
await store.fetchAll()
}
</script>
@@ -127,12 +255,12 @@ async function onDelete(id: number) {
display: flex;
flex-direction: column;
gap: var(--space-3);
max-width: 720px;
max-width: 800px;
}
.saved-card {
display: flex;
align-items: center;
align-items: flex-start;
gap: var(--space-4);
padding: var(--space-4) var(--space-5);
background: var(--color-surface-2);
@@ -174,13 +302,131 @@ async function onDelete(id: number) {
margin: 0;
}
.saved-card-checked {
color: var(--app-primary);
}
/* Right column: monitor section + action buttons */
.saved-card-right {
display: flex;
flex-direction: column;
align-items: flex-end;
gap: var(--space-3);
flex-shrink: 0;
}
.saved-card-actions {
display: flex;
align-items: center;
gap: var(--space-2);
}
/* Monitor toggle */
.monitor-section {
display: flex;
flex-direction: column;
align-items: flex-end;
gap: var(--space-2);
}
.monitor-toggle-label {
display: flex;
align-items: center;
gap: var(--space-2);
cursor: pointer;
user-select: none;
}
/* Visually hide the native checkbox but keep it accessible */
.monitor-toggle-input {
position: absolute;
width: 1px;
height: 1px;
opacity: 0;
pointer-events: none;
}
.monitor-toggle-track {
display: inline-block;
width: 32px;
height: 18px;
border-radius: 9px;
background: var(--color-border);
position: relative;
transition: background 150ms ease;
flex-shrink: 0;
}
.monitor-toggle-track::after {
content: '';
position: absolute;
top: 2px;
left: 2px;
width: 14px;
height: 14px;
border-radius: 50%;
background: #fff;
transition: transform 150ms ease;
}
.monitor-toggle-input:checked + .monitor-toggle-track {
background: var(--app-primary);
}
.monitor-toggle-input:checked + .monitor-toggle-track::after {
transform: translateX(14px);
}
/* Focus ring on the label when the hidden checkbox is focused */
.monitor-toggle-label:has(.monitor-toggle-input:focus-visible) .monitor-toggle-track {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.monitor-toggle-text {
font-size: 0.8125rem;
color: var(--color-text-muted);
white-space: nowrap;
}
/* Inline monitor settings */
.monitor-settings {
display: flex;
flex-direction: column;
gap: var(--space-2);
padding: var(--space-3);
background: var(--color-surface);
border: 1px solid var(--color-border-light);
border-radius: var(--radius-md);
font-size: 0.8125rem;
color: var(--color-text-muted);
}
.monitor-setting-label {
display: flex;
align-items: center;
gap: var(--space-2);
flex-wrap: wrap;
}
.monitor-setting-input {
width: 60px;
padding: var(--space-1) var(--space-2);
background: var(--color-surface-2);
border: 1px solid var(--color-border);
border-radius: var(--radius-sm);
color: var(--color-text);
font-family: var(--font-mono);
font-size: 0.8125rem;
text-align: center;
}
.monitor-hint {
font-size: 0.6875rem;
color: var(--color-text-muted);
opacity: 0.75;
}
.saved-run-btn {
padding: var(--space-2) var(--space-4);
background: var(--app-primary);
@@ -206,13 +452,65 @@ async function onDelete(id: number) {
cursor: pointer;
transition: border-color 150ms ease, color 150ms ease;
min-width: 28px;
min-height: 28px;
}
.saved-delete-btn:hover { border-color: var(--color-error); color: var(--color-error); }
/* Undo toast */
.undo-toast {
position: fixed;
bottom: calc(var(--space-6) + env(safe-area-inset-bottom));
left: 50%;
transform: translateX(-50%);
display: flex;
align-items: center;
gap: var(--space-3);
padding: var(--space-3) var(--space-5);
background: var(--color-surface-2);
border: 1px solid var(--color-border);
border-radius: var(--radius-lg);
box-shadow: 0 4px 16px rgba(0,0,0,0.4);
font-size: 0.875rem;
color: var(--color-text);
z-index: 300;
white-space: nowrap;
}
.undo-btn {
padding: var(--space-1) var(--space-3);
background: var(--app-primary);
border: none;
border-radius: var(--radius-sm);
color: var(--color-text-inverse);
font-family: var(--font-body);
font-size: 0.8125rem;
font-weight: 600;
cursor: pointer;
}
/* Transitions */
.slide-enter-active,
.slide-leave-active { transition: opacity 150ms ease, max-height 200ms ease; max-height: 200px; overflow: hidden; }
.slide-enter-from,
.slide-leave-to { opacity: 0; max-height: 0; }
.toast-enter-active,
.toast-leave-active { transition: opacity 200ms ease, transform 200ms ease; }
.toast-enter-from,
.toast-leave-to { opacity: 0; transform: translateX(-50%) translateY(8px); }
@media (prefers-reduced-motion: reduce) {
.slide-enter-active, .slide-leave-active,
.toast-enter-active, .toast-leave-active { transition: none; }
}
@media (max-width: 767px) {
.saved-header { padding: var(--space-4); }
.saved-list { padding: var(--space-4); }
.saved-card { flex-direction: column; align-items: flex-start; gap: var(--space-3); }
.saved-card-right { width: 100%; align-items: flex-start; }
.saved-card-actions { width: 100%; justify-content: flex-end; }
.monitor-section { width: 100%; align-items: flex-start; }
.monitor-settings { width: 100%; }
}
</style>
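The `onIntervalChange` and `onThresholdChange` handlers above duplicate the same PATCH body with one field swapped. A hypothetical consolidation (the helper name and its rules are assumptions, not code from this repo) could build the payload once and let callers override a single field:

```typescript
// Hypothetical helper; mirrors the fields and guards used by
// onIntervalChange/onThresholdChange above. Not part of the repo.
interface MonitorSettings {
  monitor_enabled: boolean
  poll_interval_min: number
  min_trust_score: number
}

// Returns the merged PATCH body, or null when the override fails the
// same validation the component applies (NaN score, sub-15-minute poll).
function buildMonitorPatch(
  current: MonitorSettings,
  overrides: Partial<MonitorSettings>,
): MonitorSettings | null {
  const next = { ...current, ...overrides }
  if (Number.isNaN(next.min_trust_score)) return null
  if (Number.isNaN(next.poll_interval_min) || next.poll_interval_min < 15) return null
  return next
}
```

A single change handler could then call `buildMonitorPatch` and skip the fetch when it returns `null`, keeping both inputs on one code path.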


@@ -2,8 +2,29 @@
<div class="search-view">
<!-- Search bar -->
<header class="search-header">
<div class="platform-tabs" role="tablist" aria-label="Search platform">
<button
v-for="p in PLATFORMS"
:key="p.value"
type="button"
role="tab"
class="platform-tab"
:class="{
'platform-tab--active': filters.platform === p.value,
'platform-tab--soon': !p.available,
}"
:aria-selected="filters.platform === p.value"
:disabled="!p.available"
:title="p.available ? p.label : `${p.label} — coming soon`"
@click="p.available && (filters.platform = p.value)"
>
{{ p.label }}
<span v-if="!p.available" class="platform-tab__soon">soon</span>
</button>
</div>
<form class="search-form" @submit.prevent="onSearch" role="search">
<div class="search-form-row1">
<template v-if="filters.platform === 'ebay' || !filters.platform">
<label for="cat-select" class="sr-only">Category</label>
<select
id="cat-select"
@@ -20,6 +41,7 @@
</option>
</optgroup>
</select>
</template>
<label for="search-input" class="sr-only">Search listings</label>
<input
id="search-input"
@@ -116,6 +138,7 @@
<!-- eBay Search Parameters -->
<!-- These are sent to eBay. Changes require a new search to take effect. -->
<template v-if="filters.platform === 'ebay' || !filters.platform">
<h2 class="filter-section-heading filter-section-heading--search">
eBay Search
</h2>
@@ -216,6 +239,7 @@
<p class="filter-pages-hint">Excludes forwarded to eBay on re-search</p>
</div>
</fieldset>
</template>
<!-- Post-search Filters -->
<!-- Applied locally to current results; no re-search needed. -->
@@ -355,6 +379,9 @@
</div>
</div>
<!-- Loading (scraping in progress, no results yet) -->
<SearchProgress v-else-if="store.loading && !store.results.length" :query="store.query" :platform="filters.platform ?? 'ebay'" />
<!-- No results -->
<div v-else-if="!store.results.length && !store.loading && store.query" class="results-empty">
<p>No listings found for <strong>{{ store.query }}</strong>.</p>
@@ -375,8 +402,13 @@
</span>
</p>
<div class="toolbar-actions">
<!-- Re-search indicator: loading while stale results are still visible -->
<span v-if="store.loading && store.results.length" class="enriching-badge enriching-badge--searching" aria-live="polite" title="Fetching new results…">
<span class="enriching-dot" aria-hidden="true"></span>
Re-searching
</span>
<!-- Live enrichment indicator: visible while SSE stream is open -->
<span v-if="store.enriching" class="enriching-badge" aria-live="polite" title="Scores updating as seller data arrives">
<span v-else-if="store.enriching" class="enriching-badge" aria-live="polite" title="Scores updating as seller data arrives">
<span class="enriching-dot" aria-hidden="true"></span>
Updating scores
</span>
@@ -456,6 +488,7 @@ import { useBlocklistStore } from '../stores/blocklist'
import { useReportedStore } from '../stores/reported'
import ListingCard from '../components/ListingCard.vue'
import LLMQueryPanel from '../components/LLMQueryPanel.vue'
import SearchProgress from '../components/SearchProgress.vue'
const route = useRoute()
const store = useSearchStore()
@@ -627,6 +660,7 @@ const DEFAULT_FILTERS: SearchFilters = {
mustExclude: '',
categoryId: '',
adapter: 'auto' as 'auto' | 'api' | 'scraper',
platform: 'ebay',
}
const filters = reactive<SearchFilters>({ ...DEFAULT_FILTERS })
@@ -662,6 +696,12 @@ const parsedMustIncludeGroups = computed(() =>
.filter(g => g.length > 0)
)
const PLATFORMS: { value: string; label: string; available: boolean }[] = [
{ value: 'ebay', label: 'eBay', available: true },
{ value: 'mercari', label: 'Mercari', available: true },
{ value: 'poshmark', label: 'Poshmark', available: false },
]
const INCLUDE_MODES: { value: MustIncludeMode; label: string }[] = [
{ value: 'all', label: 'All' },
{ value: 'any', label: 'Any' },
@@ -1440,6 +1480,16 @@ async function onSearch() {
white-space: nowrap;
}
.enriching-badge--searching {
background: color-mix(in srgb, var(--color-info) 10%, transparent);
border-color: color-mix(in srgb, var(--color-info) 30%, transparent);
color: var(--color-info);
}
.enriching-badge--searching .enriching-dot {
background: var(--color-info);
}
.enriching-dot {
width: 6px;
height: 6px;
@@ -1776,4 +1826,53 @@ async function onSearch() {
to { opacity: 1; transform: translateY(0); }
}
/* ── Platform tab strip ──────────────────────────────────────────────── */
.platform-tabs {
display: flex;
gap: var(--space-1);
margin-bottom: var(--space-3);
}
.platform-tab {
display: inline-flex;
align-items: center;
gap: var(--space-1);
padding: var(--space-1) var(--space-3);
background: transparent;
border: 1.5px solid var(--color-border);
border-radius: var(--radius-full);
color: var(--color-text-muted);
font-family: var(--font-body);
font-size: 0.8125rem;
font-weight: 500;
cursor: pointer;
transition: border-color 150ms ease, color 150ms ease, background 150ms ease;
white-space: nowrap;
}
.platform-tab:hover:not(:disabled):not(.platform-tab--active) {
border-color: var(--app-primary);
color: var(--app-primary);
}
.platform-tab--active {
background: var(--app-primary);
border-color: var(--app-primary);
color: var(--color-text-inverse);
font-weight: 600;
}
.platform-tab--soon {
opacity: 0.45;
cursor: not-allowed;
}
.platform-tab__soon {
font-size: 0.625rem;
font-weight: 700;
text-transform: uppercase;
letter-spacing: 0.06em;
opacity: 0.8;
}
</style>
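The platform tab strip above blocks clicks on "coming soon" entries, but a saved search or URL param could still carry an unavailable platform value. A small guard could keep the active filter valid; this is a hypothetical sketch, not code from this repo, and the fall-back-to-first-available rule is an assumption:

```typescript
// Hypothetical guard; PLATFORMS mirrors the array defined in the
// component above. The fallback rule is an assumption, not repo behavior.
type Platform = { value: string; label: string; available: boolean }

const PLATFORMS: Platform[] = [
  { value: 'ebay', label: 'eBay', available: true },
  { value: 'mercari', label: 'Mercari', available: true },
  { value: 'poshmark', label: 'Poshmark', available: false },
]

// Resolve a requested platform (e.g. from a URL param or saved search)
// to one the UI can activate, falling back to the first available entry.
function resolvePlatform(requested: string | undefined, platforms: Platform[]): string {
  const hit = platforms.find(p => p.value === requested && p.available)
  const fallback = platforms.find(p => p.available)
  return hit?.value ?? fallback?.value ?? 'ebay'
}
```

For example, a deep link requesting `poshmark` would resolve to `ebay` until that platform ships.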


@@ -93,6 +93,74 @@
</div>
</section>
<!-- eBay Account Connection (paid+ only) -->
<section v-if="ebay.oauth_available && session.isLoggedIn" class="settings-section">
<h2 class="settings-section-title">eBay Account</h2>
<!-- Connected state -->
<div v-if="ebay.connected" class="ebay-connected">
<div class="ebay-status-row">
<span class="ebay-status-dot ebay-status-dot--on" aria-hidden="true" />
<span class="settings-toggle-label">Connected</span>
</div>
<p class="settings-toggle-desc">
Snipe uses your eBay account to fetch seller registration dates instantly
via the Trading API, without Playwright scraping. This means faster, more
accurate trust scores on every search.
<span v-if="ebay.access_token_expired" class="ebay-warn">
Your access token has expired; reconnect to restore instant enrichment.
</span>
</p>
<div class="ebay-action-row">
<button
v-if="ebay.access_token_expired"
class="ebay-btn ebay-btn--primary"
:disabled="ebay.connecting"
@click="startConnect"
>
Reconnect eBay account
</button>
<button
class="ebay-btn ebay-btn--danger"
:disabled="ebay.disconnecting"
@click="disconnect"
>
{{ ebay.disconnecting ? 'Disconnecting…' : 'Disconnect' }}
</button>
</div>
</div>
<!-- Not connected (paid tier) -->
<div v-else-if="session.isPaid || session.isPremium" class="ebay-disconnected">
<p class="settings-toggle-desc">
Connect your eBay account to enable instant seller registration date lookup
via the Trading API. Without it, Snipe falls back to slower Playwright
scraping (or Shopping API rate-limited calls) to determine account age.
</p>
<button
class="ebay-btn ebay-btn--primary"
:disabled="ebay.connecting"
@click="startConnect"
>
{{ ebay.connecting ? 'Redirecting to eBay…' : 'Connect eBay account' }}
</button>
</div>
<!-- Not connected (free tier upsell) -->
<div v-else class="ebay-disconnected">
<p class="settings-toggle-desc">
Connect your eBay account for instant seller trust scoring without scraping.
Available on Paid tier and above.
</p>
<a class="ebay-btn ebay-btn--upsell" href="/pricing" rel="noopener">
Upgrade to Paid
</a>
</div>
<p v-if="ebay.error" class="settings-error" role="alert">{{ ebay.error }}</p>
<p v-if="ebay.success" class="settings-success" role="status">{{ ebay.success }}</p>
</section>
<!-- Affiliate Links (only shown to signed-in cloud users) -->
<section v-if="session.isLoggedIn" class="settings-section">
<h2 class="settings-section-title">Affiliate Links</h2>
@@ -174,13 +242,16 @@
</template>
<script setup lang="ts">
import { ref, computed, watch } from 'vue'
import { ref, computed, watch, reactive, onMounted } from 'vue'
import { useRoute, useRouter } from 'vue-router'
import { useTrustSignalPref } from '../composables/useTrustSignalPref'
import { useTheme } from '../composables/useTheme'
import { useSessionStore } from '../stores/session'
import { usePreferencesStore } from '../stores/preferences'
import { useLLMQueryBuilder } from '../composables/useLLMQueryBuilder'
const route = useRoute()
const router = useRouter()
const { enabled: trustSignalEnabled, setEnabled } = useTrustSignalPref()
const theme = useTheme()
const themeOptions: { value: 'system' | 'dark' | 'light'; label: string }[] = [
@@ -212,6 +283,90 @@ watch(() => prefs.affiliateByokId, (val) => { byokInput.value = val })
function saveByokId() {
prefs.setAffiliateByokId(byokInput.value)
}
// eBay Account Connection
const ebay = reactive({
oauth_available: false,
connected: false,
access_token_expired: false,
scopes: [] as string[],
connecting: false,
disconnecting: false,
error: '',
success: '',
})
async function fetchEbayStatus() {
try {
const res = await fetch('/api/ebay/status')
if (!res.ok) return
const data = await res.json()
ebay.oauth_available = data.oauth_available ?? false
ebay.connected = data.connected ?? false
ebay.access_token_expired = data.access_token_expired ?? false
ebay.scopes = data.scopes ?? []
} catch {
// silently ignore; section stays hidden if fetch fails
}
}
async function startConnect() {
ebay.connecting = true
ebay.error = ''
try {
const res = await fetch('/api/ebay/connect')
if (!res.ok) {
const body = await res.json().catch(() => ({}))
ebay.error = body.detail ?? 'eBay connection unavailable.'
ebay.connecting = false // re-enable the button; the redirect never happens on this path
return
}
const { auth_url } = await res.json()
window.location.href = auth_url
} catch {
ebay.error = 'Could not reach the server. Try again.'
ebay.connecting = false
}
}
async function disconnect() {
ebay.disconnecting = true
ebay.error = ''
ebay.success = ''
try {
const res = await fetch('/api/ebay/disconnect', { method: 'DELETE' })
if (res.ok || res.status === 204) {
ebay.connected = false
ebay.access_token_expired = false
ebay.scopes = []
ebay.success = 'eBay account disconnected.'
} else {
ebay.error = 'Disconnect failed. Try again.'
}
} catch {
ebay.error = 'Could not reach the server. Try again.'
} finally {
ebay.disconnecting = false
}
}
onMounted(async () => {
await fetchEbayStatus()
// Handle OAuth callback redirect params: ?ebay_connected=1 or ?ebay_error=access_denied
const connected = route.query.ebay_connected
const oauthError = route.query.ebay_error
if (connected) {
ebay.success = 'eBay account connected! Trust scores will now use the Trading API.'
await fetchEbayStatus()
router.replace({ query: { ...route.query, ebay_connected: undefined } })
} else if (oauthError) {
ebay.error = oauthError === 'access_denied'
? 'eBay authorization was cancelled.'
: `eBay OAuth error: ${oauthError}`
router.replace({ query: { ...route.query, ebay_error: undefined } })
}
})
</script>
<style scoped>
@@ -373,7 +528,7 @@ function saveByokId() {
outline-offset: 2px;
}
/* ---- Error feedback ---- */
/* ---- Error / success feedback ---- */
.settings-error {
font-size: 0.8125rem;
color: var(--color-danger, #f85149);
@@ -398,6 +553,100 @@ function saveByokId() {
border-color: var(--app-primary);
}
.settings-success {
font-size: 0.8125rem;
color: var(--color-success, #3fb950);
margin: 0;
}
/* ---- eBay Account section ---- */
.ebay-status-row {
display: flex;
align-items: center;
gap: var(--space-2);
}
.ebay-status-dot {
width: 10px;
height: 10px;
border-radius: 50%;
flex-shrink: 0;
background: var(--color-border);
}
.ebay-status-dot--on {
background: var(--color-success, #3fb950);
}
.ebay-connected,
.ebay-disconnected {
display: flex;
flex-direction: column;
gap: var(--space-3);
}
.ebay-warn {
display: block;
margin-top: var(--space-1);
color: var(--color-warning, #d29922);
}
.ebay-action-row {
display: flex;
gap: var(--space-2);
flex-wrap: wrap;
}
.ebay-btn {
padding: var(--space-2) var(--space-4);
border: none;
border-radius: var(--radius-md);
font-size: 0.875rem;
font-weight: 600;
cursor: pointer;
font-family: inherit;
white-space: nowrap;
transition: opacity 0.15s ease;
text-decoration: none;
display: inline-flex;
align-items: center;
}
.ebay-btn:disabled {
opacity: 0.55;
cursor: not-allowed;
}
.ebay-btn--primary {
background: var(--app-primary);
color: var(--color-text-inverse, #fff);
}
.ebay-btn--primary:hover:not(:disabled) { opacity: 0.85; }
.ebay-btn--danger {
background: transparent;
color: var(--color-danger, #f85149);
border: 1px solid var(--color-danger, #f85149);
}
.ebay-btn--danger:hover:not(:disabled) {
background: color-mix(in srgb, var(--color-danger, #f85149) 12%, transparent);
}
.ebay-btn--upsell {
background: var(--color-surface-raised);
color: var(--color-text);
border: 1px solid var(--color-border);
}
.ebay-btn--upsell:hover { opacity: 0.85; }
.ebay-btn:focus-visible {
outline: 2px solid var(--app-primary);
outline-offset: 2px;
}
.theme-btn-group {
display: flex;
gap: 0;
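The `onMounted` hook in the settings view above branches on the `?ebay_connected` / `?ebay_error` callback params inline. As a hypothetical refactor (not code from the repo; the message strings are copied from the component, the result shape is an assumption), that branching can be extracted into a pure function that is easy to unit-test:

```typescript
// Hypothetical extraction of the query-param branching in onMounted above.
// Message strings are copied from the component; the shape is an assumption.
type CallbackResult =
  | { kind: 'none' }
  | { kind: 'success'; message: string }
  | { kind: 'error'; message: string }

function parseEbayCallback(query: Record<string, string | undefined>): CallbackResult {
  if (query.ebay_connected) {
    return { kind: 'success', message: 'eBay account connected! Trust scores will now use the Trading API.' }
  }
  if (query.ebay_error) {
    return {
      kind: 'error',
      message: query.ebay_error === 'access_denied'
        ? 'eBay authorization was cancelled.'
        : `eBay OAuth error: ${query.ebay_error}`,
    }
  }
  return { kind: 'none' }
}
```

`onMounted` would then only map the result onto `ebay.success` / `ebay.error` and strip the handled params with `router.replace`.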