Phase 2 (snipe#4): after bulk-reporting sellers to eBay T&S, Snipe now
persists which sellers were reported so cards show a muted "Reported to
eBay" badge and users aren't prompted to re-report the same seller.
- migration 012 adds reported_sellers table (user DB, UNIQUE on seller)
- Store.mark_reported / list_reported methods
- POST /api/reported + GET /api/reported endpoints
- reported store (frontend) with optimistic update + server persistence
- reportSelected wires into store after opening eBay tabs
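A minimal sketch of the persistence layer, assuming plausible column names beyond the UNIQUE(seller) constraint noted above; the real migration 012 and Store methods may differ in detail.

```python
import sqlite3

# Assumed shape of migration 012: only the UNIQUE(seller) constraint comes
# from the notes above, the other column is illustrative.
MIGRATION_012 = """
CREATE TABLE IF NOT EXISTS reported_sellers (
    seller      TEXT NOT NULL UNIQUE,
    reported_at TEXT NOT NULL DEFAULT (datetime('now'))
);
"""

class Store:
    def __init__(self, path: str) -> None:
        self.conn = sqlite3.connect(path)
        self.conn.executescript(MIGRATION_012)

    def mark_reported(self, seller: str) -> None:
        # INSERT OR IGNORE makes repeat reports a no-op thanks to UNIQUE(seller).
        self.conn.execute(
            "INSERT OR IGNORE INTO reported_sellers (seller) VALUES (?)",
            (seller,),
        )
        self.conn.commit()

    def list_reported(self) -> list[str]:
        rows = self.conn.execute("SELECT seller FROM reported_sellers").fetchall()
        return [seller for (seller,) in rows]
```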
Phase 3 prep (snipe#4): community blocklist share toggle
- Settings > Community section: "Share blocklist with community" toggle
(visible only to signed-in cloud users, default OFF)
- Persisted as community.blocklist_share user preference
- Backend community signal publish now gated on opt-in preference;
privacy-by-architecture: sharing is explicit, never implicit
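A sketch of the opt-in gate, assuming a flat preference lookup; the helper and publish method names are placeholders, only the community.blocklist_share key and its OFF default come from the notes above.

```python
def maybe_publish_blocklist_signal(prefs: dict, community_store, signal: dict) -> bool:
    """Publish a seller-trust signal only when the user has explicitly opted in."""
    if not prefs.get("community.blocklist_share", False):   # default OFF
        return False        # sharing is explicit, never implicit
    if community_store is None:
        return False        # local mode: community disabled
    community_store.publish_seller_trust(signal)             # assumed method name
    return True
```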
Add 10 new tests covering the previously untested flag paths:
- zero_feedback: flag fires + composite capped at 35 even with all-20 signals
- long_on_market: fires at >=5 sightings + >=14 days; NOT at <5 sightings or <14 days
- significant_price_drop: fires at >=20% below first-seen; NOT at <20% or no prior price
- established_retailer: duplicate_photo suppressed at feedback>=1000; fires below threshold
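The thresholds those tests pin down, restated as small self-contained helpers for illustration; the real aggregator code is shaped differently.

```python
from typing import Optional

def long_on_market(times_seen: int, days_since_first_seen: float) -> bool:
    # Fires only when BOTH thresholds are met.
    return times_seen >= 5 and days_since_first_seen >= 14

def significant_price_drop(current: float, first_seen_price: Optional[float]) -> bool:
    # No prior price -> never fires; otherwise >= 20% below the first-seen price.
    if first_seen_price is None or first_seen_price <= 0:
        return False
    return (first_seen_price - current) / first_seen_price >= 0.20

assert long_on_market(5, 14) and not long_on_market(4, 30)
assert significant_price_drop(79.0, 100.0) and not significant_price_drop(85.0, 100.0)
```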
Also fix datetime.utcnow() deprecation in aggregator._days_since() and test helper
— replaced with timezone-aware datetime.now(timezone.utc) throughout.
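The replacement pattern, shown on a simplified stand-in for _days_since():

```python
from datetime import datetime, timezone

def days_since(ts: datetime) -> float:
    # datetime.utcnow() is deprecated (Python 3.12) and returns a naive value;
    # datetime.now(timezone.utc) is the timezone-aware equivalent.
    if ts.tzinfo is None:                        # tolerate legacy naive timestamps
        ts = ts.replace(tzinfo=timezone.utc)
    return (datetime.now(timezone.utc) - ts).total_seconds() / 86400
```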
Adds optional community_store param to refresh(). Credentialed instances
publish leaf categories to the shared community PostgreSQL after a
successful Taxonomy API fetch. Credentialless instances pull from community
(requires >= 10 rows) before falling back to the hardcoded bootstrap.
Adds 3 new tests (14 total, all passing).
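A sketch of the decision flow; the store method names and the fetch_from_api/bootstrap parameters stand in for the real credential plumbing, only the >= 10 row threshold and the ordering come from the notes above.

```python
from typing import Callable, Optional, Sequence

MIN_COMMUNITY_ROWS = 10   # don't trust the community table until it has data

def refresh(
    fetch_from_api: Optional[Callable[[], list[dict]]],
    community_store=None,
    bootstrap: Sequence[dict] = (),
):
    """Credentialed path fetches + publishes; credentialless path pulls, then falls back."""
    if fetch_from_api is not None:
        categories = fetch_from_api()                          # eBay Taxonomy API fetch
        if community_store is not None and categories:
            community_store.publish_categories(categories)     # share after a successful fetch
        return categories

    if community_store is not None:
        rows = community_store.fetch_categories()
        if len(rows) >= MIN_COMMUNITY_ROWS:                    # requires >= 10 rows
            return rows

    return list(bootstrap)                                     # hardcoded bootstrap fallback
```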
Corrections (#31):
- Add 010_corrections.sql migration (from cf-core CORRECTIONS_MIGRATION_SQL)
- Wire make_corrections_router() at /api/corrections (shared_db, product='snipe')
- get_shared_db() dependency aggregates corrections across all cloud users
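Wiring sketch only: the cf-core import path is an assumption and get_shared_db() is reduced to a stub; the factory arguments mirror the note above.

```python
from fastapi import FastAPI
from cf_core.corrections import make_corrections_router   # import path assumed

app = FastAPI()

def get_shared_db():
    """Dependency yielding the shared DB that aggregates corrections across all cloud users."""
    raise NotImplementedError   # stub for the sketch

app.include_router(
    make_corrections_router(get_shared_db, product="snipe"),
    prefix="/api/corrections",
)
```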
Community module (#32, #33):
- Init SnipeCommunityStore at startup when COMMUNITY_DB_URL is set
- Graceful skip if COMMUNITY_DB_URL is unset (local mode, community disabled)
- add_to_blocklist() publishes confirmed_scam=True seller_trust signal to
community postgres on every manual blocklist addition (fire-and-forget)
- BlocklistAdd gains flags[] field so active red-flag keys travel with signal
cf-orch community postgres (cf-orch#36) + cf-core module (cf-core#47) both merged.
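A sketch of the fire-and-forget publish on manual blocklist additions; the SnipeCommunityStore method name and the local store call are assumptions, the BlocklistAdd flags[] field mirrors the note above.

```python
import logging
from dataclasses import dataclass, field

log = logging.getLogger(__name__)

@dataclass
class BlocklistAdd:
    seller: str
    reason: str = ""
    flags: list[str] = field(default_factory=list)   # active red-flag keys travel with the signal

def add_to_blocklist(entry: BlocklistAdd, store, community_store=None) -> None:
    store.add_blocked_seller(entry.seller, entry.reason)
    if community_store is None:
        return   # local mode: community disabled, nothing to publish
    try:
        community_store.publish_seller_trust(
            seller=entry.seller, confirmed_scam=True, flags=entry.flags
        )
    except Exception:
        # Fire-and-forget: a community outage must never break the local add.
        log.warning("community publish failed", exc_info=True)
```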
SQLite's executescript() auto-commits each DDL statement, so a partial
migration failure leaves columns in the DB without marking the migration
done. On the next startup the runner retries and hits duplicate column errors.
Use ADD COLUMN IF NOT EXISTS (SQLite 3.35+, shipped in Python 3.11+)
so migrations 004 and 005 are safe to re-run in any partial state.
001_init.sql already defines first_seen_at in the CREATE TABLE statement.
On fresh installs, migration 004 failed with 'duplicate column name: first_seen_at'.
Remove the redundant ALTER TABLE; last_seen_at/times_seen/price_at_first_seen
are still added by 004 as before.
- aggregator: also check listing.condition against damage keywords so listings
with eBay condition "for parts or not working" flag scratch_dent_mentioned
even when the title looks clean
- aggregator: add "parts/repair" (slash) + "parts or not working" to keyword set
- trust/__init__.py: pass listing.condition into aggregate()
- 3 new regression tests (synthetic fixtures, 17 total passing)
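A self-contained restatement of the check; only the two added keywords, the condition handling, and the flag name come from the notes above, the rest of the keyword set is illustrative.

```python
from typing import Optional

DAMAGE_KEYWORDS = (
    "parts or not working",
    "parts/repair",
    "scratch", "dent", "cracked",   # illustrative title keywords, not the full set
)

def scratch_dent_mentioned(title: str, condition: Optional[str]) -> bool:
    # Check the eBay condition string as well as the title, so a clean-looking
    # title with condition "For parts or not working" still fires the flag.
    haystacks = [title.lower()]
    if condition:
        haystacks.append(condition.lower())
    return any(kw in text for text in haystacks for kw in DAMAGE_KEYWORDS)
```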
- SearchView: extract DEFAULT_FILTERS const + resetFilters(); add "Clear filters"
button that shows only when activeFilterCount > 0 with count badge
- .env.example: document LLM inference env vars (ANTHROPIC/OPENAI/OLLAMA/CF_ORCH_URL)
and cf-core wiring notes; closes #17
- Rename 002_background_tasks.sql → 007_background_tasks.sql to avoid
collision with existing 002_add_listing_format.sql migration
- Add CREATE UNIQUE INDEX on trust_scores(listing_id) in same migration
so save_trust_scores() can use ON CONFLICT upsert semantics
- Add Store.save_trust_scores() — upserts scores keyed by listing_id;
preserves photo_analysis_json so runner writes are never clobbered
(upsert sketched after this list)
- runner.py: replace raw sqlite3.connect() with get_connection() throughout
(timeout=30 + WAL mode); fix connection leak in insert_task via try/finally
- _run_trust_photo_analysis: read 'user_db' from params to write results to
the correct per-user DB in cloud mode (was silently writing to wrong DB)
- main.py lifespan: use _shared_db_path() in cloud mode so background_tasks
queue lives in shared DB, not _LOCAL_SNIPE_DB
- Add _enqueue_vision_tasks() and call it after score_batch() — this is the
missing enqueue call site; gated by features.photo_analysis (Paid tier)
- Test fixture: add missing 'stage' column to background_tasks schema
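The upsert that the new UNIQUE index enables, as a sketch; column names other than listing_id and photo_analysis_json are assumptions.

```python
import sqlite3

UPSERT_TRUST_SCORE = """
INSERT INTO trust_scores (listing_id, composite, signals_json, photo_analysis_json)
VALUES (:listing_id, :composite, :signals_json, :photo_analysis_json)
ON CONFLICT(listing_id) DO UPDATE SET
    composite    = excluded.composite,
    signals_json = excluded.signals_json,
    -- keep whatever the background runner already wrote; never clobber it
    photo_analysis_json = COALESCE(excluded.photo_analysis_json,
                                   trust_scores.photo_analysis_json)
"""

def save_trust_scores(conn: sqlite3.Connection, scores: list[dict]) -> None:
    # Relies on the new UNIQUE index on trust_scores(listing_id).
    conn.executemany(UPSERT_TRUST_SCORE, scores)
    conn.commit()
```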
BTF enrichment (_fetch_item_html) was constructing invalid URLs like
https://www.ebay.com/itm/v1|123456789|0 when listings came from the API
adapter (Browse API itemId format). Extract the numeric segment from
compound IDs before appending to EBAY_ITEM_URL — scraper IDs are already
plain numeric so the split is a no-op for that adapter.
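A sketch of the ID normalization; EBAY_ITEM_URL stands in for the real constant.

```python
EBAY_ITEM_URL = "https://www.ebay.com/itm/"   # stand-in for the real constant

def item_url(item_id: str) -> str:
    # Browse API itemIds look like "v1|123456789|0"; scraper IDs are already
    # plain numeric, so splitting on "|" is a no-op for them.
    parts = item_id.split("|")
    numeric = parts[1] if len(parts) >= 2 else parts[0]
    return EBAY_ITEM_URL + numeric

assert item_url("v1|123456789|0") == "https://www.ebay.com/itm/123456789"
assert item_url("123456789") == "https://www.ebay.com/itm/123456789"
```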
Large retailers like Newegg legitimately reuse manufacturer stock photos
across listings. Duplicate photo hash is not a scam signal for sellers
with 1000+ feedback — suppress the red flag for them.
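The suppression rule as a small self-contained helper; only the 1000-feedback threshold and the flag name come from the notes, the real aggregator applies it inside its own flag pipeline.

```python
ESTABLISHED_RETAILER_FEEDBACK = 1000

def effective_flags(flags: set[str], seller_feedback: int) -> set[str]:
    # Stock-photo reuse is normal for big retailers, so duplicate_photo is
    # not treated as a scam signal once feedback reaches the threshold.
    if seller_feedback >= ESTABLISHED_RETAILER_FEEDBACK:
        return flags - {"duplicate_photo"}
    return flags
```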
- Two sidebar fields: 'Must include' and 'Must exclude' (comma-separated)
- Must-exclude terms forwarded to eBay _nkw as -term prefixes (native eBay
support) so exclusions reduce the eBay result set at the source — improves
market comp quality as a side effect
- Must-include applied client-side only (substring, case-insensitive)
- Both applied client-side via passesFilter() for instant response without
re-fetching (cache-friendly)
- Exclude input has subtle red border tint (color-mix) to signal intent
- Hint text: 're-search to apply to eBay' reminds user negatives need a
new search to take effect at the eBay level
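A sketch of the exclusion forwarding only (must-include stays client-side); the URL builder and helper names are illustrative, not the adapter's real code.

```python
from urllib.parse import urlencode

def build_nkw(query: str, must_exclude: list[str]) -> str:
    # eBay's _nkw field natively understands "-term" exclusions, so negatives
    # shrink the result set at the source (better market comps as a side effect).
    terms = [query] + [f"-{t.strip()}" for t in must_exclude if t.strip()]
    return " ".join(terms)

def search_url(query: str, must_exclude: list[str]) -> str:
    return "https://www.ebay.com/sch/i.html?" + urlencode(
        {"_nkw": build_nkw(query, must_exclude)}
    )

assert build_nkw("thinkpad x1", ["broken", "cracked"]) == "thinkpad x1 -broken -cracked"
```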
- Parallel execution: search() and get_completed_sales() now run
  concurrently via ThreadPoolExecutor (sketched after this list) — each gets
  its own Store/SQLite connection for thread safety. First cold search time ~halved.
- Pagination: SearchFilters.pages (default 1) controls how many eBay
result pages are fetched. Both search and sold-comps support up to 3
parallel Playwright sessions per call (capped to avoid Xvfb overload).
UI: segmented 1/2/3/5 pages selector in filter sidebar with cost hint.
- True median: get_completed_sales() now averages the two middle values
for even-length price lists instead of always taking the lower bound.
- Fix suspicious_price false positive: aggregator now checks
signal_scores.get("price_vs_market") == 0 (pre-None-substitution)
so listings without market data are never flagged as suspicious.
- Fix title pollution: scraper strips eBay's hidden screen-reader span
("Opens in a new window or tab") from listing titles via regex.
Lazy-imports playwright/playwright_stealth inside _get() so pure
parsing functions are importable without the full browser stack.
- Tests: 48 pass on host (scraper tests now runnable without Docker),
new regression guards for all three bug fixes.
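A sketch of the concurrent fetch and the true-median change; the callables are passed in as stand-ins since the real adapter and Store constructors aren't shown here.

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import median

def parallel_search(query, filters, make_store, search, get_completed_sales):
    # Each worker gets its own Store (and therefore its own SQLite connection),
    # so the two fetches can run concurrently without sharing a connection.
    with ThreadPoolExecutor(max_workers=2) as pool:
        live = pool.submit(search, make_store(), query, filters)
        sold = pool.submit(get_completed_sales, make_store(), query)
        return live.result(), sold.result()

def median_sold_price(prices: list[float]) -> float:
    # statistics.median averages the two middle values for even-length input
    # instead of taking the lower bound.
    return median(prices)

assert median_sold_price([10.0, 20.0, 30.0, 40.0]) == 25.0
```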
Scraper can't fetch seller profile age without following each listing's
seller link. Using 0 as sentinel caused every scraped seller to trigger
new_account and account_under_30_days red flags erroneously.
- Seller.account_age_days: int → Optional[int] (None = not yet fetched)
- Migration 003: recreate sellers table without NOT NULL constraint
- MetadataScorer: return None for unknown age → score_is_partial=True
- Aggregator: gate age flags on is not None
- Scraper: account_age_days=None instead of 0
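A minimal sketch of the gating; only account_under_30_days (whose threshold is implied by its name) is shown, and the new_account threshold is omitted rather than guessed.

```python
from typing import Optional

def age_flags(account_age_days: Optional[int]) -> set[str]:
    # None means "not yet fetched" (the scraper can't see seller age without
    # following the seller link) and must never be treated as a new account.
    if account_age_days is None:
        return set()                       # partial score, no age flags
    flags: set[str] = set()
    if account_age_days < 30:
        flags.add("account_under_30_days")
    return flags

assert age_flags(None) == set()
assert age_flags(10) == {"account_under_30_days"}
```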