research: offline translation model evaluation (VRAM vs accuracy) #18

Open
opened 2026-04-06 13:11:43 -07:00 by pyr0ball · 0 comments
Owner

Before committing to offline translation as a Free-tier feature, evaluate whether DeepL's offline model (or an OSS alternative like NLLB-200 or Opus-MT) can run alongside cf_voice classifiers on Heimdall's VRAM.

Research questions:

  1. What is DeepL's offline model VRAM requirement?
  2. Can it coexist with YAMNet + wav2vec2 + whisper.cpp on Heimdall's current VRAM headroom?
  3. If not, what is the smallest OSS translation model that covers the top 10 languages at acceptable quality for 5–15-word tone labels?

Output: a recommendation (model name, VRAM requirement, quality notes) that unblocks the offline translation implementation issue.
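As a starting point for questions 2 and 3, a back-of-the-envelope sketch can rank the OSS candidates by estimated VRAM footprint before any of them are actually loaded. The parameter counts below are the published approximate sizes of the NLLB-200 and Opus-MT checkpoints; the headroom figure is a placeholder assumption to be replaced with a real measurement on Heimdall (e.g. `nvidia-smi --query-gpu=memory.free --format=csv` with the cf_voice classifiers resident).

```python
# Rough weights-only VRAM estimate for candidate translation models.
# Assumptions (not measured): overhead factor for activations/buffers,
# and the HEADROOM_MB placeholder for Heimdall's free VRAM.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

# Published approximate parameter counts for the OSS candidates.
CANDIDATES = {
    "nllb-200-distilled-600M": 600e6,
    "nllb-200-distilled-1.3B": 1.3e9,
    "opus-mt (single pair)": 74e6,  # varies per language pair
}

def est_vram_mb(params: float, dtype: str, overhead: float = 1.2) -> float:
    """Weights-only estimate in MB; `overhead` pads for runtime buffers."""
    return params * BYTES_PER_PARAM[dtype] * overhead / 1e6

HEADROOM_MB = 1500  # placeholder: replace with measured free VRAM on Heimdall

for name, params in CANDIDATES.items():
    for dtype in ("fp16", "int8"):
        mb = est_vram_mb(params, dtype)
        verdict = "fits" if mb <= HEADROOM_MB else "does NOT fit"
        print(f"{name:26s} {dtype}: ~{mb:6.0f} MB -> {verdict}")
```

This is only a screening heuristic; the actual recommendation should come from loading the short-listed model alongside YAMNet + wav2vec2 + whisper.cpp and measuring peak VRAM.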

This is the open question from design doc §Open Questions #1.

pyr0ball added this to the Translation — v1.1 milestone 2026-04-06 13:11:43 -07:00
pyr0ball added the design, backlog labels 2026-04-06 13:11:43 -07:00
Reference: Circuit-Forge/linnet#18