# cf-voice
CircuitForge voice annotation pipeline. Produces `VoiceFrame` objects from a live audio stream: tone label, confidence, speaker identity, and shift magnitude.
**Status:** v0.1.x stub, mock mode only. Real classifiers (YAMNet, wav2vec2, pyannote.audio) will land incrementally.
## Install

```bash
pip install -e ../cf-voice  # editable install alongside sibling repos
```
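The `../cf-voice` path assumes a sibling-checkout layout like the one below; directory names other than `cf-voice` itself match the consumers listed at the end of this README, but your workspace root can be anything:

```text
workspace/
├── cf-voice/   # this repo
├── linnet/     # real-time tone annotation widget
└── osprey/     # telephony bridge
```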
## Quick start

```python
import asyncio

from cf_voice.context import ContextClassifier

async def main():
    classifier = ContextClassifier.mock()  # or from_env() with CF_VOICE_MOCK=1
    async for frame in classifier.stream():
        print(frame.label, frame.confidence)

asyncio.run(main())
```
Or run the demo CLI:
```bash
CF_VOICE_MOCK=1 cf-voice-demo
```
## VoiceFrame

```python
from dataclasses import dataclass

@dataclass
class VoiceFrame:
    label: str              # e.g. "Warmly impatient"
    confidence: float       # 0.0–1.0
    speaker_id: str         # ephemeral local label, e.g. "speaker_a"
    shift_magnitude: float  # delta from previous frame, 0.0–1.0
    timestamp: float        # session-relative seconds
```
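Because `VoiceFrame` is a plain dataclass, frames serialize cleanly for downstream consumers via the standard library; a quick example (field values here are illustrative):

```python
import dataclasses
import json

frame = VoiceFrame(
    label="Warmly impatient",
    confidence=0.82,
    speaker_id="speaker_a",
    shift_magnitude=0.12,
    timestamp=3.5,
)
print(json.dumps(dataclasses.asdict(frame)))
# {"label": "Warmly impatient", "confidence": 0.82, ...}
```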
## Mock mode

Set `CF_VOICE_MOCK=1` or pass `mock=True` to `make_io()`. No GPU or microphone required; useful for CI and frontend development.
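A minimal sketch of a timer-driven mock source built on the pieces above; the `mock_frames` helper, its label pool, and its timing are illustrative, not the shipped implementation:

```python
import asyncio
import random
import time

from cf_voice.models import VoiceFrame

_LABELS = ["Warmly impatient", "Neutral", "Quietly amused"]  # illustrative pool

async def mock_frames(interval: float = 0.5):
    """Yield synthetic VoiceFrame objects on a fixed timer."""
    start = time.monotonic()
    prev_confidence = 0.5
    while True:
        confidence = random.random()
        yield VoiceFrame(
            label=random.choice(_LABELS),
            confidence=confidence,
            speaker_id="speaker_a",
            shift_magnitude=abs(confidence - prev_confidence),
            timestamp=time.monotonic() - start,
        )
        prev_confidence = confidence
        await asyncio.sleep(interval)
```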
## Module structure

| Module | License | Purpose |
|---|---|---|
| `cf_voice.models` | MIT | `VoiceFrame` dataclass |
| `cf_voice.io` | MIT | Audio capture, mock generator |
| `cf_voice.context` | BSL 1.1* | Tone classification, diarization |
*BSL 1.1 applies once real inference models are integrated; the current stub is MIT-licensed.
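For orientation, a typical set of imports across the three modules (assuming `make_io` lives in `cf_voice.io`, per the mock-mode section above):

```python
from cf_voice.models import VoiceFrame          # MIT
from cf_voice.io import make_io                 # MIT; assumed location of the factory
from cf_voice.context import ContextClassifier  # BSL 1.1 once real models land
```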
## Consumed by

- `Circuit-Forge/linnet`: real-time tone annotation widget
- `Circuit-Forge/osprey`: telephony bridge voice context