# cf-voice

CircuitForge voice annotation pipeline. Produces `VoiceFrame` objects from a live audio stream — tone label, confidence, speaker identity, and shift magnitude.

**Status:** v0.1.x stub — mock mode only. Real classifiers (YAMNet, wav2vec2, pyannote.audio) land incrementally.

## Install

```bash
pip install -e ../cf-voice  # editable install alongside sibling repos
```

## Quick start

```python
from cf_voice.context import ContextClassifier

classifier = ContextClassifier.mock()  # or from_env() with CF_VOICE_MOCK=1

async for frame in classifier.stream():
    print(frame.label, frame.confidence)
```

Or run the demo CLI:

```bash
CF_VOICE_MOCK=1 cf-voice-demo
```

## VoiceFrame

```python
@dataclass
class VoiceFrame:
    label: str              # e.g. "Warmly impatient"
    confidence: float       # 0.0–1.0
    speaker_id: str         # ephemeral local label, e.g. "speaker_a"
    shift_magnitude: float  # delta from previous frame, 0.0–1.0
    timestamp: float        # session-relative seconds
```

## Mock mode

Set `CF_VOICE_MOCK=1` or pass `mock=True` to `make_io()`. No GPU or microphone required. Useful for CI and frontend development.

## Module structure

| Module | License | Purpose |
|--------|---------|---------|
| `cf_voice.models` | MIT | `VoiceFrame` dataclass |
| `cf_voice.io` | MIT | Audio capture, mock generator |
| `cf_voice.context` | BSL 1.1* | Tone classification, diarization |

*BSL applies when real inference models are integrated. The current stub is MIT.

## Consumed by

- `Circuit-Forge/linnet` — real-time tone annotation widget
- `Circuit-Forge/osprey` — telephony bridge voice context
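Since the real classifiers are still stubs, mock mode can be thought of as a simple frame generator. Here is a minimal, self-contained sketch of that idea: the dataclass fields mirror `VoiceFrame` from this README, but the generator name `mock_frames`, the placeholder label set, and the 500 ms frame interval are illustrative assumptions, not the package's actual internals.

```python
# Illustrative sketch of mock-mode frame synthesis (NOT cf_voice internals).
# VoiceFrame matches the dataclass documented above; mock_frames, LABELS,
# and the frame interval are hypothetical.
import random
from dataclasses import dataclass


@dataclass
class VoiceFrame:
    label: str              # e.g. "Warmly impatient"
    confidence: float       # 0.0–1.0
    speaker_id: str         # ephemeral local label, e.g. "speaker_a"
    shift_magnitude: float  # delta from previous frame, 0.0–1.0
    timestamp: float        # session-relative seconds


# Placeholder tone labels for the mock; real labels come from the classifiers.
LABELS = ["Warmly impatient", "Neutral", "Animated"]


def mock_frames(n: int, seed: int = 0):
    """Yield n synthetic frames; shift_magnitude is the confidence delta."""
    rng = random.Random(seed)  # seeded so CI runs are reproducible
    prev_conf = 0.0
    for i in range(n):
        conf = rng.random()
        yield VoiceFrame(
            label=rng.choice(LABELS),
            confidence=conf,
            speaker_id="speaker_a",
            shift_magnitude=abs(conf - prev_conf),
            timestamp=i * 0.5,  # assume one frame every 500 ms
        )
        prev_conf = conf
```

A synchronous generator is used here for brevity; the real API streams frames asynchronously, as shown in the quick start.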