feat: add whisper.cpp ROCm backend support for AMD GPU acceleration
- transcription.py: new _transcribe_remote_whispercpp() using /inference endpoint
- transcription.py: backend param routes to openai or whispercpp remote path
- config.py: whisper.backend default 'openai', alt 'whispercpp'
- pipeline.py: passes backend from config to transcribe_file
- settings: backend dropdown (OpenAI-compat / whisper.cpp)
- SETUP.md: whisper.cpp ROCm build and systemd setup instructions

whisper-cpp-server running on beastix:8080 (ROCm0, gfx1030, RX 6800 XT)
@@ -103,6 +103,7 @@ async def _run_meeting_pipeline(cfg, wav_path, output_dir, instructions, diar_cf
         model_name=cfg["whisper"]["model"],
         device=cfg["whisper"]["device"],
         base_url=cfg["whisper"].get("base_url", ""),
+        backend=cfg["whisper"].get("backend", "openai"),
         with_segments=True,
     )
 )