feat: add whisper.cpp ROCm backend support for AMD GPU acceleration

- transcription.py: new _transcribe_remote_whispercpp() using the /inference endpoint (see the sketch below)
- transcription.py: backend param routes requests to the openai or whispercpp remote path
- config.py: whisper.backend default 'openai', alt 'whispercpp'
- pipeline.py: passes backend from config to transcribe_file
- settings: backend dropdown (OpenAI-compat / whisper.cpp)
- SETUP.md: whisper.cpp ROCm build and systemd setup instructions

whisper-cpp-server running on beastix:8080 (ROCm0, gfx1030, RX 6800 XT)
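
A minimal sketch of what the new remote path could look like, assuming the
multipart form fields ("file", "temperature", "response_format") from the
whisper.cpp example server's documented /inference usage; the return shape,
timeout, and segment handling are illustrative, not this commit's actual code:

import requests

# Hypothetical sketch of _transcribe_remote_whispercpp() (transcription.py).
# Field names follow the whisper.cpp server's curl example; everything else
# is an assumption for illustration.
def _transcribe_remote_whispercpp(wav_path, base_url, with_segments=False):
    url = base_url.rstrip("/") + "/inference"
    with open(wav_path, "rb") as f:
        resp = requests.post(
            url,
            files={"file": f},
            data={
                "temperature": "0.0",
                # verbose_json mirrors the OpenAI-style reply and carries segments
                "response_format": "verbose_json" if with_segments else "json",
            },
            timeout=600,
        )
    resp.raise_for_status()
    payload = resp.json()
    text = payload.get("text", "").strip()
    segments = payload.get("segments", []) if with_segments else []
    return text, segments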
2026-04-02 01:33:32 +02:00
parent 56d41b8620
commit c7cad4bb2a
6 changed files with 75 additions and 19 deletions
pipeline.py (+1)
@@ -103,6 +103,7 @@ async def _run_meeting_pipeline(cfg, wav_path, output_dir, instructions, diar_cf
             model_name=cfg["whisper"]["model"],
             device=cfg["whisper"]["device"],
             base_url=cfg["whisper"].get("base_url", ""),
+            backend=cfg["whisper"].get("backend", "openai"),
             with_segments=True,
         )
     )
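
For context, the dispatch inside transcribe_file might look roughly like the
sketch below; only the backend values and the cfg defaults come from this
commit, and the _transcribe_remote_openai helper name is assumed:

# Hypothetical routing for the backend parameter (transcription.py).
def transcribe_file(wav_path, model_name, device, base_url="",
                    backend="openai", with_segments=False):
    if base_url:
        if backend == "whispercpp":
            return _transcribe_remote_whispercpp(wav_path, base_url, with_segments)
        # default: an OpenAI-compatible /v1/audio/transcriptions endpoint
        return _transcribe_remote_openai(wav_path, base_url, model_name, with_segments)
    # no base_url: fall back to local inference on `device`
    ...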