feat: add whisper.cpp ROCm backend support for AMD GPU acceleration

- transcription.py: new _transcribe_remote_whispercpp() using /inference endpoint
- transcription.py: backend param routes to openai or whispercpp remote path
- config.py: whisper.backend default 'openai', alt 'whispercpp'
- pipeline.py: passes backend from config to transcribe_file
- settings: backend dropdown (OpenAI-compat / whisper.cpp)
- SETUP.md: whisper.cpp ROCm build and systemd setup instructions

whisper-cpp-server running on beastix:8080 (ROCm0, gfx1030, RX 6800 XT)
2026-04-02 01:33:32 +02:00
parent 56d41b8620
commit c7cad4bb2a
6 changed files with 75 additions and 19 deletions
@@ -13,6 +13,7 @@ DEFAULTS = {
"language": "de",
"device": "auto", # "auto" = use GPU if ROCm available, else CPU
"base_url": "",
"backend": "openai", # "openai" = OpenAI-compatible API, "whispercpp" = whisper.cpp /inference
},
"audio": {
"device": "",