feat: add whisper.cpp ROCm backend support for AMD GPU acceleration
- transcription.py: new _transcribe_remote_whispercpp() using /inference endpoint
- transcription.py: backend param routes to openai or whispercpp remote path
- config.py: whisper.backend default 'openai', alt 'whispercpp'
- pipeline.py: passes backend from config to transcribe_file
- settings: backend dropdown (OpenAI-compat / whisper.cpp)
- SETUP.md: whisper.cpp ROCm build and systemd setup instructions

whisper-cpp-server running on beastix:8080 (ROCm0, gfx1030, RX 6800 XT)
@@ -74,9 +74,16 @@
 <section>
 <h2>Verarbeitung</h2>
+<div class="field">
+<label>Whisper Backend</label>
+<select id="whisper-backend">
+<option value="openai">OpenAI-kompatibel (faster-whisper-server)</option>
+<option value="whispercpp">whisper.cpp Server</option>
+</select>
+</div>
 <div class="field">
 <label>Whisper Server URL (leer = lokal)</label>
-<input type="text" id="whisper-url" placeholder="http://beastix:8000">
+<input type="text" id="whisper-url" placeholder="http://beastix:8080">
 </div>
 <div class="field">
 <label>Whisper Modell</label>
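The settings above feed the dispatch the commit adds to transcription.py: pipeline.py passes the configured backend to transcribe_file, and an empty server URL means local ("leer = lokal" in the settings label). A hedged sketch of that routing; the three helpers here are trivial placeholders, and everything beyond the names stated in the commit message is an assumption:

```python
# Placeholder backends, standing in for the real implementations.
def _transcribe_remote_openai(path: str, url: str) -> str:
    return f"openai:{url}"        # OpenAI-compatible remote path


def _transcribe_remote_whispercpp(path: str, url: str) -> str:
    return f"whispercpp:{url}"    # whisper.cpp /inference remote path


def _transcribe_local(path: str) -> str:
    return "local"                # local whisper fallback


def transcribe_file(path: str, backend: str = "openai", server_url: str = "") -> str:
    """Route to local or remote transcription based on config values."""
    if not server_url:  # empty URL = local, matching the settings UI
        return _transcribe_local(path)
    if backend == "whispercpp":
        return _transcribe_remote_whispercpp(path, server_url)
    return _transcribe_remote_openai(path, server_url)
```

The default of "openai" mirrors the whisper.backend default in config.py, so existing setups keep their behavior unless the dropdown is changed.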