Install the following dependencies:
- ffmpeg-python
- transformers
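The list above can be installed with pip. Note that the `import whisper` used below comes from the `openai-whisper` package on PyPI (an assumption added here, since the code will not run without it):

```shell
# Install the dependencies listed above, plus openai-whisper,
# which provides the `whisper` module used in the code below
pip install ffmpeg-python transformers openai-whisper
```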
Use the following code to run recognition:
import whisper

# "small.pt" is a local checkpoint file; passing a model name such as
# "small" instead lets whisper download the checkpoint automatically
model = whisper.load_model("small.pt")
result = model.transcribe("output_audio.wav")
print(result["text"])
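Besides the full `"text"`, the dict returned by `transcribe()` also carries per-segment timestamps under `"segments"`. The sketch below iterates a result of that shape; the hard-coded dict is illustrative stand-in data, not real model output:

```python
# Illustrative stand-in for model.transcribe(...) output:
# the real return value has the same "text" / "segments" keys.
result = {
    "text": " Hello world.",
    "segments": [
        {"start": 0.0, "end": 1.2, "text": " Hello"},
        {"start": 1.2, "end": 2.0, "text": " world."},
    ],
}

for seg in result["segments"]:
    # Print each segment with its start/end times in seconds
    print(f"[{seg['start']:.2f} -> {seg['end']:.2f}]{seg['text']}")
```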
A more low-level way to call the model (reusing the `model` loaded above):
# Load the audio and pad/trim it to fit a 30-second window
audio = whisper.load_audio("output.wav")
audio = whisper.pad_or_trim(audio)
# Compute the log-Mel spectrogram and move it to the model's device
mel = whisper.log_mel_spectrogram(audio).to(model.device)
# Detect the spoken language
_, probs = model.detect_language(mel)
print("Detected language: {}".format(max(probs, key=probs.get)))
# Decode the 30-second window
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)
print("You said:", result.text)
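`whisper.pad_or_trim` clips or zero-pads the waveform to exactly 30 seconds; since whisper works at 16 kHz, that is 480,000 samples. A minimal NumPy sketch of the same idea (my own illustration, not the library's actual code):

```python
import numpy as np

N_SAMPLES = 16000 * 30  # 30 seconds at 16 kHz, as whisper uses

def pad_or_trim(audio: np.ndarray, length: int = N_SAMPLES) -> np.ndarray:
    """Cut audio longer than `length`, zero-pad audio shorter than it."""
    if audio.shape[-1] > length:
        return audio[..., :length]
    pad = length - audio.shape[-1]
    return np.pad(audio, (0, pad))

short = np.ones(16000, dtype=np.float32)  # 1 second of audio
print(pad_or_trim(short).shape)           # -> (480000,)
```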
The model download URLs can be copied from whisper's __init__.py file; for example, the small model is at https://openaipublic.azureedge.net/main/whisper/models/9ecf779972d90ba49c06d968637d720dd632c55bbf19d441fb42bf17a411e794/small.pt.
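The long hex segment in that URL is the checkpoint's SHA-256 digest, which whisper compares against the downloaded file. A stdlib-only sketch of the same check for a manually downloaded `small.pt` (the verification code here is my own illustration):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The expected digest for small.pt is the hex segment of its URL:
EXPECTED_SMALL = "9ecf779972d90ba49c06d968637d720dd632c55bbf19d441fb42bf17a411e794"

# After downloading, compare:
# assert sha256_of_file("small.pt") == EXPECTED_SMALL
```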
From: https://www.cnblogs.com/commuter/p/18496268