Error
[E:onnxruntime:Default, provider_bridge_ort.cc:1480 TryGetProviderInfo_CUDA] /onnxruntime_src/onnxruntime/core/session/provider_bridge_ort.cc:1193 onnxruntime::Provider& onnxruntime::ProviderLibrary::Get() [ONNXRuntimeError] : 1 : FAIL : Failed to load library libonnxruntime_providers_cuda.so with error: libcublasLt.so.11: cannot open shared object file: No such file or directory
2023-12-07 16:06:22.975254616 [W:onnxruntime:Default, onnxruntime_pybind_state.cc:747 CreateExecutionProviderInstance] Failed to create CUDAExecutionProvider. Please reference https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements to ensure all dependencies are met.
Cause:
The installed onnxruntime-gpu build was compiled against CUDA 11.x, so it requires CUDA 11.8 or earlier (libcublasLt.so.11 ships with CUDA 11) and cannot load the libraries from a CUDA 12 installation: https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
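A quick way to confirm the mismatch on the affected machine (a sketch; assumes nvcc and ldconfig are available, and output will vary per system):
# Show which CUDA toolkit version is currently installed
nvcc --version
# The CUDA execution provider is linked against the CUDA 11 cuBLAS runtime;
# on a machine that only has CUDA 12, this lookup typically comes up empty
ldconfig -p | grep libcublasLt.so.11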
Solution:
Reinstall CUDA 11.8:
https://developer.nvidia.com/cuda-11-8-0-download-archive
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda_11.8.0_520.61.05_linux.run
sudo sh cuda_11.8.0_520.61.05_linux.run
Choose Continue, and in the subsequent component selection do not install the driver; install only the CUDA Toolkit.
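Once the toolkit is installed, point the build tools and the dynamic loader at CUDA 11.8 (a minimal sketch, assuming the default install prefix /usr/local/cuda-11.8; add the two export lines to ~/.bashrc to make them persistent):
export PATH=/usr/local/cuda-11.8/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64:$LD_LIBRARY_PATH
Without the LD_LIBRARY_PATH entry, onnxruntime may still fail to find libcublasLt.so.11 at runtime unless the CUDA library directory is already registered with ldconfig.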
Install cuDNN:
https://developer.nvidia.com/rdp/cudnn-download
https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
On the download page, select Download cuDNN v8.9.7 (December 5th, 2023), for CUDA 11.x, and download the Local Installer for Linux x86_64 (Tar).
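Installing from the Tar package means copying the headers and libraries into the CUDA tree, following the install guide linked above (a sketch; the archive name below assumes the cuDNN 8.9.7 download for CUDA 11, so adjust it to the file you actually downloaded):
# Unpack the cuDNN archive
tar -xvf cudnn-linux-x86_64-8.9.7.29_cuda11-archive.tar.xz
# Copy headers and libraries into the CUDA 11.8 installation (-P keeps the library symlinks)
sudo cp cudnn-linux-x86_64-8.9.7.29_cuda11-archive/include/cudnn*.h /usr/local/cuda-11.8/include
sudo cp -P cudnn-linux-x86_64-8.9.7.29_cuda11-archive/lib/libcudnn* /usr/local/cuda-11.8/lib64
sudo chmod a+r /usr/local/cuda-11.8/include/cudnn*.h /usr/local/cuda-11.8/lib64/libcudnn*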
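Finally, verify that the CUDA provider loads (assuming onnxruntime-gpu is installed in the active Python environment):
# CUDAExecutionProvider should now appear in the list, and the warning above should be gone
python -c "import onnxruntime as ort; print(ort.get_available_providers())"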