
FAQ: A Quick Hands-On with LLaMA-Factory for Private Deployment and Efficient Fine-Tuning of Llama3


Preface

A previous post covered privately deploying the Llama3 model on the SCNet supercomputing internet platform using the heterogeneous AI accelerator card (64 GB VRAM, PCIE), and performing LoRA fine-tuning, inference, and weight merging on Llama3-8B-Instruct. For details, see the companion post: A Quick Hands-On with LLaMA-Factory for Private Deployment and Efficient Fine-Tuning of Llama3 (Sugon SCNet heterogeneous accelerator DCU).

Because quite a few problems came up during debugging, this post collects them as an FAQ. It records the troubleshooting approach rather than definitive fixes.

1. References

Sugon SCNet supercomputing internet platform: the domestic heterogeneous accelerator DCU

Getting started with local deployment and efficient fine-tuning of Llama3

2. Important note

When you run into package conflicts, running pip install --no-deps -e . resolves the vast majority of them.
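
For instance, a minimal sketch, assuming the command is run from the root of the cloned LLaMA-Factory repository:

# Install the project in editable mode but skip dependency resolution,
# so the preinstalled DTK builds of torch/vllm/lmdeploy are left untouched
pip install --no-deps -e .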

3. FAQ

Q: ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1. requires transformers==4.33.2, but you have transformers 4.43.3 which is incompatible.
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
transformers 4.33.2 requires tokenizers!=0.11.3,<0.14,>=0.11.1, but you have tokenizers 0.15.0 which is incompatible.
vllm 0.3.3+git3380931.abi0.dtk2404.torch2.1 requires transformers>=4.38.0, but you have transformers 4.33.2 which is incompatible.

Cause: the first error demands transformers==4.33.2; after installing that version, the second error appears and demands transformers>=4.38.0, which contradicts the first requirement.

Solution: see the FAQ entries below for how to work through this.
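
One way to see which installed packages pin transformers, and therefore which one has to give, is to ask pip itself; a hedged sketch (the package names are the ones from the conflict messages above):

# Report every dependency constraint that is currently broken in the environment
pip check
# Show what transformers requires and which installed packages require it
pip show transformers | grep -E "^(Version|Requires|Required-by)"
# Inspect the two packages that pin conflicting transformers versions
pip show lmdeploy vllm | grep -E "^(Name|Version|Requires)"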

Q: pip._vendor.packaging.version.InvalidVersion: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'

ERROR: Exception:
Traceback (most recent call last):
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
    status = _inner_run()
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
    return self.run(options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
    return func(self, options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 483, in run
    installed_versions[distribution.canonical_name] = distribution.version
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py", line 192, in version
    return parse_version(self._dist.version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 56, in parse
    return Version(version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 202, in __init__
    raise InvalidVersion(f"Invalid version: '{version}'")
pip._vendor.packaging.version.InvalidVersion: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'
(llama_factory_torch) root@notebook-1813389960667746306-scnlbe5oi5-50216:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install tokenizers==0.13
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting tokenizers==0.13
  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/cc/67/4c05eb8cbe8d20e52f5f47a9c591738d8cbc2a29e918813b7fcc431ec3db/tokenizers-0.13.0-cp310-cp310-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (7.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.0/7.0 MB 37.4 MB/s eta 0:00:00
WARNING: Error parsing dependencies of lmdeploy: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'
WARNING: Error parsing dependencies of mmcv: Invalid version: '2.0.1-gitc0ccf15.abi0.dtk2404.torch2.1.'
Installing collected packages: tokenizers
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.15.0
    Uninstalling tokenizers-0.15.0:
      Successfully uninstalled tokenizers-0.15.0
ERROR: Exception:
Traceback (most recent call last):
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
    status = _inner_run()
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
    return self.run(options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
    return func(self, options, args)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 483, in run
    installed_versions[distribution.canonical_name] = distribution.version
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py", line 192, in version
    return parse_version(self._dist.version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 56, in parse
    return Version(version)
  File "/opt/conda/envs/llama3/lib/python3.10/site-packages/pip/_vendor/packaging/version.py", line 202, in __init__
    raise InvalidVersion(f"Invalid version: '{version}'")
pip._vendor.packaging.version.InvalidVersion: Invalid version: '0.1.0-git782048c.abi0.dtk2404.torch2.1.'

Cause: the preinstalled lmdeploy build reports a version string ('0.1.0-git782048c.abi0.dtk2404.torch2.1.') that is not a valid PEP 440 version, so pip cannot parse it.

Solution: see the FAQ entries below for how to work through this.
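
To pin down exactly which installed distribution declares the version string pip cannot parse, one hedged approach is to grep the environment's package metadata (the site-packages path is taken from the traceback above; adjust it to your own environment):

# Find the .dist-info metadata that carries the unparsable version string
grep -l "git782048c" /opt/conda/envs/llama3/lib/python3.10/site-packages/*.dist-info/METADATA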

Q: Version compatibility problems

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install -r requirements.txt
...
Installing collected packages: pydub, websockets, urllib3, tomlkit, shtab, semantic-version, scipy, ruff, importlib-resources, ffmpy, docstring-parser, aiofiles, tyro, sse-starlette, tokenizers, gradio-client, transformers, trl, peft, gradio
  Attempting uninstall: websockets
    Found existing installation: websockets 12.0
    Uninstalling websockets-12.0:
      Successfully uninstalled websockets-12.0
  Attempting uninstall: urllib3
    Found existing installation: urllib3 1.26.13
    Uninstalling urllib3-1.26.13:
      Successfully uninstalled urllib3-1.26.13
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.15.0
    Uninstalling tokenizers-0.15.0:
      Successfully uninstalled tokenizers-0.15.0
  Attempting uninstall: transformers
    Found existing installation: transformers 4.38.0
    Uninstalling transformers-4.38.0:
      Successfully uninstalled transformers-4.38.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1. requires transformers==4.33.2, but you have transformers 4.43.3 which is incompatible.

Cause: lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1. conflicts with transformers: it requires transformers==4.33.2, while the LLaMA-Factory project requires transformers>=4.41.2. The plan is therefore to upgrade lmdeploy so that it works with a newer transformers.

Solution: search the 光合社区 (Hygon developer community) for a newer lmdeploy wheel, download it, and install it. Using lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0-cp310-cp310-manylinux_2_31_x86_64.whl as an example, try installing lmdeploy-0.2.6:

root@notebook-1813389960667746306-scnlbe5oi5-17811:~# pip list | grep lmdeploy
lmdeploy                       0.1.0-git782048c.abi0.dtk2404.torch2.1.
(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install  lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0-cp310-cp310-manylinux_2_31_x86_64.whl
...
Installing collected packages: shortuuid, tokenizers, transformers, peft, lmdeploy
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.19.1
    Uninstalling tokenizers-0.19.1:
      Successfully uninstalled tokenizers-0.19.1
  Attempting uninstall: transformers
    Found existing installation: transformers 4.43.3
    Uninstalling transformers-4.43.3:
      Successfully uninstalled transformers-4.43.3
  Attempting uninstall: peft
    Found existing installation: peft 0.12.0
    Uninstalling peft-0.12.0:
      Successfully uninstalled peft-0.12.0
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.1.0-git782048c.abi0.dtk2404.torch2.1.
    Uninstalling lmdeploy-0.1.0-git782048c.abi0.dtk2404.torch2.1.:
      Successfully uninstalled lmdeploy-0.1.0-git782048c.abi0.dtk2404.torch2.1.
Successfully installed lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0 peft-0.9.0 shortuuid-1.0.13 tokenizers-0.15.2 transformers-4.38.1

lmdeploy-0.2.6 installed successfully with no errors, but transformers was downgraded to transformers-4.38.1 in the process.
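
A quick way to confirm what actually ended up installed after the wheel swap is to list the relevant packages:

pip list | grep -E "lmdeploy|transformers|tokenizers|peft"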

Restarting the service surfaces the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py \
>     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" \
>     --template llama3 \
>     --infer_backend vllm \
>     --vllm_enforce_eager
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 21, in <module>
    from . import launcher
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/launcher.py", line 15, in <module>
    from llamafactory.train.tuner import run_exp
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/train/tuner.py", line 25, in <module>
    from ..hparams import get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 20, in <module>
    from .parser import get_eval_args, get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 45, in <module>
    check_dependencies()
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/extras/misc.py", line 82, in check_dependencies
    require_version("transformers>=4.41.2", "To fix: pip install transformers>=4.41.2")
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
    raise ImportError(
ImportError: transformers>=4.41.2 is required for a normal functioning of this module, but found transformers==4.38.1.
To fix: pip install transformers>=4.41.2

Solution: upgrade transformers, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install -U transformers
...
Installing collected packages: tokenizers, transformers
  Attempting uninstall: tokenizers
    Found existing installation: tokenizers 0.15.2
    Uninstalling tokenizers-0.15.2:
      Successfully uninstalled tokenizers-0.15.2
  Attempting uninstall: transformers
    Found existing installation: transformers 4.38.1
    Uninstalling transformers-4.38.1:
      Successfully uninstalled transformers-4.38.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0 requires transformers<=4.38.1,>=4.33.0, but you have transformers 4.43.3 which is incompatible.
Successfully installed tokenizers-0.19.1 transformers-4.43.3

Cause: lmdeploy 0.2.6 conflicts with transformers: it requires transformers<=4.38.1,>=4.33.0, while the LLaMA-Factory project requires transformers>=4.41.2. The plan is therefore to upgrade lmdeploy again so that it works with the newer transformers.

Solution: upgrade lmdeploy:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install -U lmdeploy
...
Installing collected packages: nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cuda-runtime-cu12, nvidia-cublas-cu12, lmdeploy
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0
    Uninstalling lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0:
      Successfully uninstalled lmdeploy-0.2.6+das1.1.git6ba90df.abi1.dtk2404.torch2.1.0
Successfully installed lmdeploy-0.5.2.post1 nvidia-cublas-cu12-12.5.3.2 nvidia-cuda-runtime-cu12-12.5.82 nvidia-curand-cu12-10.3.6.82 nvidia-nccl-cu12-2.22.3

lmdeploy-0.5.2 installed successfully with no errors.

Restarting the service surfaces the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct"     --template llama3     --infer_backend vllm     --vllm_enforce_eager
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 21, in <module>
    from . import launcher
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/launcher.py", line 15, in <module>
    from llamafactory.train.tuner import run_exp
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/train/tuner.py", line 25, in <module>
    from ..hparams import get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 20, in <module>
    from .parser import get_eval_args, get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 45, in <module>
    check_dependencies()
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/extras/misc.py", line 85, in check_dependencies
    require_version("peft>=0.11.1", "To fix: pip install peft>=0.11.1")
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
    raise ImportError(
ImportError: peft>=0.11.1 is required for a normal functioning of this module, but found peft==0.9.0.
To fix: pip install peft>=0.11.1

Solution: install peft==0.11.1:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install peft==0.11.1
...
Installing collected packages: peft
  Attempting uninstall: peft
    Found existing installation: peft 0.12.0
    Uninstalling peft-0.12.0:
      Successfully uninstalled peft-0.12.0
Successfully installed peft-0.11.1

peft==0.11.1 installed successfully with no errors.

Restarting the service surfaces the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct"     --template llama3     --infer_backend vllm     --vllm_enforce_eager
[2024-07-31 15:23:04,562] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 22, in <module>
    from .api.app import run_api
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/api/app.py", line 21, in <module>
    from ..chat import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/__init__.py", line 16, in <module>
    from .chat_model import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 26, in <module>
    from .vllm_engine import VllmEngine
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/vllm_engine.py", line 37, in <module>
    from vllm.sequence import MultiModalData
ImportError: cannot import name 'MultiModalData' from 'vllm.sequence' (/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/sequence.py)

See the FAQ entry below for how to resolve this.

Q: ImportError: cannot import name 'MultiModalData' from 'vllm.sequence'

Running api.py (or webui.py) from a fresh checkout of the latest code fails with the same message: ImportError: cannot import name 'MultiModalData' from 'vllm.sequence' (/usr/local/lib/python3.10/dist-packages/vllm/sequence.py) #3645

ImportError: cannot import name 'MultiModalData' from 'vllm.sequence'

Cause: the installed vllm is either too new or too old; the LLaMA-Factory project requires at least vllm==0.4.3.
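
A quick, hedged check of whether the installed vllm still exposes the symbol that LLaMA-Factory imports (it raises the same ImportError on incompatible versions):

# Print the installed vllm version, then try the import that webui.py fails on
python -c "import vllm; print(vllm.__version__)"
python -c "from vllm.sequence import MultiModalData; print('MultiModalData is available')"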

Solution: taking the too-new case as an example, downgrade vllm from vllm==0.5.0 to vllm==0.4.3, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.3
...
Installing collected packages: nvidia-ml-py, triton, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, llvmlite, lark, joblib, interegular, distro, diskcache, cmake, cloudpickle, nvidia-cusparse-cu12, nvidia-cudnn-cu12, numba, prometheus-fastapi-instrumentator, openai, nvidia-cusolver-cu12, lm-format-enforcer, torch, xformers, vllm-flash-attn, outlines, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.1.0+git3841f975.abi0.dtk2404
    Uninstalling triton-2.1.0+git3841f975.abi0.dtk2404:
      Successfully uninstalled triton-2.1.0+git3841f975.abi0.dtk2404
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.22.3
    Uninstalling nvidia-nccl-cu12-2.22.3:
      Successfully uninstalled nvidia-nccl-cu12-2.22.3
  Attempting uninstall: nvidia-curand-cu12
    Found existing installation: nvidia-curand-cu12 10.3.6.82
    Uninstalling nvidia-curand-cu12-10.3.6.82:
      Successfully uninstalled nvidia-curand-cu12-10.3.6.82
  Attempting uninstall: nvidia-cuda-runtime-cu12
    Found existing installation: nvidia-cuda-runtime-cu12 12.5.82
    Uninstalling nvidia-cuda-runtime-cu12-12.5.82:
      Successfully uninstalled nvidia-cuda-runtime-cu12-12.5.82
  Attempting uninstall: nvidia-cublas-cu12
    Found existing installation: nvidia-cublas-cu12 12.5.3.2
    Uninstalling nvidia-cublas-cu12-12.5.3.2:
      Successfully uninstalled nvidia-cublas-cu12-12.5.3.2
  Attempting uninstall: torch
    Found existing installation: torch 2.1.0+git00661e0.abi0.dtk2404
    Uninstalling torch-2.1.0+git00661e0.abi0.dtk2404:
      Successfully uninstalled torch-2.1.0+git00661e0.abi0.dtk2404
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.25+gitd11e899.abi0.dtk2404.torch2.1
    Uninstalling xformers-0.0.25+gitd11e899.abi0.dtk2404.torch2.1:
      Successfully uninstalled xformers-0.0.25+gitd11e899.abi0.dtk2404.torch2.1
  Attempting uninstall: vllm
    Found existing installation: vllm 0.3.3+git3380931.abi0.dtk2404.torch2.1
    Uninstalling vllm-0.3.3+git3380931.abi0.dtk2404.torch2.1:
      Successfully uninstalled vllm-0.3.3+git3380931.abi0.dtk2404.torch2.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.2.post1 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.2.post1 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed cloudpickle-3.0.0 cmake-3.30.1 diskcache-5.6.3 distro-1.9.0 interegular-0.3.3 joblib-1.4.2 lark-1.1.9 llvmlite-0.43.0 lm-format-enforcer-0.10.1 numba-0.60.0 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-ml-py-12.555.43 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.5.82 nvidia-nvtx-cu12-12.1.105 openai-1.37.1 outlines-0.0.34 prometheus-fastapi-instrumentator-7.0.0 torch-2.3.0 triton-2.3.0 vllm-0.4.3 vllm-flash-attn-2.5.8.post2 xformers-0.0.26.post1

Solution: downgrade torch from torch 2.3.0 to torch 2.1.0, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install torch==2.1.0
...
Installing collected packages: triton, nvidia-nccl-cu12, torch
  Attempting uninstall: triton
    Found existing installation: triton 2.3.0
    Uninstalling triton-2.3.0:
      Successfully uninstalled triton-2.3.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.20.5
    Uninstalling nvidia-nccl-cu12-2.20.5:
      Successfully uninstalled nvidia-nccl-cu12-2.20.5
  Attempting uninstall: torch
    Found existing installation: torch 2.3.0
    Uninstalling torch-2.3.0:
      Successfully uninstalled torch-2.3.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.4.3 requires torch==2.3.0, but you have torch 2.1.0 which is incompatible.
vllm-flash-attn 2.5.8.post2 requires torch==2.3.0, but you have torch 2.1.0 which is incompatible.
xformers 0.0.26.post1 requires torch==2.3.0, but you have torch 2.1.0 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.18.1 torch-2.1.0 triton-2.1.0

Solution: downgrade vllm from vllm 0.4.3 to vllm 0.4.2, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.2
...
Installing collected packages: vllm-nccl-cu12, triton, nvidia-nccl-cu12, tiktoken, torch, lm-format-enforcer, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.1.0
    Uninstalling triton-2.1.0:
      Successfully uninstalled triton-2.1.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.18.1
    Uninstalling nvidia-nccl-cu12-2.18.1:
      Successfully uninstalled nvidia-nccl-cu12-2.18.1
  Attempting uninstall: tiktoken
    Found existing installation: tiktoken 0.7.0
    Uninstalling tiktoken-0.7.0:
      Successfully uninstalled tiktoken-0.7.0
  Attempting uninstall: torch
    Found existing installation: torch 2.1.0
    Uninstalling torch-2.1.0:
      Successfully uninstalled torch-2.1.0
  Attempting uninstall: lm-format-enforcer
    Found existing installation: lm-format-enforcer 0.10.1
    Uninstalling lm-format-enforcer-0.10.1:
      Successfully uninstalled lm-format-enforcer-0.10.1
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.3
    Uninstalling vllm-0.4.3:
      Successfully uninstalled vllm-0.4.3
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.2.post1 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.2.post1 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed lm-format-enforcer-0.9.8 nvidia-nccl-cu12-2.20.5 tiktoken-0.6.0 torch-2.3.0 triton-2.3.0 vllm-0.4.2 vllm-nccl-cu12-2.18.1.0.4.0

Solution: downgrade vllm from vllm 0.4.2 to vllm 0.4.1, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.1
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, xformers, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.3.0
    Uninstalling triton-2.3.0:
      Successfully uninstalled triton-2.3.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.20.5
    Uninstalling nvidia-nccl-cu12-2.20.5:
      Successfully uninstalled nvidia-nccl-cu12-2.20.5
  Attempting uninstall: torch
    Found existing installation: torch 2.3.0
    Uninstalling torch-2.3.0:
      Successfully uninstalled torch-2.3.0
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.26.post1
    Uninstalling xformers-0.0.26.post1:
      Successfully uninstalled xformers-0.0.26.post1
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.2
    Uninstalling vllm-0.4.2:
      Successfully uninstalled vllm-0.4.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm-flash-attn 2.5.8.post2 requires torch==2.3.0, but you have torch 2.2.1 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.19.3 torch-2.2.1 triton-2.2.0 vllm-0.4.1 xformers-0.0.25

Solution: downgrade vllm-flash-attn from vllm-flash-attn 2.5.8.post2 to vllm-flash-attn 2.5.6, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm-flash-attn==2.5.6
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, vllm-flash-attn
  Attempting uninstall: triton
    Found existing installation: triton 2.2.0
    Uninstalling triton-2.2.0:
      Successfully uninstalled triton-2.2.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.19.3
    Uninstalling nvidia-nccl-cu12-2.19.3:
      Successfully uninstalled nvidia-nccl-cu12-2.19.3
  Attempting uninstall: torch
    Found existing installation: torch 2.2.1
    Uninstalling torch-2.2.1:
      Successfully uninstalled torch-2.2.1
  Attempting uninstall: vllm-flash-attn
    Found existing installation: vllm-flash-attn 2.5.8.post2
    Uninstalling vllm-flash-attn-2.5.8.post2:
      Successfully uninstalled vllm-flash-attn-2.5.8.post2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.4.1 requires torch==2.2.1, but you have torch 2.1.2 which is incompatible.
xformers 0.0.25 requires torch==2.2.1, but you have torch 2.1.2 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.18.1 torch-2.1.2 triton-2.1.0 vllm-flash-attn-2.5.6

Solution: downgrade vllm from vllm 0.4.1 to vllm 0.4.0:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.0
...
Installing collected packages: xformers, vllm
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.25
    Uninstalling xformers-0.0.25:
      Successfully uninstalled xformers-0.0.25
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.1
    Uninstalling vllm-0.4.1:
      Successfully uninstalled vllm-0.4.1
Successfully installed vllm-0.4.0 xformers-0.0.23.post1

vllm 0.4.0 installed successfully with no errors.

Restarting the service surfaces the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py \
>     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" \
>     --template llama3 \
>     --infer_backend vllm \
>     --vllm_enforce_eager
No ROCm runtime is found, using ROCM_HOME='/opt/dtk'
/opt/conda/envs/llama_factory/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 'libc10_hip.so: cannot open shared object file: No such file or directory'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-07-31 15:52:48,647] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 22, in <module>
    from .api.app import run_api
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/api/app.py", line 21, in <module>
    from ..chat import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/__init__.py", line 16, in <module>
    from .chat_model import ChatModel
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 26, in <module>
    from .vllm_engine import VllmEngine
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/chat/vllm_engine.py", line 29, in <module>
    from vllm import AsyncEngineArgs, AsyncLLMEngine, RequestOutput, SamplingParams
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/__init__.py", line 4, in <module>
    from vllm.engine.async_llm_engine import AsyncLLMEngine
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 12, in <module>
    from vllm.engine.llm_engine import LLMEngine
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 16, in <module>
    from vllm.model_executor.model_loader import get_architecture_class_name
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/model_executor/model_loader.py", line 10, in <module>
    from vllm.model_executor.models.llava import LlavaForConditionalGeneration
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/model_executor/models/llava.py", line 11, in <module>
    from vllm.model_executor.layers.activation import get_act_fn
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/vllm/model_executor/layers/activation.py", line 9, in <module>
    from vllm._C import ops
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

See the FAQ entry below for how to resolve this.

Q: ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

ImportError: libcuda.so.1: cannot open shared object file: No such file or directory

Search for the libcuda.so.1 file:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# find / -name "libcuda.so.1"
find: '/proc/1/map_files': Operation not permitted
find: '/proc/13/map_files': Operation not permitted
find: '/proc/45/map_files': Operation not permitted
find: '/proc/116/map_files': Operation not permitted
find: '/proc/118/map_files': Operation not permitted
find: '/proc/120/map_files': Operation not permitted
find: '/proc/121/map_files': Operation not permitted
find: '/proc/5527/map_files': Operation not permitted
find: '/proc/5529/map_files': Operation not permitted
find: '/proc/5531/map_files': Operation not permitted
find: '/proc/6148/map_files': Operation not permitted
find: '/proc/24592/map_files': Operation not permitted
find: '/proc/24970/map_files': Operation not permitted
find: '/proc/24971/map_files': Operation not permitted

Cause: the file is not found anywhere on the system; the suspicion is that this is a vllm version problem.
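
Before reinstalling anything, one possible hedged diagnostic is to check whether the torch now in the environment is a CUDA build, which expects libcuda.so.1 from an NVIDIA driver, rather than the DTK/ROCm build shipped with the DCU image:

# A CUDA build prints a CUDA version and hip: None; a DTK/ROCm build prints a HIP version
python -c "import torch; print(torch.__version__, 'cuda:', torch.version.cuda, 'hip:', getattr(torch.version, 'hip', None))"
# Check whether the dynamic linker can see libcuda at all
ldconfig -p | grep -i libcuda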

Solution: reinstall vllm==0.4.3, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install vllm==0.4.3
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, lm-format-enforcer, xformers, vllm-flash-attn, vllm
  Attempting uninstall: triton
    Found existing installation: triton 2.1.0
    Uninstalling triton-2.1.0:
      Successfully uninstalled triton-2.1.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.18.1
    Uninstalling nvidia-nccl-cu12-2.18.1:
      Successfully uninstalled nvidia-nccl-cu12-2.18.1
  Attempting uninstall: torch
    Found existing installation: torch 2.1.2
    Uninstalling torch-2.1.2:
      Successfully uninstalled torch-2.1.2
  Attempting uninstall: lm-format-enforcer
    Found existing installation: lm-format-enforcer 0.9.8
    Uninstalling lm-format-enforcer-0.9.8:
      Successfully uninstalled lm-format-enforcer-0.9.8
  Attempting uninstall: xformers
    Found existing installation: xformers 0.0.23.post1
    Uninstalling xformers-0.0.23.post1:
      Successfully uninstalled xformers-0.0.23.post1
  Attempting uninstall: vllm-flash-attn
    Found existing installation: vllm-flash-attn 2.5.6
    Uninstalling vllm-flash-attn-2.5.6:
      Successfully uninstalled vllm-flash-attn-2.5.6
  Attempting uninstall: vllm
    Found existing installation: vllm 0.4.0
    Uninstalling vllm-0.4.0:
      Successfully uninstalled vllm-0.4.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.2.post1 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.2.post1 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed lm-format-enforcer-0.10.1 nvidia-nccl-cu12-2.20.5 torch-2.3.0 triton-2.3.0 vllm-0.4.3 vllm-flash-attn-2.5.8.post2 xformers-0.0.26.post1

Cause: lmdeploy 0.5.2.post1 conflicts with torch (it requires torch<=2.2.2,>=2.0.0, but torch 2.3.0 is installed) and with triton (it requires triton<=2.2.0,>=2.1.0, but triton 2.3.0 is installed). In theory lmdeploy should be upgraded to match the torch version, but it is already the latest release, so the next attempt is to downgrade lmdeploy instead.

Solution: downgrade lmdeploy from lmdeploy 0.5.2.post1 to lmdeploy 0.5.0, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install lmdeploy==0.5.0
...
Installing collected packages: triton, nvidia-nccl-cu12, torch, lmdeploy
  Attempting uninstall: triton
    Found existing installation: triton 2.3.0
    Uninstalling triton-2.3.0:
      Successfully uninstalled triton-2.3.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.20.5
    Uninstalling nvidia-nccl-cu12-2.20.5:
      Successfully uninstalled nvidia-nccl-cu12-2.20.5
  Attempting uninstall: torch
    Found existing installation: torch 2.3.0
    Uninstalling torch-2.3.0:
      Successfully uninstalled torch-2.3.0
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.5.2.post1
    Uninstalling lmdeploy-0.5.2.post1:
      Successfully uninstalled lmdeploy-0.5.2.post1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
vllm 0.4.3 requires torch==2.3.0, but you have torch 2.2.2 which is incompatible.
vllm-flash-attn 2.5.8.post2 requires torch==2.3.0, but you have torch 2.2.2 which is incompatible.
xformers 0.0.26.post1 requires torch==2.3.0, but you have torch 2.2.2 which is incompatible.

Solution: upgrade torch from torch 2.2.2 to torch 2.3.0, which leads to the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install torch==2.3.0
...
Installing collected packages: triton, nvidia-nccl-cu12, torch
  Attempting uninstall: triton
    Found existing installation: triton 2.2.0
    Uninstalling triton-2.2.0:
      Successfully uninstalled triton-2.2.0
  Attempting uninstall: nvidia-nccl-cu12
    Found existing installation: nvidia-nccl-cu12 2.19.3
    Uninstalling nvidia-nccl-cu12-2.19.3:
      Successfully uninstalled nvidia-nccl-cu12-2.19.3
  Attempting uninstall: torch
    Found existing installation: torch 2.2.2
    Uninstalling torch-2.2.2:
      Successfully uninstalled torch-2.2.2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
lmdeploy 0.5.0 requires torch<=2.2.2,>=2.0.0, but you have torch 2.3.0 which is incompatible.
lmdeploy 0.5.0 requires triton<=2.2.0,>=2.1.0; sys_platform == "linux", but you have triton 2.3.0 which is incompatible.
Successfully installed nvidia-nccl-cu12-2.20.5 torch-2.3.0 triton-2.3.0

Solution: upgrade lmdeploy from lmdeploy 0.5.0 to lmdeploy 0.5.1:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# pip install lmdeploy==0.5.1
...
Installing collected packages: lmdeploy
  Attempting uninstall: lmdeploy
    Found existing installation: lmdeploy 0.5.0
    Uninstalling lmdeploy-0.5.0:
      Successfully uninstalled lmdeploy-0.5.0
Successfully installed lmdeploy-0.5.1

lmdeploy-0.5.1 installed successfully with no errors.

Restarting the service surfaces the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py \
>     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct" \
>     --template llama3 \
>     --infer_backend vllm \
>     --vllm_enforce_eager
Traceback (most recent call last):
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/webui.py", line 17, in <module>
    from llamafactory.webui.interface import create_ui
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/__init__.py", line 38, in <module>
    from .cli import VERSION
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/cli.py", line 21, in <module>
    from . import launcher
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/launcher.py", line 15, in <module>
    from llamafactory.train.tuner import run_exp
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/train/tuner.py", line 25, in <module>
    from ..hparams import get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/__init__.py", line 20, in <module>
    from .parser import get_eval_args, get_infer_args, get_train_args
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/hparams/parser.py", line 45, in <module>
    check_dependencies()
  File "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/src/llamafactory/extras/misc.py", line 85, in check_dependencies
    require_version("peft>=0.11.1", "To fix: pip install peft>=0.11.1")
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 111, in require_version
    _compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
  File "/opt/conda/envs/llama_factory/lib/python3.10/site-packages/transformers/utils/versions.py", line 44, in _compare_versions
    raise ImportError(
ImportError: peft>=0.11.1 is required for a normal functioning of this module, but found peft==0.9.0.
To fix: pip install peft>=0.11.1

Solution: upgrade to peft==0.11.1:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/cache# pip install peft==0.11.1
...
Installing collected packages: peft
  Attempting uninstall: peft
    Found existing installation: peft 0.9.0
    Uninstalling peft-0.9.0:
      Successfully uninstalled peft-0.9.0
Successfully installed peft-0.11.1

peft-0.11.1 installed successfully with no errors.
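
For reference, the version set that the logs above converge on before the service finally starts is roughly vllm 0.4.3, torch 2.3.0, lmdeploy 0.5.1, transformers 4.43.3, and peft 0.11.1. A hedged one-shot sketch of that pin set follows; there is no guarantee it resolves cleanly on a different image, and the DTK-specific builds still have to come from the platform's own wheels:

# Pin the combination reported by the logs above (adjust for your DTK/DCU builds)
pip install "vllm==0.4.3" "torch==2.3.0" "lmdeploy==0.5.1" "transformers==4.43.3" "peft==0.11.1"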

Restarting the service surfaces the next problem:

(llama_factory) root@notebook-1813389960667746306-scnlbe5oi5-17811:/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory# python src/webui.py     --model_name_or_path "/public/home/scnlbe5oi5/Downloads/models/LLaMA-Factory/models/Meta-Llama-3-8B-Instruct"     --template llama3     --infer_backend vllm     --vllm_enforce_eager
No ROCm runtime is found, using ROCM_HOME='/opt/dtk'
/opt/conda/envs/llama_factory/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 'libc10_hip.so: cannot open shared object file: No such file or directory'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
[2024-07-31 16:58:35,443] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
gradio_share: False
Running on local URL:  http://127.0.0.1:7860

Could not create share link. Missing file: /opt/conda/envs/llama_factory/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2.

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps:

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /opt/conda/envs/llama_factory/lib/python3.10/site-packages/gradio

See the FAQ entry below for how to resolve this.

Q: Could not create share link. Missing file: /PATH/TO/gradio/frpc_linux_amd64_v0.2

【Gradio】Could not create share link


Could not create share link. Missing file: /opt/conda/envs/llama_factory_torch/lib/python3.11/site-packages/gradio/frpc_linux_amd64_v0.2. 

Please check your internet connection. This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: 

1. Download this file: https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64
2. Rename the downloaded file to: frpc_linux_amd64_v0.2
3. Move the file to this location: /opt/conda/envs/llama_factory_torch/lib/python3.11/site-packages/gradio
# Solution
1. Download the frpc binary
wget https://cdn-media.huggingface.co/frpc-gradio-0.2/frpc_linux_amd64

2. Rename it
mv frpc_linux_amd64 frpc_linux_amd64_v0.2

3. Copy it into the gradio package directory
cp frpc_linux_amd64_v0.2 /opt/conda/envs/llama_factory_torch/lib/python3.10/site-packages/gradio

4. Make it executable
chmod +x /opt/conda/envs/llama_factory_torch/lib/python3.10/site-packages/gradio/frpc_linux_amd64_v0.2

Q: Could not create share link. Please check your internet connection or our status page

Could not create share link. Please check your internet connection or our status page: https://status.gradio.app

Solution: make the frpc_linux_amd64_v0.2 file executable.

chmod +x /opt/conda/envs/llama_factory_torch/lib/python3.11/site-packages/gradio/frpc_linux_amd64_v0.2

From: https://blog.csdn.net/m0_37605642/article/details/140908621
