Deploying the Immich photo library on the DX4600
Steps
- Enable remote debugging on the DX4600
- Download docker-compose
- Download the Immich deployment files
- Edit the deployment file configuration
- Deploy
- Post-deployment configuration
1. Enable remote debugging
This step is simple. As shown in the screenshot, the verification code on that page is the SSH password. Connect to the UGREEN NAS with an SSH client as user root on port 922.
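For example, from another machine on the same LAN (the IP below is a placeholder; replace it with your NAS address, and enter the on-screen verification code as the password):
ssh -p 922 root@192.168.1.100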
2. Download docker-compose
Skip this step if docker-compose is already downloaded and working.
You can find a docker-compose binary, download it straight to the NAS, copy it to /usr/bin/, and give it execute permission.
Below is the file I am sharing. The link is time-limited; if it has expired, download the binary from the internet instead (a sketch is given at the end of this step).
https://web.ugreen.cloud/web/#/share/9da7a03728c24a95965242a749d65abb Access code: YB3F
Granting execute permission:
~# chmod +x /usr/bin/docker-compose
~# ls -hl /usr/bin/docker-compose
-rwxr-xr-x 1 root root 60.1M May 24 17:09 /usr/bin/docker-compose
Now running docker-compose prints its description and usage help, which means docker-compose is ready to use.
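If the share link has expired, here is a sketch of pulling the binary straight from the Docker Compose GitHub releases. The version number is only an example (check the releases page for the current one); linux-x86_64 matches the DX4600's Intel CPU.
# download the standalone binary to /usr/bin and make it executable
curl -L -o /usr/bin/docker-compose \
  https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-linux-x86_64
chmod +x /usr/bin/docker-compose
docker-compose version   # prints the version if the install worked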
3. Download the Immich docker-compose deployment files
Overview:
The official deployment consists of four docker-compose-related files:
.env is the environment variable file. It defines the path where photos uploaded through Immich are stored, and the external library path: photos that already exist on the NAS must sit under this external library path for Immich to scan them.
docker-compose.yml is the deployment file; if you do not enable hardware acceleration, it needs no changes.
The other two, hwaccel.ml.yml and hwaccel.transcoding.yml, configure hardware acceleration (an experimental feature). In my testing, smart search broke with hardware acceleration enabled: the logs showed errors whenever the Intel integrated GPU was used to run the AI models, so face recognition and semantic image search stopped working.
In practice, only the .env file and docker-compose.yml are needed to complete the deployment.
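A sketch of fetching those two files from the official release. The docker-compose.yml URL appears in the comment at the top of the file below; the env file is published as example.env and renamed to .env here. Treat the exact asset names as assumptions and verify them against the Immich install docs.
wget https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
wget -O .env https://github.com/immich-app/immich/releases/latest/download/example.env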
Environment variables:
UPLOAD_LOCATION is the path where photos uploaded through Immich are stored.
EXTERNAL_PATH is the external library path; set it to wherever your existing photos already live.
DB_DATA_LOCATION holds the data Immich generates while it runs; if you can, put it on an SSD.
CACHE_LOCATION is the Immich cache directory; an SSD path is recommended here as well.
Adjust the paths in the configuration files to match your own photo locations.
I created an immich folder on the SSD to hold the deployment files together with Immich's runtime data and cache,
i.e. /mnt/dm-0/.ugreen_nas/138716/Docker/immich
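Creating that folder plus the data and cache subdirectories used later (the path mirrors my setup; substitute your own):
mkdir -p /mnt/dm-0/.ugreen_nas/138716/Docker/immich/data
mkdir -p /mnt/dm-0/.ugreen_nas/138716/Docker/immich/cache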
Below is the .env file:
# You can find documentation for all the supported env variables at https://immich.app/docs/install/environment-variables
# The location where your uploaded files are stored
UPLOAD_LOCATION=/mnt/dm-5/.ugreen_nas/138716/DSM/homes/immich
EXTERNAL_PATH=/mnt/dm-5/.ugreen_nas/138716
# The location where your database files are stored
DB_DATA_LOCATION=/mnt/dm-0/.ugreen_nas/138716/Docker/immich/data
CACHE_LOCATION=/mnt/dm-0/.ugreen_nas/138716/Docker/immich/cache
# To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
TZ=Asia/Shanghai
# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=release
# Connection secret for postgres. You should change it to a random password
DB_PASSWORD=postgres
# The values below this line do not need to be changed
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
HF_ENDPOINT=https://hf-mirror.com
Below is the docker-compose.yml file:
#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#
name: immich
services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: quicksync # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    # devices:
    #   - /dev/dri:/dev/dri
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - ${EXTERNAL_PATH}:/usr/src/app/external
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - 2283:3001
    depends_on:
      - redis
      - database
    restart: always
  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}-openvino
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: openvino # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    # devices:
    #   - /dev/dri:/dev/dri
    volumes:
      - ${CACHE_LOCATION}:/cache
    env_file:
      - .env
    restart: always
  redis:
    container_name: immich_redis
    image: docker.io/redis:6.2-alpine@sha256:328fe6a5822256d065debb36617a8169dbfbd77b797c525288e465f56c1d392b
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always
  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    volumes:
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' || exit 1; Chksum="$$(psql --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' --tuples-only --no-align --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')"; echo "checksum failure count is $$Chksum"; [ "$$Chksum" = '0' ] || exit 1
      interval: 5m
      # start_interval: 30s
      start_period: 5m
    command: ["postgres", "-c" ,"shared_preload_libraries=vectors.so", "-c", 'search_path="$$user", public, vectors', "-c", "logging_collector=on", "-c", "max_wal_size=2GB", "-c", "shared_buffers=512MB", "-c", "wal_compression=on"]
    restart: always
volumes:
  model-cache:
Below is hwaccel.ml.yml; you do not need this file if you skip hardware acceleration.
# Configurations for hardware-accelerated machine learning
# If using Unraid or another platform that doesn't allow multiple Compose files,
# you can inline the config for a backend by copying its contents
# into the immich-machine-learning service in the docker-compose.yml file.
# See https://immich.app/docs/features/ml-hardware-acceleration for info on usage.
services:
  armnn:
    devices:
      - /dev/mali0:/dev/mali0
    volumes:
      - /lib/firmware/mali_csffw.bin:/lib/firmware/mali_csffw.bin:ro # Mali firmware for your chipset (not always required depending on the driver)
      - /usr/lib/libmali.so:/usr/lib/libmali.so:ro # Mali driver for your chipset (always required)
  cpu: {}
  cuda:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
  openvino:
    device_cgroup_rules:
      - 'c 189:* rmw'
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /dev/bus/usb:/dev/bus/usb
  openvino-wsl:
    devices:
      - /dev/dri:/dev/dri
      - /dev/dxg:/dev/dxg
    volumes:
      - /dev/bus/usb:/dev/bus/usb
      - /usr/lib/wsl:/usr/lib/wsl
Below is hwaccel.transcoding.yml; you do not need this file either if you skip hardware acceleration.
# Configurations for hardware-accelerated transcoding
# If using Unraid or another platform that doesn't allow multiple Compose files,
# you can inline the config for a backend by copying its contents
# into the immich-microservices service in the docker-compose.yml file.
# See https://immich.app/docs/features/hardware-transcoding for more info on using hardware transcoding.
services:
  cpu: {}
  nvenc:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities:
                - gpu
                - compute
                - video
  quicksync:
    devices:
      - /dev/dri:/dev/dri
  rkmpp:
    security_opt: # enables full access to /sys and /proc, still far better than privileged: true
      - systempaths=unconfined
      - apparmor=unconfined
    group_add:
      - video
    devices:
      - /dev/rga:/dev/rga
      - /dev/dri:/dev/dri
      - /dev/dma_heap:/dev/dma_heap
      - /dev/mpp_service:/dev/mpp_service
      #- /dev/mali0:/dev/mali0 # only required to enable OpenCL-accelerated HDR -> SDR tonemapping
    volumes:
      #- /etc/OpenCL:/etc/OpenCL:ro # only required to enable OpenCL-accelerated HDR -> SDR tonemapping
      #- /usr/lib/aarch64-linux-gnu/libmali.so.1:/usr/lib/aarch64-linux-gnu/libmali.so.1:ro # only required to enable OpenCL-accelerated HDR -> SDR tonemapping
  vaapi:
    devices:
      - /dev/dri:/dev/dri
  vaapi-wsl: # use this for VAAPI if you're running Immich in WSL2
    devices:
      - /dev/dri:/dev/dri
    volumes:
      - /usr/lib/wsl:/usr/lib/wsl
    environment:
      - LD_LIBRARY_PATH=/usr/lib/wsl/lib
      - LIBVA_DRIVER_NAME=d3d12
4. Edit the deployment file configuration
Without hardware acceleration, you only need to copy the .env file and docker-compose.yml onto the NAS.
Adjust the path settings in the .env file:
UPLOAD_LOCATION=/mnt/dm-5/.ugreen_nas/138716/DSM/homes/immich # change this to your actual upload path
EXTERNAL_PATH=/mnt/dm-5/.ugreen_nas/138716 # change this to the path of your existing photos
# The location where your database files are stored
DB_DATA_LOCATION=/mnt/dm-0/.ugreen_nas/138716/Docker/immich/data # change this to the actual data path; create the directory
CACHE_LOCATION=/mnt/dm-0/.ugreen_nas/138716/Docker/immich/cache # change this to the actual cache path; create the directory
The files, at a glance:
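In place of the original screenshot, roughly what the deployment directory should contain at this point (assuming the data and cache folders were created as in step 3):
~# ls -a /mnt/dm-0/.ugreen_nas/138716/Docker/immich
.  ..  .env  cache  data  docker-compose.yml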
5. Deploy
In the directory containing docker-compose.yml, run docker-compose up -d.
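Put together as commands (the directory is the example path from earlier; the service name comes from the compose file above):
cd /mnt/dm-0/.ugreen_nas/138716/Docker/immich
docker-compose up -d                   # pull the images if needed and start the stack
docker-compose ps                      # all services should end up running/healthy
docker-compose logs -f immich-server   # follow the startup logs; Ctrl+C to stop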
This pulls the images and creates the containers. I had already pulled the images, so my containers were simply recreated. As for the current trouble pulling from Docker Hub, I am using the 1Panel mirror: https://docker.1panel.live
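A hedged sketch of pointing the Docker daemon at that mirror. On a stock Docker install the setting lives in /etc/docker/daemon.json; UGOS may keep this file elsewhere or manage it from its UI, so treat the path and the restart command as assumptions, and merge with any existing config rather than overwriting it.
# back up any existing config first, then write the mirror setting
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://docker.1panel.live"]
}
EOF
# restart the Docker daemon so the mirror takes effect (command depends on the firmware)
systemctl restart docker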
Once deployment finishes and all the Immich containers are up, open http://ip:2283 to reach the Immich console.
6. Post-deployment configuration
Log in.
The main thing to call out is the machine-learning AI models. They are large (several GB) and often cannot be downloaded over an ordinary connection, so you can use an offline package instead. The smart search model used here is XLM-Roberta-Large-Vit-B-16Plus. After you select it and save, Immich downloads the model from the official repository, which may fail for network reasons. In that case, download the offline package and place it in the cache directory, overwriting what is there.
Offline package link: https://www.123pan.com/s/WXqA-EGL6d.html
Author: 大志若勇 https://www.bilibili.com/read/cv33865669/ Source: bilibili
Download the offline package and extract it; you will get the folders shown in the original screenshot. Copy those three folders into the cache path.
With that, the AI models load, and face recognition and semantic image search work.
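A sketch of that copy step plus a restart of the machine-learning container so it picks the models up from /cache. The extracted folder names depend on the offline package, so the source path below is a placeholder; the cache path is the example from my .env.
# copy the extracted model folders into CACHE_LOCATION (source path is a placeholder)
cp -r ./immich-model-offline/* /mnt/dm-0/.ugreen_nas/138716/Docker/immich/cache/
# restart the ML container so it loads the models from the cache
docker restart immich_machine_learning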