Running inference on the AX650 with ax-pipeline
Setting up the cross-compilation environment
- Clone the ax-pipeline source along with its submodules
git clone --recursive https://github.com/AXERA-TECH/ax-pipeline.git
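If the repository was cloned without --recursive, the submodules can still be pulled in afterwards with standard git (no ax-pipeline-specific steps assumed):
git submodule update --init --recursive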
- Download the BSP SDK
cd ax-pipeline
./download_ax_bsp.sh ax650
cd ax650n_bsp_sdk
wget https://github.com/ZHEQIUSHUI/assets/releases/download/ax650/drm.zip
mkdir third-party
unzip drm.zip -d third-party
cd ..
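The later build expects the DRM headers and libraries under the BSP SDK's third-party directory; a quick listing verifies that drm.zip extracted there (the exact contents depend on the archive, this is just a sanity check):
ls ax650n_bsp_sdk/third-party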
- Download OpenCV
mkdir 3rdparty
cd 3rdparty
wget https://github.com/ZHEQIUSHUI/assets/releases/download/ax650/libopencv-4.5.5-aarch64.zip
unzip libopencv-4.5.5-aarch64.zip
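The directory passed as OpenCV_DIR in the cmake step below must exist after extraction; assuming the archive unpacks into libopencv-4.5.5-aarch64, this check confirms the cmake config files are in place:
ls libopencv-4.5.5-aarch64/lib/cmake/opencv4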
- Install the cross-compilation toolchain
wget https://developer.arm.com/-/media/Files/downloads/gnu-a/9.2-2019.12/binrel/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu.tar.xz
tar -xvf gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu.tar.xz
export PATH=$PATH:$PWD/gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu/bin/
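A quick check that the cross compiler is now on PATH; note the export only lasts for the current shell session, so re-run it (or append it to ~/.bashrc) in new terminals:
aarch64-none-linux-gnu-gcc --version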
- Build from source
cd ax-pipeline
mkdir build
cd build
cmake -DAXERA_TARGET_CHIP=AX650 -DBSP_MSP_DIR=$PWD/../ax650n_bsp_sdk/msp/out -DOpenCV_DIR=$PWD/../3rdparty/libopencv-4.5.5-aarch64/lib/cmake/opencv4 -DSIPY_BUILD=OFF -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=../toolchains/aarch64-none-linux-gnu.toolchain.cmake -DCMAKE_INSTALL_PREFIX=install ..
make -j12
make install
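Since CMAKE_INSTALL_PREFIX was given as a relative path, the installed tree normally lands under build/install; listing it should show the sample binaries and the config directory from the tree below:
ls install/bin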
- Copy the resulting bin directory to the development board (see the scp sketch after the listing below)
bin
├── config
│ ├── custom_model.json
│ ├── dinov2.json
│ ├── dinov2_depth.json
│ ├── glpdepth.json
│ ├── ppyoloe.json
│ ├── scrfd.json
│ ├── scrfd_recognition.json
│ ├── yolo_nas.json
│ ├── yolov5_seg.json
│ ├── yolov5s.json
│ ├── yolov5s_face.json
│ ├── yolov5s_face_recognition.json
│ ├── yolov6.json
│ ├── yolov7.json
│ ├── yolov7_face.json
│ ├── yolov8.json
│ ├── yolov8_pose.json
│ └── yolox.json
├── sample_demux_ivps_npu_hdmi_vo
├── sample_demux_ivps_npu_rtsp
├── sample_demux_ivps_npu_rtsp_hdmi_vo
├── sample_multi_demux_ivps_npu_hdmi_vo
├── sample_multi_demux_ivps_npu_multi_rtsp
├── sample_multi_demux_ivps_npu_multi_rtsp_hdmi_vo
├── sample_vin_ivps_npu_hdmi_vo
└── sample_vin_ivps_npu_venc_rtsp
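One way to get the install directory onto the board is scp; <board-ip> is a placeholder for your board's address, and /root/Desktop is chosen only to match the MODEL_PATH used in the config below:
scp -r install root@<board-ip>:/root/Desktop/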
- Running on the board
Edit the yolov5s.json config file. Mine looks like this:
{
    "MODEL_TYPE": "MT_DET_YOLOV5",
    "MODEL_PATH": "/root/Desktop/install/bin/config/models/yolov5s_hat.axmodel",
    "TRACK_ENABLE": true,
    "STRIDES": [8, 16, 32],
    "ANCHORS": [
        10.0, 13.0, 16.0, 30.0, 33.0, 23.0,
        30.0, 61.0, 62.0, 45.0, 59.0, 119.0,
        116.0, 90.0, 156.0, 198.0, 373.0, 326.0
    ],
    "CLASS_NAMES": [
        "hat",
        "person"
    ],
    "CLASS_NUM": 2,
    "NMS_THRESHOLD": 0.45,
    "PROB_THRESHOLD": 0.4
}
You need to change CLASS_NAMES and CLASS_NUM to match your model, and update MODEL_PATH to point at your .axmodel; the ANCHORS can be left unchanged if you trained from the official pretrained YOLOv5 weights (.pt).
To display the inference results over HDMI, first kill the fb_vo process:
ps aux | grep fb_vo
Find its PID in the output and kill it with kill -9 <pid>.
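If pidof is available on the board image, the two steps can be combined into a single line (fb_vo is assumed to be the exact process name, as above):
kill -9 $(pidof fb_vo)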
Plug the HDMI cable into HDMI0 (the port farther from the Ethernet jack).
To restore the original display output, run /root/runVoHook.sh.
This example reads an MP4 video file, runs inference on the NPU, and displays the result over HDMI:
./sample_demux_ivps_npu_hdmi_vo -p config/yolov5s.json -f test.mp4
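The other config files from the listing can be swapped in with the same binary; for example, to run the YOLOv8 detector instead (the MODEL_PATH inside yolov8.json must still point at a valid .axmodel on the board):
./sample_demux_ivps_npu_hdmi_vo -p config/yolov8.json -f test.mp4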
Demo video of the board output over HDMI: https://www.bilibili.com/video/BV1M64y1p7dU