Basic idea: on the customer's dev board, set up DeepStream to run streaming detection and push the video to a phone for real-time display. If the board's Python environment turns out to be problematic, the model conversion can be done on a PC instead; the PC's driver and TensorRT versions don't matter much, because once the conversion is finished the result is simply ported over to the board, and DeepStream only uses C++ for inference. Later the plan is to add HTTP communication working together with a UI, so that the client side handles drawing and data mapping while the board does nothing but inference and acceleration.
First, set up either a VNC connection or a TTL serial console to the board.
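A minimal sketch of both options (the USB-serial device path is an assumption — check dmesg for the actual one; 115200 baud is the usual Jetson debug-console rate, and the VNC lines assume the stock L4T Ubuntu desktop with vino installed):
# TTL serial console from the host PC
sudo apt-get install minicom
sudo minicom -D /dev/ttyUSB0 -b 115200
# Or start the Vino VNC server inside the board's desktop session
export DISPLAY=:0
gsettings set org.gnome.Vino require-encryption false
gsettings set org.gnome.Vino prompt-enabled false
/usr/lib/vino/vino-server &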
1. State of the board's environment after flashing the system
li@li-desktop:~$ sudo apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
li@li-desktop:~$ sudo apt-get install libssl-dev
li@li-desktop:~$ sudo apt-get install libgstrtspserver-1.0-0 libjansson4
li@li-desktop:~$ uname -a
Linux li-desktop 4.9.253-tegra #1 SMP PREEMPT Mon Jul 26 12:19:28 PDT 2021 aarch64 aarch64 aarch64 GNU/Linux
li@li-desktop:~$ jetson_release -v
- NVIDIA Jetson Xavier NX (Developer Kit Version)
* Jetpack 4.6 [L4T 32.6.1]
* NV Power Mode: MODE_20W_6CORE - Type: 8
* jetson_stats.service: active
- Board info:
* Type: Xavier NX (Developer Kit Version)
* SOC Family: tegra194 - ID:25
* Module: P3668 - Board: P3509-000
* Code Name: jakku
* CUDA GPU architecture (ARCH_BIN): 7.2
* Serial Number: 1422521039095
- Libraries:
* CUDA: 10.2.300
* cuDNN: 8.2.1.32
* TensorRT: 8.0.1.6
* Visionworks: 1.6.0.501
* OpenCV: 4.1.1 compiled CUDA: NO
* VPI: ii libnvvpi1 1.1.15 arm64 NVIDIA Vision Programming Interface library
* Vulkan: 1.2.70
- jetson-stats:
* Version 3.1.1
* Works on Python 3.6.9
li@li-desktop:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
li@li-desktop:~$ python3
Python 3.6.9 (default, Dec 8 2021, 21:08:43)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
>>> import tensorrt
>>> exit()
li@li-desktop:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_28_22:34:44_PST_2021
Cuda compilation tools, release 10.2, V10.2.300
Build cuda_10.2_r440.TC440_70.29663091_0
li@li-desktop:~$ python3
Python 3.6.9 (default, Dec 8 2021, 21:08:43)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pycuda
>>>
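The same checks can also be run non-interactively, which is handy for recording the versions (assumes the three packages above import cleanly):
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"
python3 -c "import tensorrt; print(tensorrt.__version__)"
python3 -c "import pycuda; print(pycuda.VERSION_TEXT)"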
The board's overall performance parameters can be checked with jtop.
2. The basic libraries are all there, so install DeepStream next; the SDK package can be downloaded with Motrix (or with axel, as below)
li@li-desktop:~$ cd sxj731533730/
li@li-desktop:~/sxj731533730$ axel -n 100 https://developer.download.nvidia.com/assets/Deepstream/DeepStream_6.0.1/deepstream_sdk_v6.0.1_jetson.tbz2
li@li-desktop:~/sxj731533730$ sudo tar xpvf deepstream_sdk_v6.0.1_jetson.tbz2 -C /
li@li-desktop:~/sxj731533730$ cd /opt/nvidia/deepstream/deepstream-6.0/
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo ./install.sh
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo ldconfig
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo vim /etc/ld.so.conf
/opt/nvidia/deepstream/deepstream-6.0/lib
li@li-desktop:/opt/nvidia/deepstream/deepstream-6.0$ sudo ldconfig
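A quick sanity check that the DeepStream libraries now resolve from the path added above:
# should list libnvds_* entries under /opt/nvidia/deepstream/deepstream-6.0/lib
ldconfig -p | grep nvds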
Verify that DeepStream installed successfully:
li@li-desktop:~$ deepstream-app --version-all
deepstream-app version 6.0.1
DeepStreamSDK 6.0.1
CUDA Driver Version: 10.2
CUDA Runtime Version: 10.2
TensorRT Version: 8.0
cuDNN Version: 8.2
libNVWarp360 Version: 2.0.1d3
3. Test DeepStream stream pulling and processing
li@li-desktop:~$ sudo vim /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
In [sink0], set enable to 0.
In [sink1], set enable to 1 (see the excerpt below).
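For reference, a minimal sketch of the two groups after the edit — only the enable keys are changed; every other key (type, codec, output path, ...) keeps its stock value, which may differ from what the comments assume:
[sink0]
# the stock on-screen sink, disabled
enable=0
[sink1]
# the stock encoding sink (file output), enabled
enable=1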
Run the DeepStream test:
li@li-desktop:~$ sudo deepstream-app -c /opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
Opening in BLOCKING MODE
0:00:04.888175377 10973 0x38827120 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<secondary_gie_2> NvDsInferContext[UID 6]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 6]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.0/samples/configs/deepstream-app/../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input_1 3x224x224
1 OUTPUT kFLOAT predictions/Softmax 20x1x1
....
NvMMLiteOpen : Block : BlockType = 4
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4
H264: Profile = 66, Level = 0
NVMEDIA_ENC: bBlitMode is set to TRUE
**PERF: 66.34 (64.04) 66.34 (64.04) 66.34 (64.04) 66.34 (64.04)
**PERF: 58.75 (59.57) 58.75 (59.57) 58.75 (59.57) 58.75 (59.57)
**PERF: 59.46 (59.49) 59.46 (59.49) 59.46 (59.49) 59.46 (59.49)
**PERF: 63.21 (60.67) 63.21 (60.67) 63.21 (60.67) 63.21 (60.67)
**PERF: 65.88 (61.96) 65.88 (61.96) 65.88 (61.96) 65.88 (61.96)
** INFO: <bus_callback:217>: Received EOS. Exiting ...
Quitting
[NvMultiObjectTracker] De-initialized
App run successful
4. Test tensorrtx and convert the model (this step is not strictly necessary — if you only want to test DeepStream you can skip straight to the next step)
For the trained model, see the reference.
The version used is yolov5 tag 6.0.
li@li-desktop:~/sxj731533730$ git clone https://github.com/wang-xinyu/tensorrtx.git
Cloning into 'tensorrtx'...
remote: Enumerating objects: 1883, done.
remote: Counting objects: 100% (551/551), done.
remote: Compressing objects: 100% (87/87), done.
remote: Total 1883 (delta 504), reused 464 (delta 464), pack-reused 1332
Receiving objects: 100% (1883/1883), 1.63 MiB | 3.14 MiB/s, done.
Resolving deltas: 100% (1226/1226), done.
li@li-desktop:~/sxj731533730$ cd yolov5/
li@li-desktop:~/sxj731533730/yolov5$ cp ../tensorrtx/yolov5/gen_wts.py .
li@li-desktop:~/sxj731533730/yolov5$ python3 gen_wts.py -w run/exp3/weights/best.pt -o yolov5s.wts
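The .wts file is not yet a TensorRT engine; with tensorrtx the usual next step is to build its yolov5 sample on the board and serialize the engine there. A sketch under a few assumptions: the tensorrtx tag name matching yolov5 tag 6.0, the 's' model variant, and the class-count macro in yololayer.h (its exact name can differ between tags) all need to be checked against your own setup:
cd ~/sxj731533730/tensorrtx/yolov5
git checkout yolov5-v6.0            # assumed tag matching yolov5 tag 6.0; verify with git tag
# edit CLASS_NUM in yololayer.h to match the trained model before building
mkdir -p build && cd build
cp ~/sxj731533730/yolov5/yolov5s.wts .
cmake ..
make -j$(nproc)
# serialize the TensorRT engine ('s' selects the yolov5s network definition)
sudo ./yolov5 -s yolov5s.wts yolov5s.engine s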