
40. Notes on object detection on the ROC-RK3588S-PC dev board and decoding a Hikvision camera with MPP



Basic idea: I got an RK3588 development board and want to use it for object detection and TCP communication.

 

[screenshots]

1. Flash the firmware, following the official site or the blog post below

System image used: ROC-RK3588S-PC_Ubuntu20.04-Gnome-r21199_v1.0.1b_220812.7z. See this blog post for the flashing steps.

2. On Windows 11, scan for the board's IP address

C:\Users\Administrator>for /L %i IN (1,1,254) DO ping -w 2 -n 1 192.168.10.%i

Then check the ARP table:

C:\Users\Administrator>arp -a

Interface: 192.168.10.151 --- 0x9
Internet Address Physical Address Type
192.168.10.1 01-00-5e-00-00-02 dynamic
192.168.10.53 01-00-5e-00-00-02 dynamic
192.168.10.130 01-00-5e-00-00-02 dynamic
192.168.10.191 01-00-5e-00-00-02 dynamic
192.168.10.228 01-00-5e-00-00-02 dynamic
192.168.10.255 01-00-5e-00-00-02 static
224.0.0.2 01-00-5e-00-00-02 static
224.0.0.22 01-00-5e-00-00-16 static
224.0.0.251 01-00-5e-00-00-fb static
224.0.0.252 01-00-5e-00-00-fc static
239.255.255.250 01-00-5e-7f-ff-fa static
255.255.255.255 ff-ff-ff-ff-ff-ff static

Interface: 192.168.159.1 --- 0xd
Internet Address Physical Address Type
192.168.159.254 00-50-56-fb-7b-d6 dynamic
192.168.159.255 ff-ff-ff-ff-ff-ff static
224.0.0.2 01-00-5e-00-00-02 static
224.0.0.22 01-00-5e-00-00-16 static
224.0.0.251 01-00-5e-00-00-fb static
224.0.0.252 01-00-5e-00-00-fc static
239.255.255.250 01-00-5e-7f-ff-fa static
255.255.255.255 ff-ff-ff-ff-ff-ff static

Interface: 192.168.187.1 --- 0x15
Internet Address Physical Address Type
192.168.187.254 00-50-56-f5-81-5a dynamic
192.168.187.255 ff-ff-ff-ff-ff-ff static
224.0.0.2 01-00-5e-00-00-02 static
224.0.0.22 01-00-5e-00-00-16 static
224.0.0.251 01-00-5e-00-00-fb static
224.0.0.252 01-00-5e-00-00-fc static
239.255.255.250 01-00-5e-7f-ff-fa static
255.255.255.255 ff-ff-ff-ff-ff-ff static

Then connect from WSL:

ubuntu@sxj731533730:~$ ssh [email protected]
The authenticity of host '192.168.10.53 (192.168.10.53)' can't be established.
ECDSA key fingerprint is SHA256:YzzWxhSX9onzRt6P3BlribpyQ44+Bs0ik8jPLx15MOU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.10.53' (ECDSA) to the list of known hosts.
[email protected]'s password:
_____ _ __ _
| ___(_)_ __ ___ / _| |_ _
| |_ | | '__/ _ \ |_| | | | |
| _| | | | | __/ _| | |_| |
|_| |_|_| \___|_| |_|\__, |
|___/
Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.10.66 aarch64)

* Documentation: http://wiki.t-firefly.com
* Management: http://www.t-firefly.com

System information as of Sat Sep 24 13:50:53 UTC 2022

System load: 0.64 0.48 0.21 Up time: 3 min Local users: 2
Memory usage: 17 % of 3710MB IP: 192.168.10.53
Usage of /: 1% of 23G


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Check the NPU information:

firefly@firefly:~$ dpkg -l | grep npu
ii firefly-rk3588npu-driver 1.3.0a arm64 <rk3588 npu package>
ii gir1.2-ibus-1.0:arm64 1.5.22-2ubuntu2.1 arm64 Intelligent Input Bus - introspection data
ii im-config 0.44-1ubuntu1.3 all Input method configuration framework
ii inputattach 1:1.7.0-1 arm64 utility to connect serial-attached peripherals to the input subsystem
ii libavdevice58:arm64 7:4.2.4-1ubuntu1.0firefly5 arm64 FFmpeg library for handling input and output devices - runtime files
ii libibus-1.0-5:arm64 1.5.22-2ubuntu2.1 arm64 Intelligent Input Bus - shared library
ii libinput-bin 1.15.5-1ubuntu0.3 arm64 input device management and event handling library - udev quirks
ii libinput10:arm64 1.15.5-1ubuntu0.3 arm64 input device management and event handling library - shared library
ii libxcb-xinput0:arm64 1.14-2 arm64 X C Binding, xinput extension
ii libxi6:arm64 2:1.7.10-0ubuntu1 arm64 X11 Input extension library

3. Set up the environment and test calling the NPU from Python and from C++

firefly@firefly:~$ sudo apt-get update
firefly@firefly:~$ sudo apt-get install libopencv-dev python3-pip
firefly@firefly:~$ sudo apt-get install ffmpeg gcc g++ git cmake make
firefly@firefly:~$ sudo apt-get install libxslt1-dev zlib1g zlib1g-dev libglib2.0-0 libsm6 libgl1-mesa-glx libprotobuf-dev gcc

1) Download rknn-toolkit2 and set up the Python rknnlite environment. First, configure the Aliyun conda mirrors:

conda config --add channels https://mirrors.aliyun.com/anaconda/pkgs/free
conda config --add channels https://mirrors.aliyun.com/anaconda/pkgs/main
conda config --add channels https://mirrors.aliyun.com/anaconda/pkgs/msys2
conda config --add channels https://mirrors.aliyun.com/anaconda/pkgs/r

conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/Paddle
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/auto
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/biobakery
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/bioconda
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/c4aarch64
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/caffe2
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/conda-forge
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/deepmodeling
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/dglteam
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/fastai
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/fermi
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/idaholab
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/intel
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/matsci
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/menpo
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/mordred-descriptor
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/msys2
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/numba
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/ohmeta
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/omnia
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/plotly
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/psi4
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/pytorch
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/pytorch-test
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/pytorch3d
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/pyviz
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/qiime2
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/rapidsai
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/rdkit
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/simpleitk
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/stackless
conda config --add channels https://mirrors.aliyun.com/anaconda/cloud/ursky

conda config --set show_channel_urls yes

Set up the environment:

firefly@firefly:~$ git clone https://github.com/rockchip-linux/rknn-toolkit2.git
firefly@firefly:~$ wget https://github.com/Archiconda/build-tools/releases/download/0.2.2/Archiconda3-0.2.2-Linux-aarch64.sh
firefly@firefly:~$ sh Archiconda3-0.2.2-Linux-aarch64.sh
firefly@firefly:~$ python3
Python 3.8.10 (default, Jun 22 2022, 20:18:18)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> exit()
firefly@firefly:~$ source ~/.bashrc
firefly@firefly:~$ python3
Python 3.7.1 | packaged by conda-forge | (default, Jan 7 2019, 00:11:41)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
firefly@firefly:~$ conda create -n rknnpy37 python=3.7
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate rknnpy37
#
# To deactivate an active environment, use
#
# $ conda deactivate
firefly@firefly:~$ conda activate rknnpy37
(rknnpy37) firefly@firefly:~$
(rknnpy37) firefly@firefly:~/rknn-toolkit2/rknn_toolkit_lite2/packages$ pip3 install rknn_toolkit_lite2-1.4.0-cp37-cp37m-linux_aarch64.whl
(rknnpy37) firefly@firefly:~/rknn-toolkit2/rknn_toolkit_lite2/packages$ python3
Python 3.7.2 (default, Jan 11 2019, 18:52:21)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from rknnlite.api import RKNNLite
>>>

4. Testing the RK3588 NPU (it really is fast)

(rknnpy37) firefly@firefly:~/rknn-toolkit2/rknn_toolkit_lite2/examples/inference_with_lite$ python3 test.py
--> Load RKNN model
done
--> Init runtime environment
I RKNN: [03:36:50.104] RKNN Runtime Information: librknnrt version: 1.3.0 (c193be371@2022-05-04T20:16:33)
I RKNN: [03:36:50.104] RKNN Driver Information: version: 0.7.2
I RKNN: [03:36:50.106] RKNN Model Information: version: 1, toolkit version: 1.4.0-c15f5e0b(compiler version: 1.4.0 (c73777b51@2022-09-05T12:06:01)), target: RKNPU v2, target platform: rk3588, framework name: PyTorch, framework layout: NCHW
W RKNN: [03:36:50.106] RKNN Model version: 1.4.0 not match with rknn runtime version: 1.3.0
done
--> Running model
resnet18
-----TOP 5-----
[812]: 0.9996696710586548
[404]: 0.0002492684288881719
[657]: 1.632158637221437e-05
[833]: 1.0159346857108176e-05
[466 895]: 9.02384545042878e-06

done

Set up the rknn-toolkit environment on the PC following step 4 of the previous post.

Results on the desktop PC:

[screenshot]

2) Convert the model. Following step 7 of method 1, rknn.api and rknnlite.api need to be reinstalled from the rknn-toolkit2 packages (see post 35 on building the Rockchip NPU simulation environment under Ubuntu 20.04 and testing yolov5 + NPU detection on the rv1126 Debian system).

(rknnpy36) ubuntu@ubuntu:~/rknn-toolkit2/packages$ pip3 install rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl -i https://pypi.tuna.tsinghua.edu.cn/simple

Documentation: /home/ubuntu/rknn-toolkit2/doc/RKNNToolKit2_API_Difference_With_Toolkit1-1.4.0.md

Conversion script:

from rknn.api import RKNN

ONNX_MODEL = './yolov5s_v5_0.onnx'
RKNN_MODEL = './yolov5s_v5_0_rk3588.rknn'

if __name__ == '__main__':

    # Create RKNN object
    rknn = RKNN(verbose=True)

    # pre-process config
    print('--> config model')
    rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]],
                target_platform='rk3588',
                quantized_dtype='asymmetric_quantized-8', optimization_level=3)
    print('done')

    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL)
    if ret != 0:
        print('Load model failed!')
        exit(ret)
    print('done')

    # Build model
    print('--> Building model')
    ret = rknn.build(do_quantization=True, dataset='train.txt')  # ,pre_compile=True
    if ret != 0:
        print('Build yolov5s failed!')
        exit(ret)
    print('done')

    # Export rknn model
    print('--> Export RKNN model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export yolov5s_1109.rknn failed!')
        exit(ret)
    print('done')

    rknn.release()

Conversion and quantization log:

(rknnpy36) ubuntu@ubuntu:~/rknn-toolkit2/examples/onnx/yolov5$ python3 onnx2rknn.py 
W __init__: rknn-toolkit2 version: 1.4.0-22dcfef4
--> config model
done
--> Loading model
W load_onnx: It is recommended onnx opset 12, but your onnx model opset is 11!
W load_onnx: Model converted from pytorch, 'opset_version' should be set 12 in torch.onnx.export for successful convert!
More details can be found in examples/pytorch/resnet18_export_onnx
done
--> Building model
I base_optimize ...
I base_optimize done.
I
I fold_constant ...
I fold_constant done.
....
-----------------+---------------------------------
D RKNN: [13:24:46.668] ----------------------------------------
D RKNN: [13:24:46.668] Total Weight Memory Size: 7355008
D RKNN: [13:24:46.668] Total Internal Memory Size: 20889600
D RKNN: [13:24:46.668] Predict Internal Memory RW Amount: 261873456
D RKNN: [13:24:46.668] Predict Weight Memory RW Amount: 7354168
D RKNN: [13:24:46.668] ----------------------------------------
D RKNN: [13:24:46.668] <<<<<<<< end: N4rknn21RKNNMemStatisticsPassE
I rknn buiding done
done
--> Export RKNN model
done

3) Copy the model to the board
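Copying the converted .rknn over ssh is enough (user, IP, and directory match the sessions above; swap in your own board's address):

ubuntu@sxj731533730:~$ scp yolov5s_v5_0_rk3588.rknn [email protected]:/home/firefly/sxj731533730/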

Test model download:

Link: https://pan.baidu.com/s/1CXhQAfK2Un_4zXdVKWjbhA?pwd=c263
Extraction code: c263

Test script:

import os
import urllib
import traceback
import time
import sys
import numpy as np
import cv2
from rknnlite.api import RKNNLite as RKNN

RKNN_MODEL = 'yolov5s_v5_0_rk3588.rknn'
IMG_PATH = 'bus.jpg'

QUANTIZE_ON = True

BOX_THRESH = 0.5
NMS_THRESH = 0.6
IMG_SIZE = 640


CLASSES = ("person", "bicycle", "car", "motorbike ", "aeroplane ", "bus ", "train", "truck ", "boat", "traffic light",
           "fire hydrant", "stop sign ", "parking meter", "bench", "bird", "cat", "dog ", "horse ", "sheep", "cow", "elephant",
           "bear", "zebra ", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite",
           "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife ",
           "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza ", "donut", "cake", "chair", "sofa",
           "pottedplant", "bed", "diningtable", "toilet ", "tvmonitor", "laptop ", "mouse ", "remote ", "keyboard ", "cell phone", "microwave ",
           "oven ", "toaster", "sink", "refrigerator ", "book", "clock", "vase", "scissors ", "teddy bear ", "hair drier", "toothbrush ")


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def xywh2xyxy(x):
    # Convert [x, y, w, h] to [x1, y1, x2, y2]
    y = np.copy(x)
    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
    return y


def resize_postprocess(x, offset_x, offset_y):
    # Map [x1, y1, x2, y2] back to the original image scale
    y = np.copy(x)
    y[:, 0] = x[:, 0] / offset_x  # top left x
    y[:, 1] = x[:, 1] / offset_y  # top left y
    y[:, 2] = x[:, 2] / offset_x  # bottom right x
    y[:, 3] = x[:, 3] / offset_y  # bottom right y
    return y


def process(input, mask, anchors):
    anchors = [anchors[i] for i in mask]
    grid_h, grid_w = map(int, input.shape[0:2])

    box_confidence = sigmoid(input[..., 4])
    box_confidence = np.expand_dims(box_confidence, axis=-1)

    box_class_probs = sigmoid(input[..., 5:])

    box_xy = sigmoid(input[..., :2]) * 2 - 0.5

    col = np.tile(np.arange(0, grid_w), grid_w).reshape(-1, grid_w)
    row = np.tile(np.arange(0, grid_h).reshape(-1, 1), grid_h)
    col = col.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    row = row.reshape(grid_h, grid_w, 1, 1).repeat(3, axis=-2)
    grid = np.concatenate((col, row), axis=-1)
    box_xy += grid
    box_xy *= int(IMG_SIZE / grid_h)

    box_wh = pow(sigmoid(input[..., 2:4]) * 2, 2)
    box_wh = box_wh * anchors

    box = np.concatenate((box_xy, box_wh), axis=-1)

    return box, box_confidence, box_class_probs


def filter_boxes(boxes, box_confidences, box_class_probs):
    """Filter boxes with box threshold. It's a bit different with origin yolov5 post process!
    # Arguments
        boxes: ndarray, boxes of objects.
        box_confidences: ndarray, confidences of objects.
        box_class_probs: ndarray, class_probs of objects.
    # Returns
        boxes: ndarray, filtered boxes.
        classes: ndarray, classes for boxes.
        scores: ndarray, scores for boxes.
    """
    box_classes = np.argmax(box_class_probs, axis=-1)
    box_class_scores = np.max(box_class_probs, axis=-1)
    pos = np.where(box_confidences[..., 0] >= BOX_THRESH)

    boxes = boxes[pos]
    classes = box_classes[pos]
    scores = box_class_scores[pos]

    return boxes, classes, scores


def nms_boxes(boxes, scores):
    """Suppress non-maximal boxes.
    # Arguments
        boxes: ndarray, boxes of objects.
        scores: ndarray, scores of objects.
    # Returns
        keep: ndarray, index of effective boxes.
    """
    x = boxes[:, 0]
    y = boxes[:, 1]
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]

    areas = w * h
    order = scores.argsort()[::-1]

    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)

        xx1 = np.maximum(x[i], x[order[1:]])
        yy1 = np.maximum(y[i], y[order[1:]])
        xx2 = np.minimum(x[i] + w[i], x[order[1:]] + w[order[1:]])
        yy2 = np.minimum(y[i] + h[i], y[order[1:]] + h[order[1:]])

        w1 = np.maximum(0.0, xx2 - xx1 + 0.00001)
        h1 = np.maximum(0.0, yy2 - yy1 + 0.00001)
        inter = w1 * h1

        ovr = inter / (areas[i] + areas[order[1:]] - inter)
        inds = np.where(ovr <= NMS_THRESH)[0]
        order = order[inds + 1]
    keep = np.array(keep)
    return keep


def yolov5_post_process(input_data):
    masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
    anchors = [[10, 13], [16, 30], [33, 23], [30, 61], [62, 45],
               [59, 119], [116, 90], [156, 198], [373, 326]]

    boxes, classes, scores = [], [], []
    for input, mask in zip(input_data, masks):
        b, c, s = process(input, mask, anchors)
        b, c, s = filter_boxes(b, c, s)
        boxes.append(b)
        classes.append(c)
        scores.append(s)

    boxes = np.concatenate(boxes)
    boxes = xywh2xyxy(boxes)
    classes = np.concatenate(classes)
    scores = np.concatenate(scores)

    nboxes, nclasses, nscores = [], [], []
    for c in set(classes):
        inds = np.where(classes == c)
        b = boxes[inds]
        c = classes[inds]
        s = scores[inds]
        keep = nms_boxes(b, s)
        nboxes.append(b[keep])
        nclasses.append(c[keep])
        nscores.append(s[keep])

    if not nclasses and not nscores:
        return None, None, None

    boxes = np.concatenate(nboxes)
    classes = np.concatenate(nclasses)
    scores = np.concatenate(nscores)

    return boxes, classes, scores


def draw(image, boxes, scores, classes):
    """Draw the boxes on the image.
    # Argument:
        image: original image.
        boxes: ndarray, boxes of objects.
        classes: ndarray, classes of objects.
        scores: ndarray, scores of objects.
    """
    for box, score, cl in zip(boxes, scores, classes):
        top, left, right, bottom = box
        print('class: {}, score: {}'.format(CLASSES[cl], score))
        print('box coordinate left,top,right,down: [{}, {}, {}, {}]'.format(top, left, right, bottom))
        top = int(top)
        left = int(left)
        right = int(right)
        bottom = int(bottom)

        cv2.rectangle(image, (top, left), (right, bottom), (255, 0, 0), 2)
        cv2.putText(image, '{0} {1:.2f}'.format(CLASSES[cl], score),
                    (top, left - 6),
                    cv2.FONT_HERSHEY_SIMPLEX,
                    0.6, (0, 0, 255), 2)


def letterbox(im, new_shape=(640, 640), color=(0, 0, 0)):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (dw, dh)


def letter_box_postprocess(x, scalingfactor, xy_correction):
    y = np.copy(x)
    y[:, 0] = (x[:, 0] - xy_correction[0]) / scalingfactor  # top left x
    y[:, 1] = (x[:, 1] - xy_correction[1]) / scalingfactor  # top left y
    y[:, 2] = (x[:, 2] - xy_correction[0]) / scalingfactor  # bottom right x
    y[:, 3] = (x[:, 3] - xy_correction[1]) / scalingfactor  # bottom right y
    return y


def get_file(filepath):
    templist = []
    with open(filepath, "r") as f:
        for item in f.readlines():
            templist.append(item.strip())
    return templist


if __name__ == '__main__':

    # Create RKNN object
    rknn = RKNN()
    image_process_mode = "letter_box"
    print("image_process_mode = ", image_process_mode)

    if not os.path.exists(RKNN_MODEL):
        print('model not exist')
        exit(-1)

    # Load RKNN model
    print('--> Loading model')
    ret = rknn.load_rknn(RKNN_MODEL)
    if ret != 0:
        print('Load rknn model failed!')
        exit(ret)
    print('done')

    # init runtime environment
    print('--> Init runtime environment')
    ret = rknn.init_runtime()
    # ret = rknn.init_runtime('rk180_8', device_id='1808')
    if ret != 0:
        print('Init runtime environment failed')
        exit(ret)
    print('done')

    image = cv2.imread(IMG_PATH)
    img_height = image.shape[0]
    img_width = image.shape[1]
    # img, ratio, (dw, dh) = letterbox(img, new_shape=(IMG_SIZE, IMG_SIZE))
    img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    if image_process_mode == "resize":
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    elif image_process_mode == "letter_box":
        img, scale_factor, correction = letterbox(img)

    # Inference
    print('--> Running model')
    outputs = rknn.inference(inputs=[img])

    # post process
    input0_data = outputs[0]
    input1_data = outputs[1]
    input2_data = outputs[2]

    input0_data = input0_data.reshape([3, -1] + list(input0_data.shape[-2:]))
    input1_data = input1_data.reshape([3, -1] + list(input1_data.shape[-2:]))
    input2_data = input2_data.reshape([3, -1] + list(input2_data.shape[-2:]))

    input_data = list()
    input_data.append(np.transpose(input0_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input1_data, (2, 3, 0, 1)))
    input_data.append(np.transpose(input2_data, (2, 3, 0, 1)))

    boxes, classes, scores = yolov5_post_process(input_data)
    if boxes is not None:
        # map the boxes back to the original image coordinates
        if image_process_mode == "resize":
            scale_h = IMG_SIZE / img_height
            scale_w = IMG_SIZE / img_width
            boxes = resize_postprocess(boxes, scale_w, scale_h)
        elif image_process_mode == "letter_box":
            boxes = letter_box_postprocess(boxes, scale_factor[0], correction)

    # img_1 = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
    if boxes is not None:
        draw(image, boxes, scores, classes)
    cv2.imwrite("image.jpg", image)
    rknn.release()

Test output:

(rknnpy37) firefly@firefly:~/sxj731533730$ python3 test.py
image_process_mode = letter_box
--> Loading model
done
--> Init runtime environment
I RKNN: [06:12:29.168] RKNN Runtime Information: librknnrt version: 1.3.0 (c193be371@2022-05-04T20:16:33)
I RKNN: [06:12:29.168] RKNN Driver Information: version: 0.7.2
I RKNN: [06:12:29.169] RKNN Model Information: version: 1, toolkit version: 1.4.0-22dcfef4(compiler version: 1.4.0 (3b4520e4f@2022-09-05T12:50:09)), target: RKNPU v2, target platform: rk3588, framework name: ONNX, framework layout: NCHW
W RKNN: [06:12:29.169] RKNN Model version: 1.4.0 not match with rknn runtime version: 1.3.0
done
--> Running model
class: person, score: 0.997715950012207
box coordinate left,top,right,down: [475.8802708387375, 256.1136655807495, 559.5198756456375, 518.8727235794067]
class: person, score: 0.9961398243904114
box coordinate left,top,right,down: [112.27060797810555, 231.6195125579834, 216.2691259086132, 530.3792667388916]
class: person, score: 0.9730960130691528
box coordinate left,top,right,down: [208.75255846977234, 252.7006424665451, 287.3006947040558, 504.38852989673615]
class: bus , score: 0.9917091727256775
box coordinate left,top,right,down: [86.03590875864029, 140.60074424743652, 560.1752118468285, 439.3604984283447]

Test image:

[screenshot]

5. YOLOv5 image detection on the RK3588 in C++, using CLion for remote development

[screenshot]

The RK3588 .so files live in /home/firefly/rknpu2/runtime/RK3588/Linux/librknn_api/aarch64. Oddly, https://github.com/radxa/rknpu2 ships the rk3588 .so, while the official https://github.com/airockchip/rknn_model_zoo does not. Note that the API differs slightly from the rv1126 one, so the code needs minor changes. If you are after speed, use the code provided by https://github.com/radxa/rknpu2; I tested it and adapted my own model with the focus and maxpool modifications.
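One concrete difference to watch for (this is my summary; verify the signatures against the rknn_api.h you actually link): rknpu2's rknn_init takes a fifth extension parameter that the rv1126-era runtime did not, so ported code must add the extra argument:

// rknpu (rv1126 / toolkit1):
// ret = rknn_init(&ctx, model, model_len, 0);
// rknpu2 (rk3588 / toolkit2), as in the demo below:
ret = rknn_init(&ctx, model, model_len, 0, NULL);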

CMakeLists.txt

cmake_minimum_required(VERSION 3.13)
project(3588_demo)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -lstdc++ ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -lstdc++")

include_directories(${CMAKE_SOURCE_DIR})
include_directories(${CMAKE_SOURCE_DIR}/include)

# find OpenCV before using its variables, then add its header paths
find_package(OpenCV REQUIRED)
message(STATUS ${OpenCV_INCLUDE_DIRS})
include_directories(${OpenCV_INCLUDE_DIRS})

# import the prebuilt RKNN runtime library
add_library(librknn_api SHARED IMPORTED)
set_target_properties(librknn_api PROPERTIES IMPORTED_LOCATION ${CMAKE_SOURCE_DIR}/lib/librknn_api.so)

add_executable(3588_demo main.cpp)
target_link_libraries(3588_demo ${OpenCV_LIBS} librknn_api)

Source:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <queue>
#include <vector>
#include <chrono>
#include "rknn_api.h"
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

#define OBJ_NAME_MAX_SIZE 16
#define OBJ_NUMB_MAX_SIZE 200
#define OBJ_CLASS_NUM 80
#define PROP_BOX_SIZE (5 + OBJ_CLASS_NUM)

using namespace std;

typedef struct _BOX_RECT {
    int left;
    int right;
    int top;
    int bottom;
} BOX_RECT;

typedef struct __detect_result_t {
    char name[OBJ_NAME_MAX_SIZE];
    int class_index;
    BOX_RECT box;
    float prop;
} detect_result_t;

typedef struct _detect_result_group_t {
    int id;
    int count;
    detect_result_t results[OBJ_NUMB_MAX_SIZE];
} detect_result_group_t;

// yolov5 anchors for strides 8 / 16 / 32
const int anchor0[6] = {10, 13, 16, 30, 33, 23};
const int anchor1[6] = {30, 61, 62, 45, 59, 119};
const int anchor2[6] = {116, 90, 156, 198, 373, 326};

void printRKNNTensor(rknn_tensor_attr *attr) {
    printf("index=%d name=%s n_dims=%d dims=[%d %d %d %d] n_elems=%d size=%d "
           "fmt=%d type=%d qnt_type=%d fl=%d zp=%d scale=%f\n",
           attr->index, attr->name, attr->n_dims, attr->dims[3], attr->dims[2],
           attr->dims[1], attr->dims[0], attr->n_elems, attr->size, 0, attr->type,
           attr->qnt_type, attr->fl, attr->zp, attr->scale);
}

float sigmoid(float x) {
    return 1.0 / (1.0 + expf(-x));
}

float unsigmoid(float y) {
    return -1.0 * logf((1.0 / y) - 1.0);
}

int process_fp(float *input, int *anchor, int grid_h, int grid_w, int height, int width, int stride,
               std::vector<float> &boxes, std::vector<float> &boxScores, std::vector<int> &classId,
               float threshold) {

    int validCount = 0;
    int grid_len = grid_h * grid_w;
    float thres_sigmoid = unsigmoid(threshold);
    for (int a = 0; a < 3; a++) {
        for (int i = 0; i < grid_h; i++) {
            for (int j = 0; j < grid_w; j++) {
                float box_confidence = input[(PROP_BOX_SIZE * a + 4) * grid_len + i * grid_w + j];
                if (box_confidence >= thres_sigmoid) {
                    int offset = (PROP_BOX_SIZE * a) * grid_len + i * grid_w + j;
                    float *in_ptr = input + offset;
                    float box_x = sigmoid(*in_ptr) * 2.0 - 0.5;
                    float box_y = sigmoid(in_ptr[grid_len]) * 2.0 - 0.5;
                    float box_w = sigmoid(in_ptr[2 * grid_len]) * 2.0;
                    float box_h = sigmoid(in_ptr[3 * grid_len]) * 2.0;
                    box_x = (box_x + j) * (float) stride;
                    box_y = (box_y + i) * (float) stride;
                    box_w = box_w * box_w * (float) anchor[a * 2];
                    box_h = box_h * box_h * (float) anchor[a * 2 + 1];
                    box_x -= (box_w / 2.0);
                    box_y -= (box_h / 2.0);
                    boxes.push_back(box_x);
                    boxes.push_back(box_y);
                    boxes.push_back(box_w);
                    boxes.push_back(box_h);

                    float maxClassProbs = in_ptr[5 * grid_len];
                    int maxClassId = 0;
                    for (int k = 1; k < OBJ_CLASS_NUM; ++k) {
                        float prob = in_ptr[(5 + k) * grid_len];
                        if (prob > maxClassProbs) {
                            maxClassId = k;
                            maxClassProbs = prob;
                        }
                    }
                    float box_conf_f32 = sigmoid(box_confidence);
                    float class_prob_f32 = sigmoid(maxClassProbs);
                    boxScores.push_back(box_conf_f32 * class_prob_f32);
                    classId.push_back(maxClassId);
                    validCount++;
                }
            }
        }
    }
    return validCount;
}

float CalculateOverlap(float xmin0, float ymin0, float xmax0, float ymax0, float xmin1, float ymin1, float xmax1,
                       float ymax1) {
    float w = fmax(0.f, fmin(xmax0, xmax1) - fmax(xmin0, xmin1) + 1.0);
    float h = fmax(0.f, fmin(ymax0, ymax1) - fmax(ymin0, ymin1) + 1.0);
    float i = w * h;
    float u = (xmax0 - xmin0 + 1.0) * (ymax0 - ymin0 + 1.0) + (xmax1 - xmin1 + 1.0) * (ymax1 - ymin1 + 1.0) - i;
    return u <= 0.f ? 0.f : (i / u);
}

int nms(int validCount, std::vector<float> &outputLocations, std::vector<int> &order, float threshold) {
    for (int i = 0; i < validCount; ++i) {
        if (order[i] == -1) {
            continue;
        }
        int n = order[i];
        for (int j = i + 1; j < validCount; ++j) {
            int m = order[j];
            if (m == -1) {
                continue;
            }
            float xmin0 = outputLocations[n * 4 + 0];
            float ymin0 = outputLocations[n * 4 + 1];
            float xmax0 = outputLocations[n * 4 + 0] + outputLocations[n * 4 + 2];
            float ymax0 = outputLocations[n * 4 + 1] + outputLocations[n * 4 + 3];

            float xmin1 = outputLocations[m * 4 + 0];
            float ymin1 = outputLocations[m * 4 + 1];
            float xmax1 = outputLocations[m * 4 + 0] + outputLocations[m * 4 + 2];
            float ymax1 = outputLocations[m * 4 + 1] + outputLocations[m * 4 + 3];

            float iou = CalculateOverlap(xmin0, ymin0, xmax0, ymax0, xmin1, ymin1, xmax1, ymax1);

            if (iou > threshold) {
                order[j] = -1;
            }
        }
    }
    return 0;
}

int quick_sort_indice_inverse(
        std::vector<float> &input,
        int left,
        int right,
        std::vector<int> &indices) {
    float key;
    int key_index;
    int low = left;
    int high = right;
    if (left < right) {
        key_index = indices[left];
        key = input[left];
        while (low < high) {
            while (low < high && input[high] <= key) {
                high--;
            }
            input[low] = input[high];
            indices[low] = indices[high];
            while (low < high && input[low] >= key) {
                low++;
            }
            input[high] = input[low];
            indices[high] = indices[low];
        }
        input[low] = key;
        indices[low] = key_index;
        quick_sort_indice_inverse(input, left, low - 1, indices);
        quick_sort_indice_inverse(input, low + 1, right, indices);
    }
    return low;
}

int clamp(float val, int min, int max) {
    return val > min ? (val < max ? val : max) : min;
}

int post_process_fp(float *input0, float *input1, float *input2, int model_in_h, int model_in_w,
                    int h_offset, int w_offset, float resize_scale, float conf_threshold, float nms_threshold,
                    detect_result_group_t *group, const char *labels[]) {
    memset(group, 0, sizeof(detect_result_group_t));
    std::vector<float> filterBoxes;
    std::vector<float> boxesScore;
    std::vector<int> classId;
    int stride0 = 8;
    int grid_h0 = model_in_h / stride0;
    int grid_w0 = model_in_w / stride0;
    int validCount0 = 0;
    validCount0 = process_fp(input0, (int *) anchor0, grid_h0, grid_w0, model_in_h, model_in_w,
                             stride0, filterBoxes, boxesScore, classId, conf_threshold);

    int stride1 = 16;
    int grid_h1 = model_in_h / stride1;
    int grid_w1 = model_in_w / stride1;
    int validCount1 = 0;
    validCount1 = process_fp(input1, (int *) anchor1, grid_h1, grid_w1, model_in_h, model_in_w,
                             stride1, filterBoxes, boxesScore, classId, conf_threshold);

    int stride2 = 32;
    int grid_h2 = model_in_h / stride2;
    int grid_w2 = model_in_w / stride2;
    int validCount2 = 0;
    validCount2 = process_fp(input2, (int *) anchor2, grid_h2, grid_w2, model_in_h, model_in_w,
                             stride2, filterBoxes, boxesScore, classId, conf_threshold);

    int validCount = validCount0 + validCount1 + validCount2;
    // no object detected
    if (validCount <= 0) {
        return 0;
    }

    std::vector<int> indexArray;
    for (int i = 0; i < validCount; ++i) {
        indexArray.push_back(i);
    }

    quick_sort_indice_inverse(boxesScore, 0, validCount - 1, indexArray);

    nms(validCount, filterBoxes, indexArray, nms_threshold);

    int last_count = 0;
    /* box valid detect target */
    for (int i = 0; i < validCount; ++i) {

        if (indexArray[i] == -1 || boxesScore[i] < conf_threshold || last_count >= OBJ_NUMB_MAX_SIZE) {
            continue;
        }
        int n = indexArray[i];

        float x1 = filterBoxes[n * 4 + 0];
        float y1 = filterBoxes[n * 4 + 1];
        float x2 = x1 + filterBoxes[n * 4 + 2];
        float y2 = y1 + filterBoxes[n * 4 + 3];
        int id = classId[n];

        group->results[last_count].box.left = (int) ((clamp(x1, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.top = (int) ((clamp(y1, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].box.right = (int) ((clamp(x2, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.bottom = (int) ((clamp(y2, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].prop = boxesScore[i];
        group->results[last_count].class_index = id;
        const char *label = labels[id];
        strncpy(group->results[last_count].name, label, OBJ_NAME_MAX_SIZE);

        // printf("result %2d: (%4d, %4d, %4d, %4d), %s\n", i, group->results[last_count].box.left, group->results[last_count].box.top,
        //        group->results[last_count].box.right, group->results[last_count].box.bottom, label);
        last_count++;
    }
    group->count = last_count;

    return 0;
}

float deqnt_affine_to_f32(uint8_t qnt, uint8_t zp, float scale) {
    return ((float) qnt - (float) zp) * scale;
}

int32_t __clip(float val, float min, float max) {
    float f = val <= min ? min : (val >= max ? max : val);
    return f;
}

uint8_t qnt_f32_to_affine(float f32, uint8_t zp, float scale) {
    float dst_val = (f32 / scale) + zp;
    uint8_t res = (uint8_t) __clip(dst_val, 0, 255);
    return res;
}

int process_u8(uint8_t *input, int *anchor, int grid_h, int grid_w, int height, int width, int stride,
               std::vector<float> &boxes, std::vector<float> &boxScores, std::vector<int> &classId,
               float threshold, uint8_t zp, float scale) {

    int validCount = 0;
    int grid_len = grid_h * grid_w;
    float thres = unsigmoid(threshold);
    uint8_t thres_u8 = qnt_f32_to_affine(thres, zp, scale);
    for (int a = 0; a < 3; a++) {
        for (int i = 0; i < grid_h; i++) {
            for (int j = 0; j < grid_w; j++) {
                uint8_t box_confidence = input[(PROP_BOX_SIZE * a + 4) * grid_len + i * grid_w + j];
                if (box_confidence >= thres_u8) {
                    int offset = (PROP_BOX_SIZE * a) * grid_len + i * grid_w + j;
                    uint8_t *in_ptr = input + offset;
                    float box_x = sigmoid(deqnt_affine_to_f32(*in_ptr, zp, scale)) * 2.0 - 0.5;
                    float box_y = sigmoid(deqnt_affine_to_f32(in_ptr[grid_len], zp, scale)) * 2.0 - 0.5;
                    float box_w = sigmoid(deqnt_affine_to_f32(in_ptr[2 * grid_len], zp, scale)) * 2.0;
                    float box_h = sigmoid(deqnt_affine_to_f32(in_ptr[3 * grid_len], zp, scale)) * 2.0;
                    box_x = (box_x + j) * (float) stride;
                    box_y = (box_y + i) * (float) stride;
                    box_w = box_w * box_w * (float) anchor[a * 2];
                    box_h = box_h * box_h * (float) anchor[a * 2 + 1];
                    box_x -= (box_w / 2.0);
                    box_y -= (box_h / 2.0);
                    boxes.push_back(box_x);
                    boxes.push_back(box_y);
                    boxes.push_back(box_w);
                    boxes.push_back(box_h);

                    uint8_t maxClassProbs = in_ptr[5 * grid_len];
                    int maxClassId = 0;
                    for (int k = 1; k < OBJ_CLASS_NUM; ++k) {
                        uint8_t prob = in_ptr[(5 + k) * grid_len];
                        if (prob > maxClassProbs) {
                            maxClassId = k;
                            maxClassProbs = prob;
                        }
                    }
                    float box_conf_f32 = sigmoid(deqnt_affine_to_f32(box_confidence, zp, scale));
                    float class_prob_f32 = sigmoid(deqnt_affine_to_f32(maxClassProbs, zp, scale));
                    boxScores.push_back(box_conf_f32 * class_prob_f32);
                    classId.push_back(maxClassId);
                    validCount++;
                }
            }
        }
    }
    return validCount;
}

int post_process_u8(uint8_t *input0, uint8_t *input1, uint8_t *input2, int model_in_h, int model_in_w,
                    int h_offset, int w_offset, float resize_scale, float conf_threshold, float nms_threshold,
                    std::vector<uint8_t> &qnt_zps, std::vector<float> &qnt_scales,
                    detect_result_group_t *group, const char *labels[]) {

    memset(group, 0, sizeof(detect_result_group_t));

    std::vector<float> filterBoxes;
    std::vector<float> boxesScore;
    std::vector<int> classId;
    int stride0 = 8;
    int grid_h0 = model_in_h / stride0;
    int grid_w0 = model_in_w / stride0;
    int validCount0 = 0;
    validCount0 = process_u8(input0, (int *) anchor0, grid_h0, grid_w0, model_in_h, model_in_w,
                             stride0, filterBoxes, boxesScore, classId, conf_threshold, qnt_zps[0], qnt_scales[0]);

    int stride1 = 16;
    int grid_h1 = model_in_h / stride1;
    int grid_w1 = model_in_w / stride1;
    int validCount1 = 0;
    validCount1 = process_u8(input1, (int *) anchor1, grid_h1, grid_w1, model_in_h, model_in_w,
                             stride1, filterBoxes, boxesScore, classId, conf_threshold, qnt_zps[1], qnt_scales[1]);

    int stride2 = 32;
    int grid_h2 = model_in_h / stride2;
    int grid_w2 = model_in_w / stride2;
    int validCount2 = 0;
    validCount2 = process_u8(input2, (int *) anchor2, grid_h2, grid_w2, model_in_h, model_in_w,
                             stride2, filterBoxes, boxesScore, classId, conf_threshold, qnt_zps[2], qnt_scales[2]);

    int validCount = validCount0 + validCount1 + validCount2;
    // no object detected
    if (validCount <= 0) {
        return 0;
    }

    std::vector<int> indexArray;
    for (int i = 0; i < validCount; ++i) {
        indexArray.push_back(i);
    }

    quick_sort_indice_inverse(boxesScore, 0, validCount - 1, indexArray);

    nms(validCount, filterBoxes, indexArray, nms_threshold);

    int last_count = 0;
    group->count = 0;
    /* box valid detect target */
    for (int i = 0; i < validCount; ++i) {

        if (indexArray[i] == -1 || boxesScore[i] < conf_threshold || last_count >= OBJ_NUMB_MAX_SIZE) {
            continue;
        }
        int n = indexArray[i];

        float x1 = filterBoxes[n * 4 + 0];
        float y1 = filterBoxes[n * 4 + 1];
        float x2 = x1 + filterBoxes[n * 4 + 2];
        float y2 = y1 + filterBoxes[n * 4 + 3];
        int id = classId[n];

        group->results[last_count].box.left = (int) ((clamp(x1, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.top = (int) ((clamp(y1, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].box.right = (int) ((clamp(x2, 0, model_in_w) - w_offset) / resize_scale);
        group->results[last_count].box.bottom = (int) ((clamp(y2, 0, model_in_h) - h_offset) / resize_scale);
        group->results[last_count].prop = boxesScore[i];
        group->results[last_count].class_index = id;
        const char *label = labels[id];
        strncpy(group->results[last_count].name, label, OBJ_NAME_MAX_SIZE);

        // printf("result %2d: (%4d, %4d, %4d, %4d), %s\n", i, group->results[last_count].box.left, group->results[last_count].box.top,
        //        group->results[last_count].box.right, group->results[last_count].box.bottom, label);
        last_count++;
    }
    group->count = last_count;

    return 0;
}
void letterbox(cv::Mat rgb, cv::Mat &img_resize, int target_width, int target_height) {

    float shape_0 = rgb.rows;
    float shape_1 = rgb.cols;
    float new_shape_0 = target_height;
    float new_shape_1 = target_width;
    float r = std::min(new_shape_0 / shape_0, new_shape_1 / shape_1);
    float new_unpad_0 = int(round(shape_1 * r));  // scaled width
    float new_unpad_1 = int(round(shape_0 * r));  // scaled height
    float dw = new_shape_1 - new_unpad_0;
    float dh = new_shape_0 - new_unpad_1;
    dw = dw / 2;
    dh = dh / 2;
    cv::Mat copy_rgb = rgb.clone();
    // resize only when the scaled size differs from the original size
    if (int(shape_1) != int(new_unpad_0) || int(shape_0) != int(new_unpad_1)) {
        cv::resize(copy_rgb, img_resize, cv::Size(new_unpad_0, new_unpad_1));
        copy_rgb = img_resize;
    }
    int top = int(round(dh - 0.1));
    int bottom = int(round(dh + 0.1));
    int left = int(round(dw - 0.1));
    int right = int(round(dw + 0.1));
    cv::copyMakeBorder(copy_rgb, img_resize, top, bottom, left, right, cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));
}
int main(int argc, char **argv) {
    const char *img_path = "../0.jpeg";
    const char *model_path = "../model/yolov5s_v5_0_rk3588.rknn";
    const char *post_process_type = "fp"; // "fp" or "u8"
    const int target_width = 640;
    const int target_height = 640;
    const char *image_process_mode = "letter_box";
    float resize_scale = 0;
    int h_pad = 0;
    int w_pad = 0;
    const float nms_threshold = 0.6;
    const float conf_threshold = 0.25;
    const char *labels[] = {"person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat",
                            "traffic light",
                            "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse",
                            "sheep", "cow",
                            "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie",
                            "suitcase", "frisbee",
                            "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove",
                            "skateboard", "surfboard",
                            "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl",
                            "banana", "apple",
                            "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake",
                            "chair", "couch",
                            "potted plant", "bed", "dining table", "toilet", "tv", "laptop", "mouse", "remote",
                            "keyboard", "cell phone",
                            "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase",
                            "scissors", "teddy bear",
                            "hair drier", "toothbrush"};
    // Load image
    cv::Mat bgr = cv::imread(img_path);
    if (!bgr.data) {
        printf("cv::imread %s fail!\n", img_path);
        return -1;
    }
    cv::Mat rgb;
    // BGR -> RGB
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);

    cv::Mat img_resize;
    float correction[2] = {0, 0};
    float scale_factor[] = {0, 0};
    int width = rgb.cols;
    int height = rgb.rows;
    // Letter box resize
    float img_wh_ratio = (float) width / (float) height;
    float input_wh_ratio = (float) target_width / (float) target_height;
    int resize_width;
    int resize_height;
    if (img_wh_ratio >= input_wh_ratio) {
        // pad height dim
        resize_scale = (float) target_width / (float) width;
        resize_width = target_width;
        resize_height = (int) ((float) height * resize_scale);
        w_pad = 0;
        h_pad = (target_height - resize_height) / 2;
    } else {
        // pad width dim
        resize_scale = (float) target_height / (float) height;
        resize_width = (int) ((float) width * resize_scale);
        resize_height = target_height;
        w_pad = (target_width - resize_width) / 2;
        h_pad = 0;
    }
    if (strcmp(image_process_mode, "letter_box") == 0) {
        letterbox(rgb, img_resize, target_width, target_height);
    } else {
        cv::resize(rgb, img_resize, cv::Size(target_width, target_height));
    }
    // Load model
    FILE *fp = fopen(model_path, "rb");
    if (fp == NULL) {
        printf("fopen %s fail!\n", model_path);
        return -1;
    }
    fseek(fp, 0, SEEK_END);
    int model_len = ftell(fp);
    void *model = malloc(model_len);
    fseek(fp, 0, SEEK_SET);
    if (model_len != fread(model, 1, model_len, fp)) {
        printf("fread %s fail!\n", model_path);
        free(model);
        return -1;
    }

    rknn_context ctx = 0;

    int ret = rknn_init(&ctx, model, model_len, 0, 0);
    if (ret < 0) {
        printf("rknn_init fail! ret=%d\n", ret);
        return -1;
    }

    /* Query sdk version */
    rknn_sdk_version version;
    ret = rknn_query(ctx, RKNN_QUERY_SDK_VERSION, &version,
                     sizeof(rknn_sdk_version));
    if (ret < 0) {
        printf("rknn_init error ret=%d\n", ret);
        return -1;
    }
    printf("sdk version: %s driver version: %s\n", version.api_version,
           version.drv_version);

    /* Get input,output attr */
    rknn_input_output_num io_num;
    ret = rknn_query(ctx, RKNN_QUERY_IN_OUT_NUM, &io_num, sizeof(io_num));
    if (ret < 0) {
        printf("rknn_init error ret=%d\n", ret);
        return -1;
    }
    printf("model input num: %d, output num: %d\n", io_num.n_input,
           io_num.n_output);

    rknn_tensor_attr input_attrs[io_num.n_input];
    memset(input_attrs, 0, sizeof(input_attrs));
    for (int i = 0; i < io_num.n_input; i++) {
        input_attrs[i].index = i;
        ret = rknn_query(ctx, RKNN_QUERY_INPUT_ATTR, &(input_attrs[i]),
                         sizeof(rknn_tensor_attr));
        if (ret < 0) {
            printf("rknn_init error ret=%d\n", ret);
            return -1;
        }
        printRKNNTensor(&(input_attrs[i]));
    }

    rknn_tensor_attr output_attrs[io_num.n_output];
    memset(output_attrs, 0, sizeof(output_attrs));
    for (int i = 0; i < io_num.n_output; i++) {
        output_attrs[i].index = i;
        ret = rknn_query(ctx, RKNN_QUERY_OUTPUT_ATTR, &(output_attrs[i]),
                         sizeof(rknn_tensor_attr));
        printRKNNTensor(&(output_attrs[i]));
    }

    int input_channel = 3;
    int input_width = 0;
    int input_height = 0;
    if (input_attrs[0].fmt == RKNN_TENSOR_NCHW) {
        printf("model is NCHW input fmt\n");
        input_width = input_attrs[0].dims[0];
        input_height = input_attrs[0].dims[1];
        printf("input_width=%d input_height=%d\n", input_width, input_height);
    } else {
        printf("model is NHWC input fmt\n");
        input_width = input_attrs[0].dims[1];
        input_height = input_attrs[0].dims[2];
        printf("input_width=%d input_height=%d\n", input_width, input_height);
    }

    printf("model input height=%d, width=%d, channel=%d\n", input_height, input_width,
           input_channel);

    /* Init input tensor */
    rknn_input inputs[1];
    memset(inputs, 0, sizeof(inputs));
    inputs[0].index = 0;
    inputs[0].buf = img_resize.data;
    inputs[0].type = RKNN_TENSOR_UINT8;
    inputs[0].size = input_width * input_height * input_channel;
    inputs[0].fmt = RKNN_TENSOR_NHWC;
    inputs[0].pass_through = 0;

    /* Init output tensor */
    rknn_output outputs[io_num.n_output];
    memset(outputs, 0, sizeof(outputs));
    for (int i = 0; i < io_num.n_output; i++) {
        if (strcmp(post_process_type, "fp") == 0) {
            outputs[i].want_float = 1;
        } else if (strcmp(post_process_type, "u8") == 0) {
            outputs[i].want_float = 0;
        }
    }
    printf("img.cols: %d, img.rows: %d\n", img_resize.cols, img_resize.rows);
    auto t1 = std::chrono::steady_clock::now();
    rknn_inputs_set(ctx, io_num.n_input, inputs);
    ret = rknn_run(ctx, NULL);
    if (ret < 0) {
        printf("ctx error ret=%d\n", ret);
        return -1;
    }
    ret = rknn_outputs_get(ctx, io_num.n_output, outputs, NULL);
    if (ret < 0) {
        printf("outputs error ret=%d\n", ret);
        return -1;
    }
    /* Post process */
    std::vector<float> out_scales;
    std::vector<uint8_t> out_zps;
    for (int i = 0; i < io_num.n_output; ++i) {
        out_scales.push_back(output_attrs[i].scale);
        out_zps.push_back(output_attrs[i].zp);
    }

    detect_result_group_t detect_result_group;
    if (strcmp(post_process_type, "u8") == 0) {
        post_process_u8((uint8_t *) outputs[0].buf, (uint8_t *) outputs[1].buf, (uint8_t *) outputs[2].buf,
                        input_height, input_width,
                        h_pad, w_pad, resize_scale, conf_threshold, nms_threshold, out_zps, out_scales,
                        &detect_result_group, labels);
    } else if (strcmp(post_process_type, "fp") == 0) {
        post_process_fp((float *) outputs[0].buf, (float *) outputs[1].buf, (float *) outputs[2].buf, input_height,
                        input_width,
                        h_pad, w_pad, resize_scale, conf_threshold, nms_threshold, &detect_result_group, labels);
    }
    // elapsed time in milliseconds
    auto t2 = std::chrono::steady_clock::now();
    double dr_ms = std::chrono::duration<double, std::milli>(t2 - t1).count();
    printf("%lf ms\n", dr_ms);

    for (int i = 0; i < detect_result_group.count; i++) {
        detect_result_t *det_result = &(detect_result_group.results[i]);
        printf("%s @ (%d %d %d %d) %f\n",
               det_result->name,
               det_result->box.left, det_result->box.top, det_result->box.right, det_result->box.bottom,
               det_result->prop);
        int bx1 = det_result->box.left;
        int by1 = det_result->box.top;
        int bx2 = det_result->box.right;
        int by2 = det_result->box.bottom;
        cv::rectangle(bgr, cv::Point(bx1, by1), cv::Point(bx2, by2), cv::Scalar(231, 232, 143)); // box from two corner points
        char text[256];
        sprintf(text, "%s %.1f%% ", det_result->name, det_result->prop * 100);

        int baseLine = 0;
        cv::Size label_size = cv::getTextSize(text, cv::FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);

        int x = bx1;
        int y = by1 - label_size.height - baseLine;
        if (y < 0)
            y = 0;
        if (x + label_size.width > bgr.cols)
            x = bgr.cols - label_size.width;

        cv::rectangle(bgr, cv::Rect(cv::Point(x, y), cv::Size(label_size.width, label_size.height + baseLine)),
                      cv::Scalar(0, 0, 255), -1);

        cv::putText(bgr, text, cv::Point(x, y + label_size.height),
                    cv::FONT_HERSHEY_DUPLEX, 0.4, cv::Scalar(255, 255, 255), 1, cv::LINE_AA);
    }
    // write the annotated image once, after all boxes are drawn
    cv::imwrite("bgr9.jpg", bgr);

    ret = rknn_outputs_release(ctx, io_num.n_output, outputs);

    if (ret < 0) {
        printf("rknn_query fail! ret=%d\n", ret);
        goto Error;
    }

Error:
    if (ctx > 0)
        rknn_destroy(ctx);
    if (model)
        free(model);
    if (fp)
        fclose(fp);
    return 0;
}

Test output:

/home/firefly/3588_demo/cmake-build-debug/3588_demo
sdk version: 1.3.0 (c193be371@2022-05-04T20:16:33) driver version: 0.7.2
model input num: 1, output num: 3
index=0 name=images n_dims=4 dims=[3 640 640 1] n_elems=1228800 size=1228800 fmt=0 type=2 qnt_type=2 fl=0 zp=-128 scale=0.003922
index=0 name=output n_dims=5 dims=[80 85 3 1] n_elems=1632000 size=1632000 fmt=0 type=2 qnt_type=2 fl=0 zp=65 scale=0.110716
index=1 name=415 n_dims=5 dims=[40 85 3 1] n_elems=408000 size=408000 fmt=0 type=2 qnt_type=2 fl=0 zp=51 scale=0.096500
index=2 name=434 n_dims=5 dims=[20 85 3 1] n_elems=102000 size=102000 fmt=0 type=2 qnt_type=2 fl=0 zp=46 scale=0.085433
model is NHWC input fmt
input_width=640 input_height=640
model input height=640, width=640, channel=3
img.cols: 640, img.rows: 640
98.037714 ms
dog @ (278 19 462 217) 0.667599
cat @ (172 153 611 401) 0.494277
chair @ (0 5 95 103) 0.266982

Process finished with exit code 0

Test image:

[screenshot]

6. Test whether the camera works, then integrate the code

[screenshot]
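Before integrating, a quick way to confirm the Hikvision RTSP stream is reachable is to play it directly with ffplay. The URL below uses the typical Hikvision main-stream path with placeholder credentials and IP; substitute your own:

firefly@firefly:~$ ffplay -rtsp_transport tcp rtsp://admin:[email protected]:554/h264/ch1/main/av_stream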

7. Testing MPP encode/decode on the RK3588

firefly@firefly:~$ git clone https://github.com/rockchip-linux/mpp
Cloning into 'mpp'...
remote: Enumerating objects: 29854, done.
remote: Counting objects: 100% (3602/3602), done.
remote: Compressing objects: 100% (1296/1296), done.
remote: Total 29854 (delta 2776), reused 2964 (delta 2306), pack-reused 26252
Receiving objects: 100% (29854/29854), 11.82 MiB | 13.17 MiB/s, done.
Resolving deltas: 100% (23739/23739), done.
firefly@firefly:~$ cd mpp/
firefly@firefly:~/mpp$ ls
build CMakeLists.txt debian doc inc LICENSE.md mpp osal pkgconfig readme.txt test tools utils
firefly@firefly:~/mpp$ mkdir build
mkdir: cannot create directory ‘build’: File exists
firefly@firefly:~/mpp$ cd build/
firefly@firefly:~/mpp/build$ cmake ..
firefly@firefly:~/mpp/build$ make
firefly@firefly:~/mpp/build$ sudo make install
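To link the just-installed library from your own CMake project, a sketch like the following should work (rockchip_mpp is the pkg-config name shipped in mpp/pkgconfig; confirm the name on your install, and my_decoder is a placeholder target):

find_package(PkgConfig REQUIRED)
pkg_check_modules(MPP REQUIRED rockchip_mpp)
include_directories(${MPP_INCLUDE_DIRS})
link_directories(${MPP_LIBRARY_DIRS})
target_link_libraries(my_decoder ${MPP_LIBRARIES})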

1) Open a new terminal to monitor the log output:

firefly@firefly:~$ watch -n 1 tail -f /var/log/syslog

Terminal 1 runs the command; terminal 2 shows the command's parameters:

[screenshots]

Test procedure: on the PC, convert an mp4 to a raw H.264 stream:

C:\Users\Administrator>ffmpeg -i 1920x1080.mp4 -codec copy -bsf:v h264_mp4toannexb -f h264 1920x1080.h264

[screenshot]

Then decode it on the board:

firefly@firefly:~/mpp/build/test$ sudo ./mpi_dec_test -i 1920x1080.h264 -t 7 -n 250 -o 1920x1080_yuv.yuv -w 1920 -h 1080 -f yuv420p

Play back the YUV on the PC:

C:\Users\Administrator> ffplay -f rawvideo -video_size 1920x1080 -pixel_format yuv420p 1920x1080_yuv.yuv

But the video has lost its color:

[screenshot]
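A guess worth trying (not verified here): the Rockchip decoder's native output is NV12, and if the -f yuv420p conversion did not actually take effect, the dump on disk is still NV12. NV12 played back as yuv420p produces exactly this kind of washed-out color. Telling ffplay the raw data is NV12 may restore it:

C:\Users\Administrator> ffplay -f rawvideo -video_size 1920x1080 -pixel_format nv12 1920x1080_yuv.yuv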

Still unresolved; I need to read the code, dig into it, and then integrate it into the project.

8. Decoding the Hikvision camera in code: decoding succeeds; the cleaned-up code is pending upload to GitHub

[screenshot]

Test:

[screenshot]

For the specific issues, see the reference manual: https://pan.baidu.com/s/1Wm7qq5mO5-px873AvhVRsg?pwd=j4hx (extraction code: j4hx)

https://github.com/sxj731533730/mpp_rtsp.git
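Until that repo is tidied up, here is a minimal sketch of the core MPP decode loop the project is built around (my condensed version, not the repo's code: the include path and the RTSP packet source are assumptions, and error handling plus the external frame-buffer setup are trimmed):

#include <stdio.h>
#include <rockchip/rk_mpi.h>  // header from the mpp install; the path may differ

// Feed one H.264 packet (Annex-B) to the decoder and drain any ready frames.
static void decode_packet(MppCtx ctx, MppApi *mpi, void *data, size_t size) {
    MppPacket packet = NULL;
    mpp_packet_init(&packet, data, size);
    mpi->decode_put_packet(ctx, packet);
    mpp_packet_deinit(&packet);

    MppFrame frame = NULL;
    while (mpi->decode_get_frame(ctx, &frame) == MPP_OK && frame) {
        if (mpp_frame_get_info_change(frame)) {
            // resolution/format is known now; let mpp allocate internal buffers
            mpi->control(ctx, MPP_DEC_SET_INFO_CHANGE_READY, NULL);
        } else {
            // the NV12 pixels live in mpp_frame_get_buffer(frame)
            printf("decoded frame %ux%u\n", mpp_frame_get_width(frame), mpp_frame_get_height(frame));
        }
        mpp_frame_deinit(&frame);
        frame = NULL;
    }
}

int main() {
    MppCtx ctx = NULL;
    MppApi *mpi = NULL;
    mpp_create(&ctx, &mpi);
    mpp_init(ctx, MPP_CTX_DEC, MPP_VIDEO_CodingAVC);  // H.264 decoder
    // ... pull H.264 packets from the RTSP stream (e.g. with FFmpeg's demuxer)
    // and call decode_packet(ctx, mpi, pkt_data, pkt_size) for each ...
    mpp_destroy(ctx);
    return 0;
}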

Screenshot of the test video:


