
37. Deploying TensorFlow DeepLabv3+ on an OAK camera for instance segmentation



Basic idea: I have an OAK camera on hand and have long wanted to port the official DeepLabv3+ model to it, so this post records the training process and the model conversion: from the frozen .pb model to OpenVINO, and then deployment on the OAK camera. The tensorflow/models checkout is from 2022-09-11 or earlier.

Link: https://pan.baidu.com/s/118DdBuk6kNeUEfRUQfQ4DQ?pwd=wuje
Extraction code: wuje


Experiment results and dataset:

Link: https://pan.baidu.com/s/1nDgJ-1SP6hcSweKPH3chlw
Extraction code: mn5e

Step 1: Create a conda environment and set up the models repo

ubuntu@ubuntu:~$ conda create -n tf python=3.6
ubuntu@ubuntu:~$ conda activate tf
(tf) ubuntu@ubuntu:~$ git clone https://github.com/tensorflow/models.git

Install the matching TensorFlow version:

(tf) ubuntu@ubuntu:~$ pip install -i https://pypi.tuna.tsinghua.edu.cn/simple tensorflow-gpu==1.15.0 tensorflow==1.15.0
(tf) ubuntu@ubuntu:~$ python3
Python 3.6.13 |Anaconda, Inc.| (default, Jun 4 2021, 14:25:59)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.test.is_gpu_available()
True

(tf) ubuntu@ubuntu:~$ conda install cudatoolkit=10.0 -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/linux-64/
(tf) ubuntu@ubuntu:~$ conda install cudnn=7.6.5 -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/linux-64/

(tf) ubuntu@ubuntu:~$ conda update -n base -c defaults conda

However, the official training instructions target CPU; GPU training occasionally errors out for me, so I recommend training on the CPU.
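If you just want to force a particular run onto the CPU without touching the environment, hiding the GPUs before TensorFlow initializes is enough; a minimal sketch:

# Hide all GPUs so TensorFlow 1.15 falls back to the CPU.
# Must be set before TensorFlow creates a session.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf
print(tf.test.is_gpu_available())  # now prints False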

Step 2: Clone the labelme source, which we will use later to build the dataset

(tf) ubuntu@ubuntu:~$ git clone https://github.com/wkentaro/labelme.git

Step 3: Download the COCO dataset; I extracted only the single "person" class and filtered to 480×640 images

Here is my conversion script:

# -*- coding: utf-8 -*-
import glob
import os
import cv2
import json
import io

# COCO category names, indexed by (category_id - 1); "" marks unused ids.
coco = ["person", "bicycle", "car", "motorcycle", "airplane", "bus", "train", "truck", "boat", "traffic light",
        "fire hydrant", "", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow",
        "elephant", "bear", "zebra", "giraffe", "", "backpack", "umbrella", "", "", "handbag", "tie", "suitcase",
        "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard",
        "surfboard", "tennis racket", "bottle", "", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana",
        "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch",
        "potted plant", "bed", "", "dining table", "", "", "toilet", "", "tv", "laptop", "mouse", "remote", "keyboard",
        "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "", "clock", "vase", "scissors",
        "teddy bear", "hair drier", "toothbrush"]

label = dict()
for idx, item in enumerate(coco):
    label.update({idx: item})

labelme_path = r'F:\CP210x_USB_TO_UART\val2017\labelme'  # output folder
coco_json_path = r'F:\CP210x_USB_TO_UART\val2017'        # folder containing the COCO annotation json
jpg_path = r'F:\CP210x_USB_TO_UART\val2017\val2017'      # folder containing the images

coco_json = glob.glob(os.path.join(coco_json_path, "*.json"))[0]
file_json = io.open(coco_json, 'r', encoding='utf-8')
m_json_data = file_json.read()
m_data = json.loads(m_json_data)

for item in m_data['images']:
    flag = False
    m_images_file_name = item['file_name']
    (filename_path, m_filename) = os.path.split(m_images_file_name)
    (m_name, extension) = os.path.splitext(m_filename)
    m_image = cv2.imread(os.path.join(jpg_path, m_name + ".jpg"))
    m_images_height = item['height']
    m_images_width = item['width']
    m_images_id = item['id']
    # skeleton of a labelme-format annotation
    data = {}
    data['imagePath'] = m_filename
    data['flags'] = {}
    data['imageWidth'] = m_images_width
    data['imageHeight'] = m_images_height
    data['imageData'] = None
    data['version'] = "5.0.1"
    data["shapes"] = []
    for annit in m_data['annotations']:
        m_image_id = annit['image_id']
        m_category_id = annit['category_id']
        # keep only 'person' annotations on 640x480 images
        if m_image_id == m_images_id and label[m_category_id - 1] == 'person' \
                and m_images_width == 640 and m_images_height == 480:
            flag = True
            for segitem in annit['segmentation']:
                points = []
                for idx in range(0, len(segitem), 2):
                    x, y = segitem[idx], segitem[idx + 1]
                    if str(x).isalpha() or str(y).isalpha():  # skip non-polygon (RLE) segments
                        flag = False
                        break
                    points.append([x, y])
                itemData = {'points': []}
                if len(points) == 0:
                    flag = False
                    break
                itemData['points'].extend(points)
                itemData["flag"] = {}
                itemData["group_id"] = None
                itemData["shape_type"] = "polygon"
                itemData["label"] = label[m_category_id - 1]
                data["shapes"].append(itemData)
    if flag:
        jsonName = ".".join([m_name, "json"])
        jpgName = ".".join([m_name, "jpg"])
        print(labelme_path, jsonName)
        jsonPath = os.path.join(labelme_path, jsonName)
        jpgPath = os.path.join(labelme_path, jpgName)
        with open(jsonPath, "w") as f:
            json.dump(data, f)
        cv2.imwrite(jpgPath, m_image)

print("Done writing files...")

With small tweaks to the if condition above, you can convert the COCO dataset to labelme format while extracting whichever classes and exact image sizes you care about.
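For example, a hypothetical variant of the condition (the wanted set below is illustrative) that keeps several classes and drops the size restriction:

# Hypothetical tweak: keep several classes, accept any image size.
wanted = {"person", "dog", "cat"}
if m_image_id == m_images_id and label[m_category_id - 1] in wanted:
    flag = True
    # ... rest of the annotation loop unchanged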

ubuntu@ubuntu:~/Downloads/dataset$ tree -L 1
.
├── train
├── trainval
└── val

3 directories, 0 files
The images are 640 wide by 480 high.

train holds only the person instance-segmentation images and their labelme-format JSON files; the other folders are organized the same way.

Step 4: Generate the TensorFlow DeepLabv3+ dataset

1) Convert the dataset format with labelme2voc

(tf) ubuntu@ubuntu:~/labelme/examples/semantic_segmentation$ python3 labelme2voc.py /home/ubuntu/Downloads/dataset/train /home/ubuntu/Downloads/dataset/train_voc --labels labels.txt
(tf) ubuntu@ubuntu:~/labelme/examples/semantic_segmentation$ python3 labelme2voc.py /home/ubuntu/Downloads/dataset/trainval /home/ubuntu/Downloads/dataset/trainval_voc --labels labels.txt
(tf) ubuntu@ubuntu:~/labelme/examples/semantic_segmentation$ python3 labelme2voc.py /home/ubuntu/Downloads/dataset/val /home/ubuntu/Downloads/dataset/val_voc --labels labels.txt

where labels.txt contains:

__ignore__
_background_
person

Directory layout:

(tf1.15) ubuntu@ubuntu:~/Downloads$ tree -L 2
.
└── dataset
├── total
├── train
├── trainval
├── trainval_voc
├── train_voc
├── val
└── val_voc

8 directories, 0 files

Layout of train_voc:

(tf1.15) ubuntu@ubuntu:~/Downloads/dataset/train_voc$ tree -L 1
.
├── class_names.txt
├── JPEGImages
├── SegmentationClass
├── SegmentationClassPNG
├── SegmentationClassRaw
└── SegmentationClassVisualization

5 directories, 1 file

2) Generate the single-channel label masks

(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ python3 remove_gt_colormap.py --original_gt_folder=/home/ubuntu/Downloads/dataset/train_voc/SegmentationClassPNG --output_dir=/home/ubuntu/Downloads/dataset/train_voc/SegmentationClassRaw
(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ python3 remove_gt_colormap.py --original_gt_folder=/home/ubuntu/Downloads/dataset/val_voc/SegmentationClassPNG --output_dir=/home/ubuntu/Downloads/dataset/val_voc/SegmentationClassRaw
(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ python3 remove_gt_colormap.py --original_gt_folder=/home/ubuntu/Downloads/dataset/trainval_voc/SegmentationClassPNG --output_dir=/home/ubuntu/Downloads/dataset/trainval_voc/SegmentationClassRaw

All-black masks are normal: I only have one class, so foreground pixels are 1 and background is 0, which is nearly invisible. (Don't do this for real runs.) If you want to see the masks, add the following around line 51 of ~/models/research/deeplab/datasets/remove_gt_colormap.py:

old_raw_pic = np.array(Image.open(filename))
raw_pic = old_raw_pic * 50  # scale label values up so they become visible
return raw_pic

With this change the single-channel PNGs show visible black-and-white contours, but it is actually wrong: semantic segmentation labels each pixel by class index, so with one target class the foreground pixels must be 1 (background 0). Multiplying by 50 turns the target into class 50. So let the masks stay all black; not being able to see them doesn't affect training. This was only to verify what was going on.
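A safer way to check what a raw mask actually contains, without rewriting it, is to print the label values present; a minimal sketch (the file name is just one of the examples from the train list below):

# Inspect a raw mask without modifying it.
import numpy as np
from PIL import Image

mask_path = "/home/ubuntu/Downloads/dataset/train_voc/SegmentationClassRaw/000000002153.png"
mask = np.array(Image.open(mask_path))
print(np.unique(mask, return_counts=True))  # e.g. (array([0, 1]), ...): 1 is the person class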

3) Create three list folders: train_voc/trainlist, val_voc/vallist, and trainval_voc/trainvallist

Layout of train_voc afterwards:

(tf1.15) ubuntu@ubuntu:~/Downloads/dataset/train_voc$ tree -L 1
.
├── class_names.txt
├── JPEGImages
├── SegmentationClass
├── SegmentationClassPNG
├── SegmentationClassRaw
├── SegmentationClassVisualization
└── trainlist

6 directories, 1 file

Run the following inside each JPEGImages folder (train_voc, val_voc, trainval_voc respectively):

find . -name "*.jpg" > ../trainlist/train.txt
find . -name "*.jpg" > ../vallist/val.txt
find . -name "*.jpg" > ../trainvallist/trainval.txt
Then use your editor's find-and-replace to reduce each txt to bare file names, with no extension or directory prefix (or see the Python sketch after the example below).

train.txt ends up looking like this:

000000002153
000000015335
000000079588
000000140203
000000119641
000000087144
000000018837
000000118405
000000032887
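Instead of running find and hand-editing, a small Python sketch (assuming the directory layout above) can write the bare-stem lists directly:

# Write bare file-name lists (no extension, no path) for each split.
import glob
import os

splits = {
    "/home/ubuntu/Downloads/dataset/train_voc": ("trainlist", "train.txt"),
    "/home/ubuntu/Downloads/dataset/val_voc": ("vallist", "val.txt"),
    "/home/ubuntu/Downloads/dataset/trainval_voc": ("trainvallist", "trainval.txt"),
}
for voc_dir, (list_dir, list_name) in splits.items():
    os.makedirs(os.path.join(voc_dir, list_dir), exist_ok=True)
    stems = sorted(
        os.path.splitext(os.path.basename(p))[0]
        for p in glob.glob(os.path.join(voc_dir, "JPEGImages", "*.jpg"))
    )
    with open(os.path.join(voc_dir, list_dir, list_name), "w") as f:
        f.write("\n".join(stems) + "\n")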

4) Generate the tfrecord dataset. First create datasetData, log, and result directories under ~/models/research/deeplab/datasets.

(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ python3 build_voc2012_data.py --image_folder=/home/ubuntu/Downloads/dataset/train_voc/JPEGImages --semantic_segmentation_folder=/home/ubuntu/Downloads/dataset/train_voc/SegmentationClassRaw --list_folder=/home/ubuntu/Downloads/dataset/train_voc/trainlist --image_format="jpg"  --output_dir=/home/ubuntu/models/research/deeplab/datasets/datasetData
(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ python3 build_voc2012_data.py --image_folder=/home/ubuntu/Downloads/dataset/trainval_voc/JPEGImages --semantic_segmentation_folder=/home/ubuntu/Downloads/dataset/trainval_voc/SegmentationClassRaw --list_folder=/home/ubuntu/Downloads/dataset/trainval_voc/trainvallist --image_format="jpg" --output_dir=/home/ubuntu/models/research/deeplab/datasets/datasetData
(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ python3 build_voc2012_data.py --image_folder=/home/ubuntu/Downloads/dataset/val_voc/JPEGImages --semantic_segmentation_folder=/home/ubuntu/Downloads/dataset/val_voc/SegmentationClassRaw --list_folder=/home/ubuntu/Downloads/dataset/val_voc/vallist --image_format="jpg" --output_dir=/home/ubuntu/models/research/deeplab/datasets/datasetData

Step 5: Modify some of the code

1) Edit /home/ubuntu/models/research/deeplab/datasets/data_generator.py:

_MYDATA_INFORMATION = DatasetDescriptor(
    splits_to_sizes={
        'train': 271,     # number of training images
        'trainval': 136,  # number of trainval images
        'val': 70,        # number of validation images
    },
    num_classes=3,  # __ignore__ + _background_ + person = 3
    ignore_label=255,
)
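The split sizes should match the list files exactly; a quick sanity check (paths follow the layout from step 4):

# Count entries in each split list to fill in splits_to_sizes correctly.
lists = {
    "train": "/home/ubuntu/Downloads/dataset/train_voc/trainlist/train.txt",
    "trainval": "/home/ubuntu/Downloads/dataset/trainval_voc/trainvallist/trainval.txt",
    "val": "/home/ubuntu/Downloads/dataset/val_voc/vallist/val.txt",
}
for split, path in lists.items():
    with open(path) as f:
        print(split, sum(1 for line in f if line.strip()))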

At line 112:

_DATASETS_INFORMATION = {
    'cityscapes': _CITYSCAPES_INFORMATION,
    'pascal_voc_seg': _PASCAL_VOC_SEG_INFORMATION,
    'ade20k': _ADE20K_INFORMATION,
    'mydata': _MYDATA_INFORMATION,  # register your own dataset
}

2) Edit /home/ubuntu/models/research/deeplab/utils/train_utils.py:

# Variables that will not be restored.
# exclude_list = ['global_step']
exclude_list = ['global_step', 'logits']
if not initialize_last_layer:
    exclude_list.extend(last_layers)

3) Edit /home/ubuntu/models/research/deeplab/utils/get_dataset_colormap.py

At line 41:

_DATASET_NAME = 'mydata'  # add this; must match the name registered above

and add the number of colormap colors to the corresponding dict:

_DATASET_NAME: 3,  # number of colormap colors

At line 51:

def create_dataset_name_label_colormap():
    return np.asarray([
        [165, 42, 42],
        [0, 192, 0],
        [196, 196, 196],
    ])

At line 390:

elif dataset == _DATASET_NAME:  # add this branch
    return create_dataset_name_label_colormap()

4) Edit /home/ubuntu/models/research/deeplab/vis.py to add mydata:

flags.DEFINE_enum('colormap_type', 'pascal', ['mydata', 'pascal', 'cityscapes', 'ade20k'],
                  'Visualization colormap type.')

5) In /home/ubuntu/models/research/deeplab/utils/train_utils.py, add below line 153:

ignore_weight = 0
label0_weight = 1   # background: grayscale value 0 in the mask
label1_weight = 10  # person: grayscale value 1 in the mask
not_ignore_mask = tf.to_float(tf.equal(scaled_labels, 0)) * label0_weight + \
                  tf.to_float(tf.equal(scaled_labels, 1)) * label1_weight + \
                  tf.to_float(tf.equal(scaled_labels, ignore_label)) * ignore_weight
tf.losses.softmax_cross_entropy(train_labels, tf.reshape(logits, shape=[-1, num_classes]),
                                weights=not_ignore_mask, scope=loss_scope)

That said, newer versions of the training script seem to accept a --label_weights={0,0.1,10} argument that sets the class weights directly. As for how to choose the weights, you need to inspect the per-class pixel ratios in the PNG masks (e.g. with MATLAB); I just picked 1:10 arbitrarily.
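As a MATLAB-free alternative, a minimal Python sketch that counts per-class pixel frequencies over the raw masks from step 4.2; the inverse frequencies give a reasonable starting point for the weights:

# Count per-class pixel frequencies in the single-channel masks.
import glob
import os
import numpy as np
from PIL import Image

mask_dir = "/home/ubuntu/Downloads/dataset/train_voc/SegmentationClassRaw"
counts = np.zeros(256, dtype=np.int64)  # one bin per possible label value

for path in glob.glob(os.path.join(mask_dir, "*.png")):
    mask = np.array(Image.open(path))
    counts += np.bincount(mask.ravel(), minlength=256)

total = counts.sum()
for value in np.nonzero(counts)[0]:
    print("label %3d: %.4f%% of pixels" % (value, 100.0 * counts[value] / total))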

Step 6: Download the pretrained weights

(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ wget -nd -c http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz
(tf) ubuntu@ubuntu:~/models/research/deeplab/datasets$ tar -zxvf deeplabv3_mnv2_pascal_train_aug_2018_01_29.tar.gz

Step 7: Train. The training parameters follow the OAK official examples:

(tf) ubuntu@ubuntu:~/models/research$ CUDA_VISIBLE_DEVICES=0 python3 deeplab/train.py --logtostderr --training_number_of_steps=3000 --train_split="train" --model_variant="mobilenet_v2" --output_stride=8 --fine_tune_batch_norm=true --label_weights={0,0.1,10} --train_batch_size=2 --train_crop_size="481,641" --dataset="mydata" --tf_initial_checkpoint='/home/ubuntu/models/research/deeplab/datasets/deeplabv3_mnv2_pascal_train_aug/model.ckpt-30000' --train_logdir='/home/ubuntu/models/research/deeplab/datasets/result' --dataset_dir='/home/ubuntu/models/research/deeplab/datasets/datasetData'

Training log. If you hit errors midway that aren't configuration errors, try CPU training first; GPU training fails occasionally, which feels like a bug.

INFO:tensorflow:Recording summary at step 2932.
I0911 13:25:50.207441 140671436445440 supervisor.py:1050] Recording summary at step 2932.
INFO:tensorflow:global step 2940: loss = 0.0116 (3.233 sec/step)
I0911 13:26:13.105489 140674671449920 learning.py:507] global step 2940: loss = 0.0116 (3.233 sec/step)
INFO:tensorflow:global step 2950: loss = 0.0139 (3.186 sec/step)
I0911 13:26:44.115453 140674671449920 learning.py:507] global step 2950: loss = 0.0139 (3.186 sec/step)
INFO:tensorflow:global step 2960: loss = 0.0166 (3.240 sec/step)
I0911 13:27:14.827654 140674671449920 learning.py:507] global step 2960: loss = 0.0166 (3.240 sec/step)
INFO:tensorflow:global step 2970: loss = 0.0158 (3.210 sec/step)
I0911 13:27:46.387307 140674671449920 learning.py:507] global step 2970: loss = 0.0158 (3.210 sec/step)
INFO:tensorflow:global step 2980: loss = 0.0152 (3.233 sec/step)
I0911 13:28:17.760236 140674671449920 learning.py:507] global step 2980: loss = 0.0152 (3.233 sec/step)
INFO:tensorflow:global step 2990: loss = 0.0165 (3.007 sec/step)
I0911 13:28:49.145774 140674671449920 learning.py:507] global step 2990: loss = 0.0165 (3.007 sec/step)
INFO:tensorflow:global step 3000: loss = 0.0178 (3.238 sec/step)
I0911 13:29:20.179072 140674671449920 learning.py:507] global step 3000: loss = 0.0178 (3.238 sec/step)
INFO:tensorflow:Stopping Training.
I0911 13:29:20.179394 140674671449920 learning.py:777] Stopping Training.
INFO:tensorflow:Finished training! Saving model to disk.
I0911 13:29:20.179480 140674671449920 learning.py:785] Finished training! Saving model to disk.
/home/ubuntu/miniconda3/envs/tf/lib/python3.6/site-packages/tensorflow_core/python/summary/writer/writer.py:386: UserWarning: Attempting to use a closed FileWriter. The operation will be a noop unless the FileWriter is explicitly reopened.
warnings.warn("Attempting to use a closed FileWriter.

Step 8: Evaluate

(tf) ubuntu@ubuntu:~/models/research$ python deeplab/eval.py --logtostderr --eval_split="val" --model_variant="mobilenet_v2" --eval_crop_size="481,641" --dataset="mydata" --output_stride=8 --checkpoint_dir=/home/ubuntu/models/research/deeplab/datasets/result --eval_logdir=/home/ubuntu/models/research/deeplab/datasets/log --dataset_dir=/home/ubuntu/models/research/deeplab/datasets/datasetData --max_number_of_evaluations=1

Evaluation results (class_2 is nan, presumably because the third registered label never appears in the val masks):

eval/miou_1.0_class_1[0.552526116]
eval/miou_1.0_class_2[nan]
eval/miou_1.0_class_0[0.901511848]
eval/miou_1.0_overall[0.727019072]

Step 9: Visualize

(tf) ubuntu@ubuntu:~/models/research$ python deeplab/vis.py --logtostderr --vis_split="val" --model_variant="mobilenet_v2" --vis_crop_size="481,641" --dataset="mydata" --colormap_type="mydata" --output_stride=8 --checkpoint_dir=/home/ubuntu/models/research/deeplab/datasets/result --vis_logdir=/home/ubuntu/models/research/deeplab/datasets/log --dataset_dir=/home/ubuntu/models/research/deeplab/datasets/datasetData --max_number_of_iterations=1  --also_save_raw_predictions=True

Visualization results (figures: segmentation overlays on two randomly chosen images):

Step 10: Export the pb model and test it

(tf) ubuntu@ubuntu:~/models/research$ python3 deeplab/export_model.py --logtostderr --checkpoint_path=/home/ubuntu/models/research/deeplab/datasets/result/model.ckpt-3000 --model_variant="mobilenet_v2" --crop_size=481 --crop_size=641  --inference_scales=1.0 --num_classes=3 --export_path=/home/ubuntu/models/research/deeplab/datasets/log/export/frozen_inference_graph.pb
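Before writing test code, it's worth sanity-checking the exported graph's input and output nodes; a minimal sketch (the expected tensor names match the test script below):

# List the exported graph's placeholder input and final node.
import tensorflow as tf

pb_path = "/home/ubuntu/models/research/deeplab/datasets/log/export/frozen_inference_graph.pb"
graph_def = tf.GraphDef()
with open(pb_path, "rb") as f:
    graph_def.ParseFromString(f.read())
print([n.name for n in graph_def.node if n.op == "Placeholder"])  # expect ['ImageTensor']
print(graph_def.node[-1].name)  # final node, should be 'SemanticPredictions'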

Test code:

#!/usr/bin/env python
# coding: utf-8

import os
from io import BytesIO
import tarfile
import tempfile
from six.moves import urllib
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
from PIL import Image
import tensorflow as tf
import scipy

LABEL_NAMES = np.asarray(["background", "class1", "class2"])


class DeepLabModel(object):
    """Class to load deeplab model and run inference."""

    INPUT_TENSOR_NAME = "ImageTensor:0"
    OUTPUT_TENSOR_NAME = "SemanticPredictions:0"
    INPUT_SIZE = 321
    FROZEN_GRAPH_NAME = "frozen_inference_graph"

    def __init__(self, modelname):
        """Creates and loads pretrained deeplab model."""
        self.graph = tf.Graph()
        graph_def = None

        with open(modelname, "rb") as fd:
            graph_def = tf.GraphDef.FromString(fd.read())

        if graph_def is None:
            raise RuntimeError("Cannot find inference graph in tar archive.")

        with self.graph.as_default():
            tf.import_graph_def(graph_def, name="")

        self.sess = tf.Session(graph=self.graph)

    def run(self, image):
        """Runs inference on a single image.

        Args:
            image: A PIL.Image object, raw input image.

        Returns:
            resized_image: RGB image resized from original input image.
            seg_map: Segmentation map of `resized_image`.
        """
        width, height = image.size
        resize_ratio = 1.0 * self.INPUT_SIZE / max(width, height)
        target_size = (int(resize_ratio * width), int(resize_ratio * height))
        resized_image = image.convert("RGB").resize(target_size, Image.ANTIALIAS)
        batch_seg_map = self.sess.run(
            self.OUTPUT_TENSOR_NAME,
            feed_dict={self.INPUT_TENSOR_NAME: [np.asarray(resized_image)]},
        )
        seg_map = batch_seg_map[0]
        return resized_image, seg_map


def create_pascal_label_colormap():
    """Creates a label colormap used in PASCAL VOC segmentation benchmark.

    Returns:
        A Colormap for visualizing segmentation results.
    """
    colormap = np.zeros((256, 3), dtype=int)
    ind = np.arange(256, dtype=int)

    for shift in reversed(range(8)):
        for channel in range(3):
            colormap[:, channel] |= ((ind >> channel) & 1) << shift
        ind >>= 3

    return colormap


# Map a label map to a color image
def label_to_color_image(label):
    """Adds color defined by the dataset colormap to the label.

    Args:
        label: A 2D array with integer type, storing the segmentation label.

    Returns:
        result: A 2D array with floating type. The element of the array
        is the color indexed by the corresponding element in the input label
        to the PASCAL color map.

    Raises:
        ValueError: If label is not of rank 2 or its value is larger than color
        map maximum entry.
    """
    if label.ndim != 2:
        raise ValueError("Expect 2-D input label")

    colormap = create_pascal_label_colormap()

    if np.max(label) >= len(colormap):
        raise ValueError("label value too large.")

    return colormap[label]


# Visualize the segmentation result
def vis_segmentation(image, seg_map, name):
    """Visualizes input image, segmentation map and overlay view."""
    plt.figure(figsize=(15, 5))
    grid_spec = gridspec.GridSpec(1, 4, width_ratios=[6, 6, 6, 1])

    plt.subplot(grid_spec[0])
    plt.imshow(image)
    plt.axis("off")
    plt.title("input image")

    plt.subplot(grid_spec[1])
    seg_image = label_to_color_image(seg_map).astype(np.uint8)
    plt.imshow(seg_image)
    plt.axis("off")
    plt.title("segmentation map")

    plt.subplot(grid_spec[2])
    plt.imshow(image)
    plt.imshow(seg_image, alpha=0.7)
    plt.axis("off")
    plt.title("segmentation overlay")

    unique_labels = np.unique(seg_map)
    ax = plt.subplot(grid_spec[3])
    plt.imshow(FULL_COLOR_MAP[unique_labels].astype(np.uint8), interpolation="nearest")
    ax.yaxis.tick_right()
    plt.yticks(range(len(unique_labels)), LABEL_NAMES[unique_labels])
    plt.xticks([], [])
    ax.tick_params(width=0.0)
    plt.grid("off")
    if not os.path.exists("./seg_map_result/"):
        os.mkdir("./seg_map_result/")
    plt.savefig("./seg_map_result/" + name + ".png")
    # plt.show()


FULL_LABEL_MAP = np.arange(len(LABEL_NAMES)).reshape(len(LABEL_NAMES), 1)
FULL_COLOR_MAP = label_to_color_image(FULL_LABEL_MAP)


def main_test(filepath):
    # Load the model
    modelname = "/home/ubuntu/models/research/deeplab/datasets/log/export/frozen_inference_graph.pb"
    MODEL = DeepLabModel(modelname)
    print("model loaded successfully!")

    filelist = os.listdir(filepath)
    for item in filelist:
        print("process image of ", item)
        name = item.split(".jpg", 1)[0]
        original_im = Image.open(filepath + item)
        resized_im, seg_map = MODEL.run(original_im)

        # Stitch and save the visualization
        vis_segmentation(resized_im, seg_map, name)

        # Save the segmentation result separately
        # seg_map_name = name + '_seg.png'
        # resized_im_name = name + '_in.png'
        # path = './seg_map_result/'
        # scipy.misc.imsave(path + resized_im_name, resized_im)
        # scipy.misc.imsave(path + seg_map_name, seg_map)


if __name__ == "__main__":
    filepath = "/home/ubuntu/Downloads/dataset/val_voc/JPEGImages/"
    main_test(filepath)

Test results:

(Figure: frozen-graph inference visualization)

Step 11: Convert the pb model to OpenVINO IR

ubuntu@ubuntu:~$ python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo_tf.py --input_model /home/ubuntu/models/research/deeplab/datasets/log/export/frozen_inference_graph.pb --model_name deeplab_v3_plus_mnv2_decoder_256 --data_type FP16 --input_shape [1,256,256,3] --reverse_input_channels --output_dir ./

....
While validating node 'v0::Concat Concat_1003 (strided_slice_10/stack_1/Unsqueeze[0]:i32{1}, strided_slice_10/stack_1/Unsqueeze1082[0]:i32{1}, strided_slice_10/stack_1/Unsqueeze1084[0]:i32{1}, strided_slice_10/extend_end_const1243231247[0]:i64{1}) -> ()' with friendly_name 'Concat_1003':
Argument element types are inconsistent.

[ WARNING ] Using fallback to produce IR.
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/ubuntu/yy/deeplab_v3_plus_mnv2_decoder_256.xml
[ SUCCESS ] BIN file: /home/ubuntu/yy/deeplab_v3_plus_mnv2_decoder_256.bin
[ SUCCESS ] Total execution time: 18.66 seconds.
[ SUCCESS ] Memory consumed: 384 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2021_bu_IOTG_OpenVINO-2021-4-LTS&content=upg_all&medium=organic or on the GitHub*

Then convert the model with BlobConverter (Convert model to MyriadX blob).

As the warning in the conversion output above hints, we need to change i64 to i32.

At this point, since OpenVINO version 2020.3, the obtained .xml is broken and will not let you directly create a .blob. To fix this, we need to change the following code located in .xml:

<layer id="490" name="strided_slice_10/extend_end_const1243231247" type="Const" version="opset1">
    <data element_type="i64" offset="924018" shape="1" size="8"/>
    <output>
        <port id="0" precision="I64">
            <dim>1</dim>
        </port>
    </output>
</layer>
We need to change element_type to i32 instead of i64. The edited line should look like this:

<data element_type="i32" offset="924018" shape="1" size="8"/>
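The same one-line fix can be scripted instead of edited by hand; a minimal sketch, assuming your .xml contains exactly the offending line shown above (the offset value may differ per export):

# Patch element_type i64 -> i32 on the broken Const layer in the generated IR.
xml_path = "/home/ubuntu/yy/deeplab_v3_plus_mnv2_decoder_256.xml"
with open(xml_path) as f:
    content = f.read()
content = content.replace(
    '<data element_type="i64" offset="924018" shape="1" size="8"/>',
    '<data element_type="i32" offset="924018" shape="1" size="8"/>',
)
with open(xml_path, "w") as f:
    f.write(content)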

Converting on the web page:

(Figure: conversion on the BlobConverter web UI)

Step 12: Test on the OAK depth camera

Alternatively, you can convert the pb model to ONNX first and then to blob:

ubuntu@ubuntu:~$ python3 -m tf2onnx.convert --graphdef frozen_inference_graph.pb --output model.onnx --inputs ImageTensor:0 --outputs SemanticPredictions:0
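From there, the luxonis blobconverter package (pip install blobconverter) can produce the MyriadX blob without the web UI; a hedged sketch, assuming the package's from_onnx helper is available in your version:

# Convert the ONNX model to a MyriadX blob via the BlobConverter service.
import blobconverter

blob_path = blobconverter.from_onnx(
    model="model.onnx",
    data_type="FP16",
    shaves=6,  # number of SHAVE cores to compile for
)
print(blob_path)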

Test code:

#!/usr/bin/env python3

import cv2
import depthai as dai
import numpy as np
import argparse
import time

'''
Deeplabv3 multiclass running on selected camera.
Run as:
python3 -m pip install -r requirements.txt
python3 main.py -cam rgb
Possible input choices (-cam):
'rgb', 'left', 'right'

Blob is taken from ML training examples:
https://github.com/luxonis/depthai-ml-training/tree/master/colab-notebooks

You can clone the DeepLabV3plus_MNV2.ipynb notebook and try training the model yourself.
'''

num_of_classes = 1  # number of classes in the dataset
cam_options = ['rgb', 'left', 'right']

parser = argparse.ArgumentParser()
parser.add_argument("-cam", "--cam_input", help="select camera input source for inference", default='rgb', choices=cam_options)
parser.add_argument("-nn", "--nn_model", help="select model path for inference", default='/home/ubuntu/yy/deeplab_v3_plus_mnv2_decoder_256_3.blob', type=str)

args = parser.parse_args()

cam_source = args.cam_input
nn_path = args.nn_model

nn_shape = 256


def decode_deeplabv3p(output_tensor):
    output = output_tensor.reshape(nn_shape, nn_shape)

    # scale to [0 ... 255] and apply colormap
    output = np.array(output) * (255 / num_of_classes)
    output = output.astype(np.uint8)
    output_colors = cv2.applyColorMap(output, cv2.COLORMAP_JET)

    # reset the color of the background class (0)
    output_colors[output == 0] = [0, 0, 0]

    return output_colors


def show_deeplabv3p(output_colors, frame):
    return cv2.addWeighted(frame, 1, output_colors, 0.4, 0)


# Start defining a pipeline
pipeline = dai.Pipeline()

pipeline.setOpenVINOVersion(version=dai.OpenVINO.VERSION_2021_4)

# Define a neural network that will make predictions based on the source frames
detection_nn = pipeline.create(dai.node.NeuralNetwork)
detection_nn.setBlobPath(nn_path)

detection_nn.setNumPoolFrames(4)
detection_nn.input.setBlocking(False)
detection_nn.setNumInferenceThreads(2)

cam = None
# Define a source - color camera
if cam_source == 'rgb':
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(nn_shape, nn_shape)
    cam.setInterleaved(False)
    cam.preview.link(detection_nn.input)
elif cam_source == 'left':
    cam = pipeline.create(dai.node.MonoCamera)
    cam.setBoardSocket(dai.CameraBoardSocket.LEFT)
elif cam_source == 'right':
    cam = pipeline.create(dai.node.MonoCamera)
    cam.setBoardSocket(dai.CameraBoardSocket.RIGHT)

if cam_source != 'rgb':
    # mono cameras need a resize/convert stage before the NN input
    manip = pipeline.create(dai.node.ImageManip)
    manip.setResize(nn_shape, nn_shape)
    manip.setKeepAspectRatio(True)
    manip.setFrameType(dai.RawImgFrame.Type.BGR888p)
    cam.out.link(manip.inputImage)
    manip.out.link(detection_nn.input)

cam.setFps(40)

# Create outputs
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("nn_input")
xout_rgb.input.setBlocking(False)

detection_nn.passthrough.link(xout_rgb.input)

xout_nn = pipeline.create(dai.node.XLinkOut)
xout_nn.setStreamName("nn")
xout_nn.input.setBlocking(False)

detection_nn.out.link(xout_nn.input)

# Pipeline defined, now the device is assigned and pipeline is started
with dai.Device() as device:
    cams = device.getConnectedCameras()
    depth_enabled = dai.CameraBoardSocket.LEFT in cams and dai.CameraBoardSocket.RIGHT in cams
    if cam_source != "rgb" and not depth_enabled:
        raise RuntimeError("Unable to run the experiment on {} camera! Available cameras: {}".format(cam_source, cams))
    device.startPipeline(pipeline)

    # Output queues will be used to get the rgb frames and nn data from the outputs defined above
    q_nn_input = device.getOutputQueue(name="nn_input", maxSize=4, blocking=False)
    q_nn = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    start_time = time.time()
    counter = 0
    fps = 0
    layer_info_printed = False
    while True:
        # blocking get() keeps the frame and its NN output in step
        in_nn_input = q_nn_input.get()
        in_nn = q_nn.get()

        frame = in_nn_input.getCvFrame()

        layers = in_nn.getAllLayers()

        # the first output layer holds the per-pixel class indices
        lay1 = np.array(in_nn.getFirstLayerInt32()).reshape(nn_shape, nn_shape)

        found_classes = np.unique(lay1)
        output_colors = decode_deeplabv3p(lay1)

        frame = show_deeplabv3p(output_colors, frame)
        cv2.putText(frame, "NN fps: {:.2f}".format(fps), (2, frame.shape[0] - 4), cv2.FONT_HERSHEY_TRIPLEX, 0.4, (255, 0, 0))
        cv2.putText(frame, "Found classes {}".format(found_classes), (2, 10), cv2.FONT_HERSHEY_TRIPLEX, 0.4, (255, 0, 0))
        cv2.imshow("nn_input", frame)

        counter += 1
        if (time.time() - start_time) > 1:
            fps = counter / (time.time() - start_time)
            counter = 0
            start_time = time.time()

        if cv2.waitKey(1) == ord('q'):
            break

Test results:

(Figure: live segmentation overlay from the OAK camera)

The official person-segmentation example is 256×256×1 (single channel) and runs at about 27 fps; multiclass models run at roughly 3 fps, about the same as mine. Worth digging into why the single-class model is so much faster.

References

https://github.com/luxonis/depthai-ml-training/blob/master/colab-notebooks/DeepLabV3plus_MNV2.ipynb
