
YOLOv8-seg Training and Inference


1. Introduction to YOLOv8-seg

  YOLOv8-seg is one of the models in the YOLO family. It inherits the efficiency and accuracy of the YOLO series while adding instance segmentation capability.

2. Dataset

  The dataset used here is fairly simple and is organized into the following directories:

  images: the original images (1,500 of them), each 128x128.

 

  images_json: the labelme-annotated JSON files together with the original images.

  masks: single-channel mask images.

 

  mask_txt: text files recording the per-pixel values of each label mask in masks.

  palette_mask: palette (pseudo-color) versions of the label masks.

  In fact, only images and images_json are needed for this training task.
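  For reference, a single labelme JSON file has roughly the following structure (a minimal illustrative sketch; the values are made up, but the keys are exactly the ones the conversion script below relies on):

{
  "version": "5.0.1",
  "shapes": [
    {"label": "circle", "points": [[12.0, 34.0], [15.0, 40.0], [20.0, 38.0]], "shape_type": "polygon"},
    {"label": "rect", "points": [[50.0, 60.0], [90.0, 100.0]], "shape_type": "rectangle"}
  ],
  "imagePath": "0001.png",
  "imageHeight": 128,
  "imageWidth": 128
}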

 

3. Downloading the Package

  You need ultralytics, either downloaded from GitHub or installed via pip (pip installs only the ultralytics library itself). The GitHub download is recommended because it is more complete, including examples and documentation.

  Official GitHub repository: https://github.com/ultralytics/ultralytics

  Official documentation: https://docs.ultralytics.com/

  After downloading, the examples and ultralytics directories are the main ones to look at.
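  If you go the pip route, the installation is a one-liner (onnx and onnxruntime are needed later for export and inference; use onnxruntime-gpu instead if you plan to run inference on a GPU):

pip install ultralytics
pip install onnx onnxruntime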

 

4. Dataset Format and Files for YOLOv8-seg Training

  When performing instance segmentation, YOLOv8-seg first runs object detection to find the objects in the image and then segments them. Training therefore needs the segmentation pretrained weights yolov8n-seg.pt as well as the corresponding detection weights yolov8n.pt. With a working network connection you do not have to fetch them yourself: when the program detects that the files are missing, it downloads them automatically. Both files can also be downloaded from the official site or elsewhere online; a Baidu netdisk link is provided as well: https://pan.baidu.com/s/1Tkzi8bflpIuGTIqR18AFOg (extraction code: hniz)

 

  4.1 Splitting the Dataset and Generating the YAML File

# -*- coding: utf-8 -*-
from tqdm import tqdm
import shutil
import random
import os
import argparse
from collections import Counter
import yaml
import json


# Create the directory if it does not exist
def mkdir(path):
    if not os.path.exists(path):
        os.makedirs(path)

def convert_to_polygon(point1, point2):
    # Expand a labelme rectangle (two opposite corners) into a 4-point polygon
    x1, y1 = point1
    x2, y2 = point2
    return [[x1, y1], [x2, y1], [x2, y2], [x1, y2]]


def convert_label_json(json_dir, save_dir, classes):
    json_paths = [f for f in os.listdir(json_dir) if f.endswith('.json')]
    classes = classes.split(',')
    mkdir(save_dir)

    for json_path in tqdm(json_paths):
        path = os.path.join(json_dir, json_path)
        with open(path, 'r', encoding='utf-8') as load_f:
            json_dict = json.load(load_f)
        h, w = json_dict['imageHeight'], json_dict['imageWidth']

        # Output txt path: swap the .json extension for .txt, keeping the stem
        txt_path = os.path.join(save_dir, os.path.splitext(json_path)[0] + '.txt')

        with open(txt_path, 'w') as txt_file:
            for shape_dict in json_dict['shapes']:
                shape_type = shape_dict.get('shape_type', None)
                label = shape_dict['label']
                label_index = classes.index(label)
                points = shape_dict['points']
                # A labelme rectangle stores only two opposite corners; expand it to a polygon
                if shape_type == "rectangle":
                    points = convert_to_polygon(points[0], points[1])

                # Normalize every coordinate by the image width/height
                points_nor_list = []
                for point in points:
                    points_nor_list.append(point[0] / w)
                    points_nor_list.append(point[1] / h)

                points_nor_str = ' '.join(str(x) for x in points_nor_list)
                txt_file.write(str(label_index) + ' ' + points_nor_str + '\n')


def get_classes(json_dir):
    '''Count the occurrences of each class label across the JSON files under json_dir.'''
    names = []
    json_files = [os.path.join(json_dir, f) for f in os.listdir(json_dir) if f.endswith('.json')]

    for json_path in json_files:
        with open(json_path, 'r', encoding='utf-8') as f:
            data = json.load(f)
            for shape in data['shapes']:
                name = shape['label']
                names.append(name)

    result = Counter(names)
    return result


def main(image_dir, json_dir, txt_dir, save_dir):
    # Create the output directory tree
    mkdir(save_dir)
    images_dir = os.path.join(save_dir, 'images')
    labels_dir = os.path.join(save_dir, 'labels')

    img_train_path = os.path.join(images_dir, 'train')
    img_val_path = os.path.join(images_dir, 'val')

    label_train_path = os.path.join(labels_dir, 'train')
    label_val_path = os.path.join(labels_dir, 'val')

    mkdir(images_dir)
    mkdir(labels_dir)
    mkdir(img_train_path)
    mkdir(img_val_path)
    mkdir(label_train_path)
    mkdir(label_val_path)

    # Split ratio: 90% train, 10% val; adjust as needed
    train_percent = 0.90
    val_percent = 0.10

    total_txt = os.listdir(txt_dir)
    num_txt = len(total_txt)
    list_all_txt = range(num_txt)  # indices 0 .. num_txt-1

    num_train = int(num_txt * train_percent)

    # Randomly pick the training indices; everything else goes to validation
    train = random.sample(list_all_txt, num_train)
    val = [i for i in list_all_txt if i not in train]

    print("Training set size: {}, validation set size: {}".format(len(train), len(val)))
    for i in list_all_txt:
        name = total_txt[i][:-4]

        srcImage = os.path.join(image_dir, name + '.png')  # change '.png' to '.jpg' if your images are jpg
        srcLabel = os.path.join(txt_dir, name + '.txt')

        if i in train:
            dst_train_Image = os.path.join(img_train_path, name + '.png')
            dst_train_Label = os.path.join(label_train_path, name + '.txt')
            shutil.copyfile(srcImage, dst_train_Image)
            shutil.copyfile(srcLabel, dst_train_Label)
        else:
            dst_val_Image = os.path.join(img_val_path, name + '.png')
            dst_val_Label = os.path.join(label_val_path, name + '.txt')
            shutil.copyfile(srcImage, dst_val_Image)
            shutil.copyfile(srcLabel, dst_val_Label)

    obj_classes = get_classes(json_dir)
    classes = list(obj_classes.keys())

    # Write the dataset yaml file for training
    classes_txt = {i: classes[i] for i in range(len(classes))}  # index -> class name
    data = {
        'path': os.path.join(os.getcwd(), save_dir),
        'train': "images/train",
        'val': "images/val",
        'names': classes_txt,
        'nc': len(classes)
    }
    with open(os.path.join(save_dir, 'segment.yaml'), 'w', encoding="utf-8") as file:
        yaml.dump(data, file, allow_unicode=True)
    print("Classes:", dict(obj_classes))


if __name__ == "__main__":
  
    classes_list = 'circle,rect'  # 类名

    parser = argparse.ArgumentParser(description='json convert to txt params')
    parser.add_argument('--image-dir', type=str, default=r'D:\software\pythonworksapce\yolo8_seg_train\data\images', help='图片地址') #图片文件夹路径
    parser.add_argument('--json-dir', type=str, default=r'D:\software\pythonworksapce\yolo8_seg_train\data\json_out', help='json地址')#labelme标注的纯json文件夹路径
    parser.add_argument('--txt-dir', type=str, default=r'D:\software\pythonworksapce\yolo8_seg_train\train_data\save_txt', help='保存txt文件地址')#标注的坐标的txt文件存放的路径
    parser.add_argument('--save-dir', default=r'D:\software\pythonworksapce\yolo8_seg_train\train_data', type=str, help='保存最终分割好的数据集地址')#segment.yaml存放的路径
    parser.add_argument('--classes', type=str, default=classes_list, help='classes')
    args = parser.parse_args()
    json_dir = args.json_dir
    txt_dir = args.txt_dir
    image_dir = args.image_dir
    save_dir = args.save_dir
    classes = args.classes
    # json格式转txt格式
    convert_label_json(json_dir, txt_dir, classes)
    # 划分数据集,生成yaml训练文件
    main(image_dir, json_dir, txt_dir, save_dir)
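  Assuming the script is saved as split_dataset.py (a name chosen here just for illustration), a typical run that overrides the default paths looks like:

python split_dataset.py --image-dir data/images --json-dir data/json_out --txt-dir train_data/save_txt --save-dir train_data --classes circle,rect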

  Note that the dataset generated by the code above only supports polygon and rectangle annotations.

  After the split, the following is generated under train_data:

   images contains train and val subfolders, each holding the original images.

   labels contains train and val subfolders, each holding one txt file per image. Every line starts with the numeric class index, followed by the normalized coordinates x1 y1 x2 y2 x3 y3 ..., for example:
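   For instance, the three-point polygon of class 0 (circle) from the sample JSON shown earlier, normalized by the 128x128 image size, becomes the line:

0 0.09375 0.265625 0.1171875 0.3125 0.15625 0.296875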

   save_txt is an intermediate directory used to build labels.

   segment.yaml is the configuration file needed for training; nc is the number of classes. Its contents look like this:
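   For the two classes used here, the generated segment.yaml looks roughly like this (yaml.dump sorts the keys alphabetically, and path is whatever --save-dir resolves to):

names:
  0: circle
  1: rect
nc: 2
path: D:\software\pythonworksapce\yolo8_seg_train\train_data
train: images/train
val: images/val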

5. Training

from ultralytics import YOLO


if __name__ == '__main__':
    model = YOLO(r"D:\software\pythonworksapce\yolo8_seg_train\yolov8n-seg.yaml",task="segment").load(r"./yolov8n-seg.pt")  # build from YAML and transfer weights
    results = model.train(data=r"D:\software\pythonworksapce\yolo8_seg_train\train_data\segment.yaml", epochs=200,imgsz=128, device=[0])

  Note: we pass yolov8n-seg.yaml even though only the yolov8-seg.yaml file actually exists; the extra 'n' in the name makes the program automatically scale to the smallest (nano) model. See the comments in the yolov8-seg.yaml source file.
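  Equivalently, the same run can be launched with the yolo command-line interface that ships with the package:

yolo segment train data=train_data/segment.yaml model=yolov8n-seg.pt epochs=200 imgsz=128 device=0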

6. Exporting to ONNX

from ultralytics import YOLO

# Load a model
# model = YOLO("yolo11n.pt")  # load an official model
model = YOLO(r"D:\software\pythonworksapce\yolo8_seg_train\runs\segment\train\weights\best.pt")  # load a custom trained model

# Export the model
model.export(format="onnx")
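  The export call also accepts the standard Ultralytics export arguments; for instance, imgsz pins the ONNX input resolution to the training size, and dynamic allows variable input shapes:

model.export(format="onnx", imgsz=128)  # fix the input to the 128x128 training resolution
# model.export(format="onnx", dynamic=True)  # alternatively, allow dynamic input shapes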

7. ONNX Inference

  You can use the ONNX inference example that ships with ultralytics.

   I added a few custom helper functions on top of it; the inference code and results are as follows:

import argparse
import os

from datetime import datetime
import cv2
import numpy as np
import onnxruntime as ort

from ultralytics.utils import ASSETS, yaml_load
from ultralytics.utils.checks import check_yaml
from ultralytics.utils.plotting import Colors


class YOLOv8Seg:
    """YOLOv8 segmentation model."""

    def __init__(self, onnx_model, yaml_path="coco128.yaml"):
        """
        Initialization.

        Args:
            onnx_model (str): Path to the ONNX model.
        """

        # Build Ort session
        self.session = ort.InferenceSession(onnx_model,
                                            providers=['CUDAExecutionProvider', 'CPUExecutionProvider']
                                            if ort.get_device() == 'GPU' else ['CPUExecutionProvider'])

        # Numpy dtype: support both FP32 and FP16 onnx model
        self.ndtype = np.half if self.session.get_inputs()[0].type == 'tensor(float16)' else np.single

        # Get model height and width (YOLOv8-seg has a single input)
        self.model_height, self.model_width = [x.shape for x in self.session.get_inputs()][0][-2:]

        # Load COCO class names
        self.classes = yaml_load(check_yaml(yaml_path))['names']

        # Create color palette
        self.color_palette = Colors()

    def __call__(self, im0, conf_threshold=0.4, iou_threshold=0.45, nm=32):
        """
        The whole pipeline: pre-process -> inference -> post-process.

        Args:
            im0 (Numpy.ndarray): original input image.
            conf_threshold (float): confidence threshold for filtering predictions.
            iou_threshold (float): iou threshold for NMS.
            nm (int): the number of masks.

        Returns:
            boxes (List): list of bounding boxes.
            segments (List): list of segments.
            masks (np.ndarray): [N, H, W], output masks.
        """

        # Pre-process
        im, ratio, (pad_w, pad_h) = self.preprocess(im0)
        print("im.shape", im.shape)
        # Ort inference
        preds = self.session.run(None, {self.session.get_inputs()[0].name: im})

        # Post-process
        boxes, segments, masks = self.postprocess(preds,
                                                  im0=im0,
                                                  ratio=ratio,
                                                  pad_w=pad_w,
                                                  pad_h=pad_h,
                                                  conf_threshold=conf_threshold,
                                                  iou_threshold=iou_threshold,
                                                  nm=nm)
        return boxes, segments, masks

    def preprocess(self, img):
        """
        Pre-processes the input image.

        Args:
            img (Numpy.ndarray): image about to be processed.

        Returns:
            img_process (Numpy.ndarray): image preprocessed for inference.
            ratio (tuple): width, height ratios in letterbox.
            pad_w (float): width padding in letterbox.
            pad_h (float): height padding in letterbox.
        """

        # Resize and pad input image using letterbox() (Borrowed from Ultralytics)
        shape = img.shape[:2]  # original image shape
        new_shape = (self.model_height, self.model_width)
        r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
        ratio = r, r
        new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
        pad_w, pad_h = (new_shape[1] - new_unpad[0]) / 2, (new_shape[0] - new_unpad[1]) / 2  # wh padding
        if shape[::-1] != new_unpad:  # resize
            img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
        top, bottom = int(round(pad_h - 0.1)), int(round(pad_h + 0.1))
        left, right = int(round(pad_w - 0.1)), int(round(pad_w + 0.1))
        img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=(114, 114, 114))

        # Transforms: HWC to CHW -> BGR to RGB -> div(255) -> contiguous -> add axis(optional)
        img = np.ascontiguousarray(np.einsum('HWC->CHW', img)[::-1], dtype=self.ndtype) / 255.0
        img_process = img[None] if len(img.shape) == 3 else img
        return img_process, ratio, (pad_w, pad_h)

    def postprocess(self, preds, im0, ratio, pad_w, pad_h, conf_threshold, iou_threshold, nm=32):
        """
        Post-process the prediction.

        Args:
            preds (Numpy.ndarray): predictions come from ort.session.run().
            im0 (Numpy.ndarray): [h, w, c] original input image.
            ratio (tuple): width, height ratios in letterbox.
            pad_w (float): width padding in letterbox.
            pad_h (float): height padding in letterbox.
            conf_threshold (float): conf threshold.
            iou_threshold (float): iou threshold.
            nm (int): the number of masks.

        Returns:
            boxes (List): list of bounding boxes.
            segments (List): list of segments.
            masks (np.ndarray): [N, H, W], output masks.
        """
        x, protos = preds[0], preds[1]  # Two outputs: predictions and protos

        # Transpose the first output: (Batch_size, xywh_conf_cls_nm, Num_anchors) -> (Batch_size, Num_anchors, xywh_conf_cls_nm)
        x = np.einsum('bcn->bnc', x)

        # Predictions filtering by conf-threshold
        x = x[np.amax(x[..., 4:-nm], axis=-1) > conf_threshold]

        # Create a new matrix which merge these(box, score, cls, nm) into one
        # For more details about `numpy.c_()`: https://numpy.org/doc/1.26/reference/generated/numpy.c_.html
        x = np.c_[x[..., :4], np.amax(x[..., 4:-nm], axis=-1), np.argmax(x[..., 4:-nm], axis=-1), x[..., -nm:]]

        # NMS filtering
        x = x[cv2.dnn.NMSBoxes(x[:, :4], x[:, 4], conf_threshold, iou_threshold)]

        # Decode and return
        if len(x) > 0:

            # Bounding boxes format change: cxcywh -> xyxy
            x[..., [0, 1]] -= x[..., [2, 3]] / 2
            x[..., [2, 3]] += x[..., [0, 1]]

            # Rescales bounding boxes from model shape(model_height, model_width) to the shape of original image
            x[..., :4] -= [pad_w, pad_h, pad_w, pad_h]
            x[..., :4] /= min(ratio)

            # Bounding boxes boundary clamp
            x[..., [0, 2]] = x[:, [0, 2]].clip(0, im0.shape[1])
            x[..., [1, 3]] = x[:, [1, 3]].clip(0, im0.shape[0])

            # Process masks
            masks = self.process_mask(protos[0], x[:, 6:], x[:, :4], im0.shape)

            # Masks -> Segments(contours)
            segments = self.masks2segments(masks)
            return x[..., :6], segments, masks  # boxes, segments, masks
        else:
            return [], [], []

    @staticmethod
    def masks2segments(masks):
        """
        It takes a list of masks(n,h,w) and returns a list of segments(n,xy) (Borrowed from
        https://github.com/ultralytics/ultralytics/blob/465df3024f44fa97d4fad9986530d5a13cdabdca/ultralytics/utils/ops.py#L750)

        Args:
            masks (numpy.ndarray): the output of the model, which is a tensor of shape (batch_size, 160, 160).

        Returns:
            segments (List): list of segment masks.
        """
        segments = []
        for x in masks.astype('uint8'):
            c = cv2.findContours(x, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[0]  # CHAIN_APPROX_SIMPLE
            if c:
                c = np.array(c[np.array([len(x) for x in c]).argmax()]).reshape(-1, 2)
            else:
                c = np.zeros((0, 2))  # no segments found
            segments.append(c.astype('float32'))
        return segments

    @staticmethod
    def crop_mask(masks, boxes):
        """
        It takes a mask and a bounding box, and returns a mask that is cropped to the bounding box. (Borrowed from
        https://github.com/ultralytics/ultralytics/blob/465df3024f44fa97d4fad9986530d5a13cdabdca/ultralytics/utils/ops.py#L599)

        Args:
            masks (Numpy.ndarray): [n, h, w] tensor of masks.
            boxes (Numpy.ndarray): [n, 4] tensor of bbox coordinates in relative point form.

        Returns:
            (Numpy.ndarray): The masks are being cropped to the bounding box.
        """
        n, h, w = masks.shape
        x1, y1, x2, y2 = np.split(boxes[:, :, None], 4, 1)
        r = np.arange(w, dtype=x1.dtype)[None, None, :]
        c = np.arange(h, dtype=x1.dtype)[None, :, None]
        return masks * ((r >= x1) * (r < x2) * (c >= y1) * (c < y2))

    def process_mask(self, protos, masks_in, bboxes, im0_shape):
        """
        Takes the output of the mask head, and applies the mask to the bounding boxes. This produces masks of higher quality
        but is slower. (Borrowed from https://github.com/ultralytics/ultralytics/blob/465df3024f44fa97d4fad9986530d5a13cdabdca/ultralytics/utils/ops.py#L618)

        Args:
            protos (numpy.ndarray): [mask_dim, mask_h, mask_w].
            masks_in (numpy.ndarray): [n, mask_dim], n is number of masks after nms.
            bboxes (numpy.ndarray): bboxes re-scaled to original image shape.
            im0_shape (tuple): the size of the input image (h,w,c).

        Returns:
            (numpy.ndarray): The upsampled masks.
        """
        c, mh, mw = protos.shape
        masks = np.matmul(masks_in, protos.reshape((c, -1))).reshape((-1, mh, mw)).transpose(1, 2, 0)  # HWN
        masks = np.ascontiguousarray(masks)
        masks = self.scale_mask(masks, im0_shape)  # re-scale mask from P3 shape to original input image shape
        masks = np.einsum('HWN -> NHW', masks)  # HWN -> NHW
        masks = self.crop_mask(masks, bboxes)
        return np.greater(masks, 0.5)

    @staticmethod
    def scale_mask(masks, im0_shape, ratio_pad=None):
        """
        Takes a mask, and resizes it to the original image size. (Borrowed from
        https://github.com/ultralytics/ultralytics/blob/465df3024f44fa97d4fad9986530d5a13cdabdca/ultralytics/utils/ops.py#L305)

        Args:
            masks (np.ndarray): resized and padded masks/images, [h, w, num]/[h, w, 3].
            im0_shape (tuple): the original image shape.
            ratio_pad (tuple): the ratio of the padding to the original image.

        Returns:
            masks (np.ndarray): The masks that are being returned.
        """
        im1_shape = masks.shape[:2]
        if ratio_pad is None:  # calculate from im0_shape
            gain = min(im1_shape[0] / im0_shape[0], im1_shape[1] / im0_shape[1])  # gain  = old / new
            pad = (im1_shape[1] - im0_shape[1] * gain) / 2, (im1_shape[0] - im0_shape[0] * gain) / 2  # wh padding
        else:
            pad = ratio_pad[1]

        # Calculate tlbr of mask
        top, left = int(round(pad[1] - 0.1)), int(round(pad[0] - 0.1))  # y, x
        bottom, right = int(round(im1_shape[0] - pad[1] + 0.1)), int(round(im1_shape[1] - pad[0] + 0.1))
        if len(masks.shape) < 2:
            raise ValueError(f'"len of masks shape" should be 2 or 3, but got {len(masks.shape)}')
        masks = masks[top:bottom, left:right]
        masks = cv2.resize(masks, (im0_shape[1], im0_shape[0]),
                           interpolation=cv2.INTER_LINEAR)  # INTER_CUBIC would be better
        if len(masks.shape) == 2:
            masks = masks[:, :, None]
        return masks

    def draw_and_visualize(self, im, bboxes, segments, vis=False, save=True):
        """
        Draw and visualize results.

        Args:
            im (np.ndarray): original image, shape [h, w, c].
            bboxes (numpy.ndarray): [n, 4], n is number of bboxes.
            segments (List): list of segment masks.
            vis (bool): imshow using OpenCV.
            save (bool): save image annotated.

        Returns:
            None
        """

        # Draw rectangles and polygons
        im_canvas = im.copy()
        for (*box, conf, cls_), segment in zip(bboxes, segments):
            # draw contour and fill mask
            cv2.polylines(im, np.int32([segment]), True, (255, 255, 255), 2)  # white borderline
            cv2.fillPoly(im_canvas, np.int32([segment]), self.color_palette(int(cls_), bgr=True))

            # draw bbox rectangle
            cv2.rectangle(im, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])),
                          self.color_palette(int(cls_), bgr=True), 1, cv2.LINE_AA)
            cv2.putText(im, f'{self.classes[cls_]}: {conf:.3f}', (int(box[0]), int(box[1] - 9)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.7, self.color_palette(int(cls_), bgr=True), 2, cv2.LINE_AA)

        # Mix image
        im = cv2.addWeighted(im_canvas, 0.3, im, 0.7, 0)

        # Show image
        if vis:
            cv2.imshow('demo', im)
            cv2.waitKey(0)
            cv2.destroyAllWindows()

        # Save image
        if save:
            # Name the output by timestamp: 'YYYYmmddHHMMSS' plus milliseconds
            now = datetime.now()
            formatted_time = now.strftime('%Y%m%d%H%M%S') + str(now.microsecond // 1000).zfill(3)
            cv2.imwrite(f'{formatted_time}.jpg', im)


# Custom helper (added on top of the Ultralytics example)
def load_yolov8_seg_onnx_model(onnx_path, yaml_path):
    yolov8_seg_model = YOLOv8Seg(onnx_path, yaml_path=yaml_path)

    return yolov8_seg_model


# Custom helper
def call_yolov8_seg_onnx_inference(img, yolov8_seg_model, conf=0.25, iou=0.45):
    boxes, segments, _ = yolov8_seg_model(img, conf_threshold=conf, iou_threshold=iou)
    return boxes, segments, _


# Custom helper
def get_points_rect_class(boxes, segments):
    for box, seg_points in zip(boxes, segments):
        class_index = int(box[-1])
        confidence = box[-2]
        # left-top corner
        left_top_x = box[0]
        left_top_y = box[1]
        # right-bottom corner
        right_bottom_x = box[2]
        right_bottom_y = box[3]

        x = int(left_top_x)
        y = int(left_top_y)
        w = int(right_bottom_x - left_top_x)
        h = int(right_bottom_y - left_top_y)
        seg_points = seg_points.astype(int)

        yield x, y, w, h, seg_points, class_index, confidence


# Custom helper
def get_image_paths(folder_path, extension=".png", is_use_extension=True):
    image_paths = []

    # Walk the directory tree
    for root, dirs, files in os.walk(folder_path):
        for file in files:
            # Keep the file when extension filtering is disabled or the extension matches
            if not is_use_extension or file.endswith(extension):
                image_path = os.path.join(root, file)
                image_paths.append(image_path)

    return image_paths


# Custom helper
def get_boxes_contour(points):
    contour = points.reshape((-1, 1, 2))
    return contour


if __name__ == '__main__':
    folder_path = r'D:\software\pythonworksapce\yolo8_seg_train\pre'
    onnx_path = r'D:\software\pythonworksapce\yolo8_seg_train\runs\segment\train\weights\best.onnx'
    yaml_path = r'D:\software\pythonworksapce\yolo8_seg_train\train_data\segment.yaml'
    yolov8_seg_model = load_yolov8_seg_onnx_model(onnx_path, yaml_path)
    images_paths = get_image_paths(folder_path, extension=".png")

    for img_path in images_paths:
        print("img_path", img_path)
        img = cv2.imread(img_path, 1)
        boxes, segments, _ = call_yolov8_seg_onnx_inference(img, yolov8_seg_model, conf=0.5, iou=0.4)  # per-image results
        if len(boxes) > 0:
            yolov8_seg_model.draw_and_visualize(img, boxes, segments, vis=False, save=True)

  The results on five test images are as follows:

                            Original images

                          Model inference results (corresponding one-to-one with the images above; the outputs are named by timestamp)

  Finally, I also trained on a road dataset to see how it performs.

  Dataset (geometric shapes) download:

  Shared file: data.zip
  Link: https://pan.baidu.com/s/1ZGnxNYz2pynRC1EtSAagjw (extraction code: awie)

 

  Summary: this post only walks through training the YOLOv8-seg model and does not explain the model architecture; that will be covered in a follow-up. Also, for ONNX image inference this post reuses the YOLOv8Seg class bundled with ultralytics, which introduces some redundancy, e.g. it imports packages such as torch that are not actually needed. Readers can study the code, extract the core parts, and define their own pre-processing and post-processing functions.
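  As a starting point for that, note that the class above uses only three ultralytics symbols: yaml_load, check_yaml and Colors. A minimal sketch of drop-in replacements (illustrative only, not the library API) keeps the dependencies down to numpy, opencv-python, onnxruntime and pyyaml:

import yaml


def load_class_names(yaml_path):
    # Plain-yaml replacement for ultralytics' yaml_load(check_yaml(...))['names']
    with open(yaml_path, 'r', encoding='utf-8') as f:
        return yaml.safe_load(f)['names']


class SimpleColors:
    # Tiny stand-in for ultralytics.utils.plotting.Colors: a fixed palette cycled by class index
    def __init__(self):
        self.palette = [(255, 56, 56), (151, 157, 255), (255, 112, 31), (29, 178, 255)]  # RGB tuples

    def __call__(self, i, bgr=False):
        c = self.palette[int(i) % len(self.palette)]
        return (c[2], c[1], c[0]) if bgr else c

  With these, self.classes = yaml_load(check_yaml(yaml_path))['names'] becomes load_class_names(yaml_path), self.color_palette = Colors() becomes SimpleColors(), and the torch import chain disappears.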

 

  If anything here falls short, comments and corrections are welcome.

From: https://www.cnblogs.com/wancy/p/18442457
