
Deep Learning Object Detection: Training YOLOv5 on a Substation Meter Dataset of 6,000+ Labeled Images to Build a Meter Detection Project


This article shows how to use YOLOv5 to train on a substation meter dataset of more than 6,000 images, all already labeled in YOLO format with clear image quality, and to build a complete meter detection project.


All code below is provided for reference only!

We build a YOLOv5-based substation meter detection system around this dataset. The steps and code examples below cover environment setup, model training, metric visualization, and a PyQt5 GUI.

Dataset Structure

Assume your dataset is ready and stored in YOLO format. The standard layout is as follows:

dataset/
├── images/
│   ├── train/
│   │   ├── image1.jpg
│   │   ├── image2.jpg
│   │   └── ...
│   ├── val/
│   │   ├── image3.jpg
│   │   ├── image4.jpg
│   │   └── ...
│   └── test/
│       ├── image5.jpg
│       ├── image6.jpg
│       └── ...
├── labels/
│   ├── train/
│   │   ├── image1.txt
│   │   ├── image2.txt
│   │   └── ...
│   ├── val/
│   │   ├── image3.txt
│   │   ├── image4.txt
│   │   └── ...
│   └── test/
│       ├── image5.txt
│       ├── image6.txt
│       └── ...
└── classes.txt

The contents of classes.txt are:

meter1
meter2
meter3
...
# assuming several meter types; only a few examples are listed here

Each image has a corresponding label file: a plain-text file in which each line describes one bounding box in the format:

<class_id> <x_center> <y_center> <width> <height>
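
For example, a label file with two boxes might look like this (values are illustrative; all coordinates are normalized to [0, 1] relative to image width and height):

0 0.512 0.431 0.120 0.215
3 0.270 0.655 0.085 0.140

To convert a normalized YOLO box back to pixel corner coordinates, a minimal sketch:

def yolo_to_xyxy(x_center, y_center, width, height, img_w, img_h):
    # YOLO stores the box center and size as fractions of the image size
    x1 = (x_center - width / 2) * img_w
    y1 = (y_center - height / 2) * img_h
    x2 = (x_center + width / 2) * img_w
    y2 = (y_center + height / 2) * img_h
    return int(x1), int(y1), int(x2), int(y2)

print(yolo_to_xyxy(0.512, 0.431, 0.120, 0.215, 1920, 1080))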

Environment Setup

First, make sure the necessary libraries are installed. The detailed setup steps:

Install dependencies

# Create a virtual environment (optional)
conda create -n substation_meter_detection_env python=3.9
conda activate substation_meter_detection_env

# Install PyTorch (CUDA 11.1 build)
pip install torch==1.9.0 torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu111

# Install the remaining dependencies (YOLOv5 itself is cloned from GitHub later,
# so the standalone yolov5 pip package is not needed)
pip install opencv-python pyqt5 scikit-learn pandas matplotlib seaborn
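
After installation, you can quickly verify that PyTorch was built with CUDA support and sees the GPU (a minimal check):

import torch

print(torch.__version__)          # expect 1.9.x
print(torch.cuda.is_available())  # True if the cu111 build found a usable GPU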

Model Training and Metric Visualization

We train with YOLOv5 and record metrics during training, such as the F1 curve, precision, recall, loss curves, and the confusion matrix.

Training script train_yolov5.py
import os
import shutil
from pathlib import Path

# Define paths (replace with your actual dataset location)
dataset_path = 'path/to/dataset'
weights_dest = 'weights/best.pt'  # where to copy the best weights after training

# Create dataset.yaml
yaml_content = f"""
train: {os.path.join(dataset_path, 'images/train')}
val: {os.path.join(dataset_path, 'images/val')}

nc: 10  # assuming 10 meter types; adjust to your dataset
names: ['meter1', 'meter2', 'meter3', 'meter4', 'meter5', 'meter6', 'meter7', 'meter8', 'meter9', 'meter10']  # adjust to your dataset
"""

with open(os.path.join(dataset_path, 'dataset.yaml'), 'w') as f:
    f.write(yaml_content)

# Clone the YOLOv5 repository (notebook-style shell commands; run in Jupyter)
!git clone https://github.com/ultralytics/yolov5
%cd yolov5
%pip install -r requirements.txt

# Train YOLOv5
!python train.py --img 640 --batch 16 --epochs 100 --data {os.path.join(dataset_path, 'dataset.yaml')} --cfg yolov5s.yaml --weights yolov5s.pt --cache

# Copy the best weights out of the run directory
os.makedirs(os.path.dirname(weights_dest), exist_ok=True)
shutil.copy(Path('runs/train/exp/weights/best.pt'), weights_dest)

Replace path/to/dataset with your actual dataset path, and adjust the nc and names fields to match your actual class count and class names.
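
Before training, it is also worth confirming that every image has a matching label file. A minimal sanity-check sketch (it assumes the directory layout shown above; dataset_path is the same placeholder):

import os

dataset_path = 'path/to/dataset'

for split in ('train', 'val', 'test'):
    img_dir = os.path.join(dataset_path, 'images', split)
    lbl_dir = os.path.join(dataset_path, 'labels', split)
    images = {os.path.splitext(f)[0] for f in os.listdir(img_dir)
              if f.lower().endswith(('.jpg', '.jpeg', '.png'))}
    labels = {os.path.splitext(f)[0] for f in os.listdir(lbl_dir) if f.endswith('.txt')}
    print(f'{split}: {len(images)} images, {len(labels)} labels, '
          f'{len(images - labels)} images without labels')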

Metric Visualization

The following script visualizes the metrics recorded during training, including the loss curve, precision, recall, mAP@0.5, and a confusion matrix.

Visualization script visualize_metrics.py
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

# YOLOv5 writes per-epoch metrics to results.csv in the run directory
results_dir = 'runs/train/exp'
results = pd.read_csv(os.path.join(results_dir, 'results.csv'))
results.columns = results.columns.str.strip()  # column names are padded with spaces

# Extract metrics
loss = results['train/box_loss']
precision = results['metrics/precision']
recall = results['metrics/recall']
mAP_05 = results['metrics/mAP_0.5']

# Plot loss curve
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
plt.plot(loss, label='Loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.title('Training Loss Curve')
plt.legend()

# Plot precision and recall curves
plt.subplot(1, 3, 2)
plt.plot(precision, label='Precision')
plt.plot(recall, label='Recall')
plt.xlabel('Epochs')
plt.ylabel('Score')
plt.title('Precision and Recall Curves')
plt.legend()

# Plot mAP@0.5 curve
plt.subplot(1, 3, 3)
plt.plot(mAP_05, label='mAP@0.5')
plt.xlabel('Epochs')
plt.ylabel('mAP@0.5')
plt.title('mAP@0.5 Curve')
plt.legend()

plt.tight_layout()
plt.show()

# Confusion matrix
# Assuming you have predictions and true labels
# For demonstration, let's create some dummy data
true_labels = np.random.randint(0, 11, size=100)  # 0 to 10 (background or one of the object types)
predictions = np.random.randint(0, 11, size=100)  # 0 to 10 (background or one of the object types)

cm = confusion_matrix(true_labels, predictions, labels=list(range(11)))
labels = ['Background', 'meter1', 'meter2', 'meter3', 'meter4', 'meter5', 'meter6', 'meter7', 'meter8', 'meter9', 'meter10']

disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
disp.plot(cmap=plt.cm.Blues)
plt.title('Confusion Matrix')
plt.show()
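
Note that YOLOv5's own validation script already computes a real confusion matrix (saved as confusion_matrix.png) along with PR and F1 curves, so for final reporting you can simply run it against the validation split instead of using dummy data:

# run from inside the yolov5/ directory; plots are written to runs/val/exp/
python val.py --weights runs/train/exp/weights/best.pt --data path/to/dataset/dataset.yaml --img 640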

PyQt5 GUI

We use PyQt5 to build a simple GUI for running model predictions.

GUI code gui_app.py
import os
import sys
import cv2
import numpy as np
import torch
from PyQt5.QtWidgets import (QApplication, QMainWindow, QLabel, QPushButton,
                             QVBoxLayout, QHBoxLayout, QWidget, QFileDialog)
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtCore import QTimer
from models.experimental import attempt_load
from utils.torch_utils import select_device
from utils.general import non_max_suppression, scale_coords
from utils.datasets import letterbox  # utils.dataloaders in newer YOLOv5 versions

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Substation Meter Detection")
        self.setGeometry(100, 100, 800, 600)

        self.central_widget = QWidget(self)
        self.setCentralWidget(self.central_widget)

        self.layout = QVBoxLayout(self.central_widget)

        self.label_display = QLabel(self)
        self.layout.addWidget(self.label_display)

        self.button_layout = QHBoxLayout()

        self.pushButton_image = QPushButton("Open Image", self)
        self.pushButton_image.clicked.connect(self.open_image)
        self.button_layout.addWidget(self.pushButton_image)

        self.pushButton_folder = QPushButton("Open Folder", self)
        self.pushButton_folder.clicked.connect(self.open_folder)
        self.button_layout.addWidget(self.pushButton_folder)

        self.pushButton_video = QPushButton("Open Video", self)
        self.pushButton_video.clicked.connect(self.open_video)
        self.button_layout.addWidget(self.pushButton_video)

        self.pushButton_camera = QPushButton("Start Camera", self)
        self.pushButton_camera.clicked.connect(self.start_camera)
        self.button_layout.addWidget(self.pushButton_camera)

        self.pushButton_stop = QPushButton("Stop Camera", self)
        self.pushButton_stop.clicked.connect(self.stop_camera)
        self.button_layout.addWidget(self.pushButton_stop)

        self.layout.addLayout(self.button_layout)

        self.device = select_device('')
        self.model = attempt_load('runs/train/exp/weights/best.pt', map_location=self.device)  # map_location= in older YOLOv5; newer versions use device=
        self.cap = None
        self.timer = QTimer()
        self.timer.timeout.connect(self.process_frame)

    def load_image(self, file_name):
        img = cv2.imread(file_name)  # BGR
        assert img is not None, f'Image Not Found {file_name}'
        return img

    def process_image(self, img):
        img0 = img.copy()
        img = letterbox(img, new_shape=640)[0]
        img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, HWC to CHW (3x640x640)
        img = np.ascontiguousarray(img)
        img = torch.from_numpy(img).to(self.device)
        img = img.float()  # uint8 to fp16/32
        img /= 255.0  # 0 - 255 to 0.0 - 1.0
        if img.ndimension() == 3:
            img = img.unsqueeze(0)

        pred = self.model(img, augment=False)[0]
        pred = non_max_suppression(pred, 0.25, 0.45, classes=None, agnostic=False)

        for i, det in enumerate(pred):  # detections per image
            if len(det):
                det[:, :4] = scale_coords(img.shape[2:], det[:, :4], img0.shape).round()

                for *xyxy, conf, cls in reversed(det):
                    label = f'{self.model.names[int(cls)]} {conf:.2f}'
                    color = (0, 255, 0)  # Green
                    cv2.rectangle(img0, (int(xyxy[0]), int(xyxy[1])), (int(xyxy[2]), int(xyxy[3])), color, 2)
                    cv2.putText(img0, label, (int(xyxy[0]), int(xyxy[1]) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, color, 2)

        rgb_image = cv2.cvtColor(img0, cv2.COLOR_BGR2RGB)
        h, w, ch = rgb_image.shape
        bytes_per_line = ch * w
        qt_image = QImage(rgb_image.data, w, h, bytes_per_line, QImage.Format_RGB888)
        pixmap = QPixmap.fromImage(qt_image)
        self.label_display.setPixmap(pixmap.scaled(800, 600))

    def open_image(self):
        options = QFileDialog.Options()
        file_name, _ = QFileDialog.getOpenFileName(self, "QFileDialog.getOpenFileName()", "", "Images (*.jpeg *.jpg);;All Files (*)", options=options)
        if file_name:
            img = self.load_image(file_name)
            self.process_image(img)

    def open_folder(self):
        folder_name = QFileDialog.getExistingDirectory(self, "Select Folder")
        if folder_name:
            for filename in sorted(os.listdir(folder_name)):
                if filename.lower().endswith(('.png', '.jpg', '.jpeg')):
                    file_path = os.path.join(folder_name, filename)
                    img = self.load_image(file_path)
                    self.process_image(img)
                    QApplication.processEvents()  # let the GUI refresh between images

    def open_video(self):
        options = QFileDialog.Options()
        file_name, _ = QFileDialog.getOpenFileName(self, "QFileDialog.getOpenFileName()", "", "Videos (*.mp4 *.avi);;All Files (*)", options=options)
        if file_name:
            self.cap = cv2.VideoCapture(file_name)
            self.timer.start(30)  # Process frame every 30 ms

    def start_camera(self):
        self.cap = cv2.VideoCapture(0)
        self.timer.start(30)  # Process frame every 30 ms

    def stop_camera(self):
        if self.cap is not None:
            self.cap.release()
            self.cap = None
            self.timer.stop()

    def process_frame(self):
        if self.cap is not None:
            ret, frame = self.cap.read()
            if ret:
                self.process_image(frame)
            else:
                self.cap.release()
                self.cap = None
                self.timer.stop()

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    sys.exit(app.exec_())
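
Because gui_app.py imports models.* and utils.* from the YOLOv5 code base, the script must be able to find that code. A simple option (an assumption about your layout, not the only one) is to place gui_app.py inside the cloned yolov5/ directory and run it from there:

cd yolov5
python gui_app.py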

Utility file helpers.py

This file holds helper functions, such as saving annotated detection results. (It is named helpers.py rather than utils.py so that it does not shadow YOLOv5's own utils package when run from the repository directory.)

import cv2
import os

def save_results(image, detections, output_dir, filename):
    """Draw detections on the image and write it to output_dir/filename."""
    for det in detections:
        r = det['bbox']          # (x1, y1, x2, y2) in pixels
        cls = det['class']
        conf = det['confidence']
        label = f'{cls} {conf:.2f}'
        color = (0, 255, 0)  # green
        cv2.rectangle(image, (int(r[0]), int(r[1])), (int(r[2]), int(r[3])), color, 2)
        cv2.putText(image, label, (int(r[0]), int(r[1]) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, color, 2)

    os.makedirs(output_dir, exist_ok=True)
    output_path = os.path.join(output_dir, filename)
    cv2.imwrite(output_path, image)
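
A hypothetical usage example for save_results (the detection dicts and file names here are illustrative; in the GUI you would build them from the NMS output):

import cv2
from helpers import save_results

image = cv2.imread('sample.jpg')
detections = [
    {'bbox': (100, 150, 300, 400), 'class': 'meter1', 'confidence': 0.91},
    {'bbox': (420, 120, 610, 380), 'class': 'meter3', 'confidence': 0.78},
]
save_results(image, detections, output_dir='results', filename='sample_out.jpg')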

Demo

Once you have screenshots of the running system, you can add them to README.md for reference.

# Substation Meter Detection System

## Overview
This project provides a deep learning-based system for detecting various types of meters in substation images. The system can identify different types of meters in images, folders, videos, and live camera feeds.

## Environment Setup
- Software: PyCharm + Anaconda
- Environment: Python=3.9, OpenCV-Python, PyQt5, Torch=1.9

## Features
- Detects different types of meters.
- Supports detection on images, folders, videos, and live camera feed.
- Batch processing of images.
- Real-time display of detected meters with confidence scores and bounding boxes.
- Saving detection results.

## Usage
1. Run the program.
2. Choose an option to detect meters in images, folders, videos, or via the camera.

## Screenshots
![Example Screenshot](data/screenshots/example_screenshot.png)

Summary

This article builds a complete YOLOv5-based substation meter detection system, covering dataset preparation, environment setup, model training, metric visualization, and PyQt5 GUI design. The related code files are:

  1. Training script (train_yolov5.py)
  2. Metric visualization script (visualize_metrics.py)
  3. GUI application (gui_app.py)
  4. Utility file (helpers.py)
  5. Documentation (README.md)

    “FinDKG:DynamicKnowledgeGraphswithLargeLanguageModelsforDetectingGlobalTrendsinFinancialMarkets”论文地址:https://arxiv.org/pdf/2407.10909摘要动态知识图(DKG)能够表示对象间随时间变化的关系,适用于从复杂且非结构化的数据中抽取信息。在金融领......