
OpenMV Line Following & Offline Threshold Tuning: Code and Implementation

Posted: 2024-07-24 10:28:30
Tags: roi, blobs, Pin, offline tuning, blob, THRESHOLD, openmv, sensor, line following

Hardware used:

    OpenMV4 H7 R2
    OpenMV LCD expansion board (self-designed, fabricated at JLC/立创)
    1.8-inch TFT SPI display (ST7735S driver)
    DX-BT24 Bluetooth module (大夏龙雀)

Wiring: P5 connects to the BT24's TX pin and is used to receive the data sent over Bluetooth.
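
Before running the full script it can help to verify the Bluetooth link on its own. The sketch below is a minimal receive-only test, assuming the same UART 3 at 9600 baud used in the code later in this post (on the OpenMV H7, UART 3's RX is on pin P5, which is why the BT24's TX goes there); it just prints whatever bytes arrive from the module.

# Minimal BT24 wiring test (assumes UART 3 @ 9600 with RX on P5, as wired above).
import time
from machine import UART

uart = UART(3, 9600, timeout_char=1000)  # same UART and baud rate as the main script

while True:
    data = uart.read(1)   # read one byte at a time, like the main loop does
    if data:
        print(data)       # e.g. b'a' when you send 'a' from a phone serial app
    time.sleep_ms(10)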

Demo:

Demo video: https://live.csdn.net/v/embed/410763

OpenMV offline threshold tuning

Code:

How it works: over Bluetooth, sending 'a' or 'b' increments the threshold NUMBER1 or NUMBER2 respectively (by 5), and sending 'c' or 'd' decrements them.

import sensor, image, time, math, pyb
from machine import UART
import display




# Values used to tune the binary() threshold.
NUMBER1=51
NUMBER2=255
# These two variables set the grayscale threshold. The goal is to cleanly separate
# black from white under whatever lighting the venue has; adjusting them on site
# (here over Bluetooth) lets you finish tuning without a PC.
turnGRAYSCALE_THRESHOLD = [(NUMBER1, NUMBER2)]
GRAYSCALE_THRESHOLD =[(0,0)]  # after binary(), the line appears as pure black (0)
# Each roi is (x, y, w, h). The line-detection step finds the centroid of the
# largest blob inside each roi. In the standard OpenMV line-following example the
# centroid x positions are then averaged with different weights, the largest
# weight going to the roi nearest the bottom of the image, a smaller weight to
# the next roi, and so on.

rois = [(0, 100, 160, 20), (0, 50, 160, 20), (0, 0, 160, 20)]
# The rois are three sampling regions: each (x, y, w, h) is a rectangle with its
# top-left corner at (x, y) and width/height w, h. Note this example uses QQVGA
# (160x120), so the rois split the image into three horizontal strips. Adjust
# them to your own track; the strip closest to the robot (the bottom one at
# y=100) matters most. Only the middle strip, rois[1], is actually used below.

# Initialize the sensor.

sensor.reset()
# Set the pixel format: RGB565 (color) or GRAYSCALE are available.
sensor.set_pixformat(sensor.GRAYSCALE)  # use grayscale.
# Set the frame size.
sensor.set_framesize(sensor.QQVGA)  # use QQVGA for speed.
sensor.skip_frames(time=2000)  # Let new settings take effect.
sensor.set_auto_gain(False)  # must be turned off for color tracking
sensor.set_auto_whitebal(False)  # must be turned off for color tracking
lcd = display.SPIDisplay()  # Initialize the lcd screen.
kk='y'
sensor.set_vflip(1)
sensor.set_hmirror(1)
clock = time.clock()  # Tracks FPS.
largest2_blob=0
deflection_angle = 0  # Initialize deflection_angle outside of conditional blocks
uart = UART(3,9600,timeout_char=1000)
while True:
    turnGRAYSCALE_THRESHOLD = [(NUMBER1, NUMBER2)]
    clock.tick()  # Track elapsed milliseconds between snapshots.
    img = sensor.snapshot()  # Capture an image.
   
    img.binary(turnGRAYSCALE_THRESHOLD)

    largest_blob = None
    #largest2_blob = None
    #largest3_blob = None

    # Track lines in each defined ROI.
    #blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=rois[0], merge=True)
    #if blobs:
    #    largest_blob = max(blobs, key=lambda b: b.pixels())

    blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=rois[1], merge=True)
    if blobs:
        largest2_blob = max(blobs, key=lambda b: b.pixels())
    #
    #blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=rois[2], merge=True)
    #if blobs:
    #    largest3_blob = max(blobs, key=lambda b: b.pixels())

    # Calculate deflection_angle from largest2_blob.cx() - 79: the difference
    # between the image centre x (~79 for a 160-pixel-wide frame) and the blob
    # centroid x tells how far the line has drifted from centre.
    pianyi = 0
    if largest2_blob:
        pianyi = largest2_blob.cx() - 79
        img.draw_rectangle(largest2_blob.rect(), color=0)
        if -5 <= pianyi < 5:
            deflection_angle = 0
        elif -15 <= pianyi < -5:
            deflection_angle = -2
        elif 5 <= pianyi < 15:
            deflection_angle = 2
        elif -30 <= pianyi < -15:
            deflection_angle = -3
        elif 15 <= pianyi < 30:
            deflection_angle = 3
        elif -50 <= pianyi < -30:
            deflection_angle = -5
        elif 30 <= pianyi < 50:
            deflection_angle = 5

   
    if NUMBER1>255:
        NUMBER1=255
    if NUMBER2>255:
        NUMBER2=255
    if NUMBER1<0:
        NUMBER1=0
    if NUMBER2<0:
        NUMBER2=0
    lcd.write(img)  # Push the binarized image to the LCD.
    # Handle single-byte Bluetooth commands: 'a'/'b' raise NUMBER1/NUMBER2 by 5,
    # 'c'/'d' lower them by 5.
    byte=uart.read(1)
    if byte :
        print(byte)
        if byte ==b'a':
            NUMBER1=NUMBER1+5
        if byte ==b'b':
            NUMBER2=NUMBER2+5
        if byte ==b'c':
            NUMBER1=NUMBER1-5
        if byte ==b'd':
            NUMBER2=NUMBER2-5
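
To tune on site, pair a phone running a Bluetooth serial app with the BT24 and send single characters while watching the binarized image on the LCD: 'a'/'c' step NUMBER1 up/down by 5 and 'b'/'d' step NUMBER2 up/down by 5, until the track is cleanly separated from the background. If you prefer, the same command protocol can be wrapped in a small helper instead of the if-chain above; the sketch below is only an illustrative rewrite (the name apply_command is my own, not from the original code) and also clamps the values to the 0-255 grayscale range.

# Illustrative alternative to the if-chain above: same 'a'/'b'/'c'/'d' protocol,
# same +/-5 steps, with clamping to the valid 0-255 grayscale range built in.
def apply_command(byte, number1, number2, step=5):
    if byte == b'a':
        number1 += step
    elif byte == b'b':
        number2 += step
    elif byte == b'c':
        number1 -= step
    elif byte == b'd':
        number2 -= step
    # keep both threshold bounds inside the 8-bit grayscale range
    number1 = min(max(number1, 0), 255)
    number2 = min(max(number2, 0), 255)
    return number1, number2

# Inside the main loop it would be used as:
#     NUMBER1, NUMBER2 = apply_command(uart.read(1), NUMBER1, NUMBER2)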
    

How it works: using push buttons, pressing KEY1 or KEY2 increments NUMBER1 or NUMBER2 and pressing KEY3 or KEY4 decrements them; because the pins are polled every frame, holding a key down keeps stepping the value.

P3, P4, P5 and P6 connect to KEY1, KEY2, KEY3 and KEY4 on the key module; a simple if statement checks whether each key is pressed (a minimal stand-alone key-reading sketch follows below).
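
As a quick check of the key wiring before running the full script, here is a minimal stand-alone sketch. It assumes the same active-low keys with internal pull-ups used in the listing below (value() reads 0 while a key is held) and simply prints which key is down; because the pins are only polled, holding a key reports repeatedly, which is also why the full script keeps stepping the threshold while a key is held.

# Minimal key-wiring test (assumes active-low keys on P3/P4 as described above).
import time
from pyb import Pin

key1 = Pin('P3', Pin.IN, Pin.PULL_UP)  # reads 0 while KEY1 is held down
key2 = Pin('P4', Pin.IN, Pin.PULL_UP)  # reads 0 while KEY2 is held down

while True:
    if key1.value() == 0:
        print("KEY1 pressed")
    if key2.value() == 0:
        print("KEY2 pressed")
    time.sleep_ms(100)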

Demo video: https://live.csdn.net/v/embed/411091

OpenMV offline threshold tuning

import sensor, image, time, math, pyb
from machine import UART
import display
from pyb import Pin



# Values used to tune the binary() threshold.
NUMBER1=51
NUMBER2=255
# These two variables set the grayscale threshold. The goal is to cleanly separate
# black from white under whatever lighting the venue has; adjusting them with the
# keys lets you finish tuning on site without a PC.
turnGRAYSCALE_THRESHOLD = [(NUMBER1, NUMBER2)]
GRAYSCALE_THRESHOLD =[(0,0)]  # after binary(), the line appears as pure black (0)
# Each roi is (x, y, w, h). The line-detection step finds the centroid of the
# largest blob inside each roi. In the standard OpenMV line-following example the
# centroid x positions are then averaged with different weights, the largest
# weight going to the roi nearest the bottom of the image, a smaller weight to
# the next roi, and so on.

rois = [(0, 100, 160, 20), (0, 50, 160, 20), (0, 0, 160, 20)]
# The rois are three sampling regions: each (x, y, w, h) is a rectangle with its
# top-left corner at (x, y) and width/height w, h. Note this example uses QQVGA
# (160x120), so the rois split the image into three horizontal strips. Adjust
# them to your own track; the strip closest to the robot (the bottom one at
# y=100) matters most. Only the middle strip, rois[1], is actually used below.

# Initialize the sensor.

sensor.reset()
# Set the pixel format: RGB565 (color) or GRAYSCALE are available.
sensor.set_pixformat(sensor.GRAYSCALE)  # use grayscale.
# Set the frame size.
sensor.set_framesize(sensor.QQVGA)  # use QQVGA for speed.
sensor.skip_frames(time=2000)  # Let new settings take effect.
sensor.set_auto_gain(False)  # must be turned off for color tracking
sensor.set_auto_whitebal(False)  # must be turned off for color tracking
lcd = display.SPIDisplay()  # Initialize the lcd screen.
kk='y'
#p_in7 = Pin('P7', Pin.IN, Pin.PULL_UP)  # spare key input (unused)
p_in4 = Pin('P4', Pin.IN, Pin.PULL_UP)  # KEY2: input with internal pull-up enabled
p_in6 = Pin('P6', Pin.IN, Pin.PULL_UP)  # KEY4: input with internal pull-up enabled
p_in5 = Pin('P5', Pin.IN, Pin.PULL_UP)  # KEY3: input with internal pull-up enabled
p_in3 = Pin('P3', Pin.IN, Pin.PULL_UP)  # KEY1: input with internal pull-up enabled
sensor.set_vflip(1)
sensor.set_hmirror(1)
clock = time.clock()  # Tracks FPS.
largest2_blob=0
deflection_angle = 0  # Initialize deflection_angle outside of conditional blocks
uart = UART(3,9600,timeout_char=1000)
while True:
    turnGRAYSCALE_THRESHOLD = [(NUMBER1, NUMBER2)]
    clock.tick()  # Track elapsed milliseconds between snapshots.
    img = sensor.snapshot()  # Capture an image.
   
    img.binary(turnGRAYSCALE_THRESHOLD)

    largest_blob = None
    #largest2_blob = None
    #largest3_blob = None

    # Track lines in each defined ROI.
    #blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=rois[0], merge=True)
    #if blobs:
    #    largest_blob = max(blobs, key=lambda b: b.pixels())

    blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=rois[1], merge=True)
    if blobs:
        largest2_blob = max(blobs, key=lambda b: b.pixels())
    #
    #blobs = img.find_blobs(GRAYSCALE_THRESHOLD, roi=rois[2], merge=True)
    #if blobs:
    #    largest3_blob = max(blobs, key=lambda b: b.pixels())

    # Calculate deflection_angle from largest2_blob.cx() - 79: the difference
    # between the image centre x (~79 for a 160-pixel-wide frame) and the blob
    # centroid x tells how far the line has drifted from centre.
    pianyi = 0
    if largest2_blob:
        pianyi = largest2_blob.cx() - 79
        img.draw_rectangle(largest2_blob.rect(), color=0)
        if -5 <= pianyi < 5:
            deflection_angle = 0
        elif -15 <= pianyi < -5:
            deflection_angle = -2
        elif 5 <= pianyi < 15:
            deflection_angle = 2
        elif -30 <= pianyi < -15:
            deflection_angle = -3
        elif 15 <= pianyi < 30:
            deflection_angle = 3
        elif -50 <= pianyi < -30:
            deflection_angle = -5
        elif 30 <= pianyi < 50:
            deflection_angle = 5

    # Poll the keys: with the pull-ups enabled, value() reads 0 while a key is
    # held down, so holding a key keeps stepping the threshold by 5 every frame.
    if p_in3.value()==0:
        NUMBER1=NUMBER1+5
    if p_in4.value()==0:  
        NUMBER2=NUMBER2+5
    if p_in5.value()==0:  
        NUMBER1=NUMBER1-5
    if p_in6.value()==0:  
        NUMBER2=NUMBER2-5
        
    if NUMBER1>255:
        NUMBER1=255
    if NUMBER2>255:
        NUMBER2=255
    if NUMBER1<0:
        NUMBER1=0
    if NUMBER2<0:
        NUMBER2=0
    lcd.write(img)  # Push the binarized image to the LCD.
    # The Bluetooth 'a'/'b'/'c'/'d' commands from the previous version still work here.
    byte=uart.read(1)
    if byte :
        print(byte)
        if byte ==b'a':
            NUMBER1=NUMBER1+5
        if byte ==b'b':
            NUMBER2=NUMBER2+5
        if byte ==b'c':
            NUMBER1=NUMBER1-5
        if byte ==b'd':
            NUMBER2=NUMBER2-5
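
When tuning away from the IDE it is easy to lose track of what NUMBER1 and NUMBER2 currently are. One possible addition (not in the original post) is to overlay the current values on the frame just before it is pushed to the LCD, using OpenMV's img.draw_string(); the two lines below assume the img, lcd, NUMBER1 and NUMBER2 names from the listings above.

    # Hypothetical addition: show the current threshold on the LCD while tuning.
    img.draw_string(2, 2, "N1=%d N2=%d" % (NUMBER1, NUMBER2), color=127)
    lcd.write(img)  # replaces the plain lcd.write(img) call in the loop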
    

From: https://blog.csdn.net/2301_80317247/article/details/140547101

    1、旭日x3派(烧录好系统镜像)2、USB摄像头3、TB66124、小车底盘(直流电机或直流减速电机) 视觉循迹原理x3派读取摄像头图像,转换成灰度图像,从灰度图像中选择第 120 行(图像的一个水平线),遍历第120行的全部320列,根据像素值小于或大于阈值,将相应的值(0 或 1)添加到 date 列表......