
Tinkering Notes [10] - ORB Corner Detection in Rust

Date: 2025-01-22 16:43:24

Abstract

Package the ORB algorithm into the bye_orb_rs library and use Rust for ORB corner detection.

Keywords

Rust; ORB; FAST; SLAM

Key Information

Project repository: [https://github.com/ByeIO/slambook2.rs]

[package]
name = "exp65-rust-ziglang-slambook2"
version = "0.1.0"
edition = "2021"

[dependencies]
env_logger = { version = "0.11.6", default-features = false, features = [
    "auto-color",
    "humantime",
] }

# Random numbers
rand = "0.8.5"
rand_distr = "0.4.3"
fastrand = "2.3.0"

# Linear algebra
nalgebra = { version = "0.33.2", features = ["rand"] }
ndarray = "0.16.1"

# winit
wgpu = "23.0.1"
winit = "0.30.8"

# egui
eframe = "0.30.0"
egui = { version = "0.30.0", features = [
    "default"
]}
egui_extras = { version = "0.30.0", features = ["default", "image"] }

# three_d
three-d = { path = "./static/three-d", features = ["egui-gui"] }
three-d-asset = { version = "0.9", features = ["hdr", "http"] }

# sophus
sophus = { version = "0.11.0" }
sophus_autodiff = { version = "0.11.0" }
sophus_geo = "0.11.0"
sophus_image = "0.11.0"
sophus_lie = "0.11.0"
sophus_opt = "0.11.0"
sophus_renderer = "0.11.0"
sophus_sensor = "0.11.0"
sophus_sim = "0.11.0"
sophus_spline = "0.11.0"
sophus_tensor = "0.11.0"
sophus_timeseries = "0.11.0"
sophus_viewer = "0.11.0"
tokio = "1.43.0"
approx = "0.5.1"
bytemuck = "1.21.0"
thingbuf = "0.1.6"

# rust-cv computer vision
cv = { version = "0.6.0", features = ["default"] }
cv-core = "0.15.0"
cv-geom = "0.7.0"
cv-pinhole = "0.6.0"
akaze = "0.7.0"
eight-point = "0.8.0"
lambda-twist = "0.7.0"
image = "0.25.5"
imageproc = "0.25.0"

# Least-squares optimization
gomez = "0.5.0"

# Graph optimization
factrs = "0.2.0"

# ORB corner detection
bye_orb_rs = { path = "./static/bye_orb_rs" }

# Dependency overrides
[patch.crates-io]
pulp = { path = "./static/pulp" }

Principles

Overview of ORB Corner Detection

  • Rust corner detection library: [https://github.com/ByeIO/bye.orb.rs]
  • Rublee E, Rabaud V, Konolige K, et al. ORB: An efficient alternative to SIFT or SURF[C] // 2011 International Conference on Computer Vision. IEEE, 2011: 2564-2571.
    Keywords: BRIEF, FAST

ORB (Oriented FAST and Rotated BRIEF) is an efficient corner detection and feature description algorithm proposed by Ethan Rublee et al. in 2011. It combines the FAST corner detector with the BRIEF descriptor and improves on both, gaining rotation invariance and computational efficiency while maintaining strong matching performance.

1. FAST Corner Detection

FAST (Features from Accelerated Segment Test) is an efficient corner detection algorithm. Its core idea is to decide whether a pixel is a corner by comparing its grayscale value with those of the pixels in its surrounding neighborhood. The algorithm proceeds as follows:

  • Select a pixel p with grayscale value I_p.
  • Choose a threshold T.
  • Examine the pixels on a circle centered at p (usually 16 pixels).
  • If N contiguous pixels on the circle all have grayscale values greater than I_p + T, or all less than I_p - T, then p is classified as a corner.

FAST's main advantage is its speed, which makes it well suited to real-time applications. A minimal sketch of the segment test is shown below.
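
The sketch below implements the segment test just described on a grayscale image stored as a flat Vec<u8>. The 16 offsets are the standard radius-3 Bresenham circle, and the demo uses N = 9 (the common FAST-9 variant); this illustrates the idea only and is not the implementation inside bye_orb_rs.

/// Offsets of the 16 pixels on a Bresenham circle of radius 3.
const CIRCLE: [(i32, i32); 16] = [
    (0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
    (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3),
];

/// True if pixel (x, y) passes the FAST test: at least `n` contiguous circle
/// pixels are all brighter than I_p + t or all darker than I_p - t.
fn is_fast_corner(img: &[u8], width: usize, x: i32, y: i32, t: i16, n: usize) -> bool {
    let ip = img[y as usize * width + x as usize] as i16;
    // Label each circle pixel: +1 brighter, -1 darker, 0 similar.
    let labels: Vec<i8> = CIRCLE
        .iter()
        .map(|&(dx, dy)| {
            let v = img[(y + dy) as usize * width + (x + dx) as usize] as i16;
            if v > ip + t { 1 } else if v < ip - t { -1 } else { 0 }
        })
        .collect();
    // The circle wraps around, so scan the doubled sequence for a run of n.
    for sign in [1i8, -1i8] {
        let mut run = 0;
        for i in 0..2 * labels.len() {
            if labels[i % labels.len()] == sign {
                run += 1;
                if run >= n {
                    return true;
                }
            } else {
                run = 0;
            }
        }
    }
    false
}

fn main() {
    // Synthetic 9x9 image: dark background, bright square with its corner at
    // (4, 4); the FAST-9 test should fire there.
    let w = 9;
    let mut img = vec![10u8; w * w];
    for y in 4..9 {
        for x in 4..9 {
            img[y * w + x] = 200;
        }
    }
    println!("corner at (4, 4): {}", is_fast_corner(&img, w, 4, 4, 20, 9));
}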

2. The BRIEF Descriptor

BRIEF (Binary Robust Independent Elementary Features) is a binary feature descriptor used to describe keypoints in an image. BRIEF compares the grayscale values of pixel pairs around a keypoint and packs the comparison results into a binary string, which serves as the keypoint's descriptor. BRIEF is cheap to compute and compact to store, making it well suited to large-scale image matching; a sketch of the pair test, and of the Hamming distance used to compare such descriptors, follows.
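
To make the descriptor concrete, here is a minimal BRIEF-style sketch: 256 pixel-pair comparisons around a keypoint are packed into four u64 words, and two descriptors are compared with the Hamming distance (XOR plus popcount). The deterministic test pattern is an assumption made for this example; canonical BRIEF samples its pairs from a Gaussian distribution, and the pattern used by bye_orb_rs will differ.

type Pair = ((i32, i32), (i32, i32));

/// Build a binary descriptor at (x, y): bit i is 1 if the first sample of
/// pair i is brighter than the second.
fn brief_descriptor(img: &[u8], width: usize, x: i32, y: i32, pairs: &[Pair]) -> Vec<u64> {
    let px = |dx: i32, dy: i32| img[(y + dy) as usize * width + (x + dx) as usize];
    let mut desc = vec![0u64; (pairs.len() + 63) / 64];
    for (i, &((ax, ay), (bx, by))) in pairs.iter().enumerate() {
        if px(ax, ay) > px(bx, by) {
            desc[i / 64] |= 1 << (i % 64);
        }
    }
    desc
}

/// Hamming distance between two binary descriptors: XOR, then count set bits.
fn hamming(a: &[u64], b: &[u64]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

fn main() {
    // Deterministic pseudo-random pairs with offsets in [-7, 7], standing in
    // for a Gaussian-sampled pattern.
    let pairs: Vec<Pair> = (0..256u32)
        .map(|i| {
            let h = |k: u32| (i.wrapping_mul(2654435761).rotate_left(k) % 15) as i32 - 7;
            ((h(3), h(7)), (h(11), h(19)))
        })
        .collect();
    // Synthetic textured image; compare the patch at (15, 15) with the patch
    // one pixel to its right.
    let w = 32usize;
    let img: Vec<u8> = (0..w * w).map(|i| (i * 37 % 251) as u8).collect();
    let d1 = brief_descriptor(&img, w, 15, 15, &pairs);
    let d2 = brief_descriptor(&img, w, 16, 15, &pairs);
    println!("Hamming distance of neighbouring patches: {}", hamming(&d1, &d2));
}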

3. ORB's Improvements

ORB builds on FAST and BRIEF with the following improvements:

  • Orientation (Oriented): ORB computes an orientation for each keypoint (usually with the intensity centroid method), which makes the BRIEF descriptor rotation-invariant; see the sketch after this list.
  • Scale invariance: ORB builds an image pyramid to achieve scale invariance.
  • Efficiency: ORB is faster than SIFT and SURF while maintaining comparable performance, which makes it suitable for real-time applications.
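
The sketch below shows the intensity centroid method mentioned in the first bullet: the patch moments m10 and m01 locate the intensity centroid, and the angle of the vector from the keypoint to that centroid is taken as the keypoint orientation, by which ORB then rotates the BRIEF sampling pattern ("steered BRIEF"). The patch radius here is an arbitrary choice for illustration.

/// Orientation of the keypoint at (x, y) from the intensity centroid of a
/// circular patch: theta = atan2(m01, m10), where m10 and m01 are the
/// x- and y-weighted intensity sums.
fn orientation(img: &[u8], width: usize, x: i32, y: i32, radius: i32) -> f32 {
    let (mut m10, mut m01) = (0.0f32, 0.0f32);
    for dy in -radius..=radius {
        for dx in -radius..=radius {
            // Restrict to a circular patch so the measure rotates cleanly.
            if dx * dx + dy * dy > radius * radius {
                continue;
            }
            let v = img[(y + dy) as usize * width + (x + dx) as usize] as f32;
            m10 += dx as f32 * v;
            m01 += dy as f32 * v;
        }
    }
    m01.atan2(m10)
}

fn main() {
    // Synthetic patch: a horizontal brightness ramp, so the centroid lies to
    // the right of the keypoint and the orientation should be close to 0 rad.
    let w = 16usize;
    let img: Vec<u8> = (0..w * w).map(|i| ((i % w) * 16) as u8).collect();
    println!("orientation: {:.3} rad", orientation(&img, w, 8, 8, 5));
}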

Setting a Chinese Font for matplotlib in Python

Find system fonts:

brew install font-noto-sans-cjk
fc-list :lang=zh | grep "得意黑"

Configuration:

from matplotlib import rcParams

# Set a Chinese font
rcParams['font.sans-serif'] = ['Smiley Sans']  # default to Smiley Sans (得意黑)
# rcParams['font.sans-serif'] = ['Noto Sans CJK SC']  # or default to Noto Sans CJK SC
rcParams['axes.unicode_minus'] = False  # keep the minus sign '-' from rendering as a box in saved figures

Implementation

Original C++ code: [https://github.com/gaoxiang12/slambook2/blob/master/ch7/orb_cv.cpp]

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <chrono>

using namespace std;
using namespace cv;

int main(int argc, char **argv) {
  if (argc != 3) {
    cout << "usage: feature_extraction img1 img2" << endl;
    return 1;
  }
  //-- Read the images
  Mat img_1 = imread(argv[1], CV_LOAD_IMAGE_COLOR);
  Mat img_2 = imread(argv[2], CV_LOAD_IMAGE_COLOR);
  assert(img_1.data != nullptr && img_2.data != nullptr);

  //-- Initialization
  std::vector<KeyPoint> keypoints_1, keypoints_2;
  Mat descriptors_1, descriptors_2;
  Ptr<FeatureDetector> detector = ORB::create();
  Ptr<DescriptorExtractor> descriptor = ORB::create();
  Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("BruteForce-Hamming");

  //-- Step 1: detect Oriented FAST corner locations
  chrono::steady_clock::time_point t1 = chrono::steady_clock::now();
  detector->detect(img_1, keypoints_1);
  detector->detect(img_2, keypoints_2);

  //-- Step 2: compute BRIEF descriptors from the corner locations
  descriptor->compute(img_1, keypoints_1, descriptors_1);
  descriptor->compute(img_2, keypoints_2, descriptors_2);
  chrono::steady_clock::time_point t2 = chrono::steady_clock::now();
  chrono::duration<double> time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "extract ORB cost = " << time_used.count() << " seconds. " << endl;

  Mat outimg1;
  drawKeypoints(img_1, keypoints_1, outimg1, Scalar::all(-1), DrawMatchesFlags::DEFAULT);
  imshow("ORB features", outimg1);

  //-- Step 3: match the BRIEF descriptors of the two images using the Hamming distance
  vector<DMatch> matches;
  t1 = chrono::steady_clock::now();
  matcher->match(descriptors_1, descriptors_2, matches);
  t2 = chrono::steady_clock::now();
  time_used = chrono::duration_cast<chrono::duration<double>>(t2 - t1);
  cout << "match ORB cost = " << time_used.count() << " seconds. " << endl;

  //-- Step 4: filter the matched pairs
  // Compute the minimum and maximum distances
  auto min_max = minmax_element(matches.begin(), matches.end(),
                                [](const DMatch &m1, const DMatch &m2) { return m1.distance < m2.distance; });
  double min_dist = min_max.first->distance;
  double max_dist = min_max.second->distance;

  printf("-- Max dist : %f \n", max_dist);
  printf("-- Min dist : %f \n", min_dist);

  // A match is considered wrong when the descriptor distance exceeds twice the
  // minimum distance; since the minimum can be very small, an empirical floor of 30 is used.
  std::vector<DMatch> good_matches;
  for (int i = 0; i < descriptors_1.rows; i++) {
    if (matches[i].distance <= max(2 * min_dist, 30.0)) {
      good_matches.push_back(matches[i]);
    }
  }

  //-- Step 5: draw the match results
  Mat img_match;
  Mat img_goodmatch;
  drawMatches(img_1, keypoints_1, img_2, keypoints_2, matches, img_match);
  drawMatches(img_1, keypoints_1, img_2, keypoints_2, good_matches, img_goodmatch);
  imshow("all matches", img_match);
  imshow("good matches", img_goodmatch);
  waitKey(0);

  return 0;
}

The refactored Rust code:

#![allow(dead_code)]
#![allow(unused_variables)]
#![allow(unused_imports)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![cfg_attr(not(debug_assertions), windows_subsystem = "windows")]
#![allow(rustdoc::missing_crate_level_docs)]
#![allow(unsafe_code)]
#![allow(clippy::undocumented_unsafe_blocks)]
#![allow(unused_must_use)]
#![allow(non_snake_case)]

use image::{
    open, ImageBuffer, Rgb, DynamicImage
};
use imageproc::{
    drawing::draw_cross_mut, drawing::draw_line_segment_mut
};

use bye_orb_rs::{
    orb, fast, common::Matchable
};

use std::time::Instant;

fn main() {
    let img1_path = "./assets/ch7-1.png";
    let img2_path = "./assets/ch7-2.png";

    main_orb(img1_path, img2_path);
}

fn main_orb(img1_path: &str, img2_path: &str) {
    // Read the images
    let mut img1 = open(img1_path).unwrap();
    let mut img2 = open(img2_path).unwrap();

    // Set the number of keypoints
    let n_keypoints = 500;

    // Step 1: detect Oriented FAST corner locations and compute BRIEF descriptors
    let start_time = Instant::now();
    let img1_keypoints = orb::orb(&mut img1, n_keypoints).unwrap();
    let img2_keypoints = orb::orb(&mut img2, n_keypoints).unwrap();
    let end_time = Instant::now();
    println!("提取ORB特征点耗时: {:?} 秒", end_time - start_time);

    // Step 2: match using the Hamming distance
    let start_time = Instant::now();
    let pair_indices = orb::match_brief(&img1_keypoints, &img2_keypoints);
    let end_time = Instant::now();
    println!("匹配ORB特征点耗时: {:?} 秒", end_time - start_time);

    // Step 3: filter the matched pairs
    let mut matches: Vec<(usize, usize, f32)> = pair_indices.iter()
        .map(|&(i, j)| (i, j, img1_keypoints[i].distance(&img2_keypoints[j]) as f32))
        .collect();

    matches.sort_by(|a, b| a.2.partial_cmp(&b.2).unwrap());

    let min_dist = matches[0].2;
    let max_dist = matches[matches.len() - 1].2;

    println!("-- 最大距离: {}", max_dist);
    println!("-- 最小距离: {}", min_dist);

    // A match is considered wrong when the descriptor distance exceeds twice the
    // minimum distance; since the minimum can be very small, an empirical floor of 30 is used.
    let good_matches: Vec<(usize, usize, f32)> = matches.iter()
        .filter(|&&(_, _, dist)| dist <= (2.0 * min_dist).max(30.0))
        .cloned()
        .collect();

    // Step 4: draw the match results
    let mut img1_rgb = img1.to_rgb8();
    let mut img2_rgb = img2.to_rgb8();

    // Draw all matched keypoints
    for &(i, j, _) in &matches {
        let kp1 = (img1_keypoints[i].x, img1_keypoints[i].y);
        let kp2 = (img2_keypoints[j].x, img2_keypoints[j].y);
        let color = Rgb([0, 255, 0]);
        draw_cross_mut(&mut img1_rgb, color, kp1.0 as i32, kp1.1 as i32);
        draw_cross_mut(&mut img2_rgb, color, kp2.0 as i32, kp2.1 as i32);
    }

    // Save the drawings
    img1_rgb.save("all_matches.png").unwrap();
    img2_rgb.save("all_matches2.png").unwrap();

    // Draw the filtered matches
    let mut img1_rgb_good = img1.to_rgb8();
    let mut img2_rgb_good = img2.to_rgb8();

    for &(i, j, _) in &good_matches {
        let kp1 = (img1_keypoints[i].x, img1_keypoints[i].y);
        let kp2 = (img2_keypoints[j].x, img2_keypoints[j].y);
        let color = Rgb([0, 255, 0]);
        draw_cross_mut(&mut img1_rgb_good, color, kp1.0 as i32, kp1.1 as i32);
        draw_cross_mut(&mut img2_rgb_good, color, kp2.0 as i32, kp2.1 as i32);
    }

    // Save the drawings
    img1_rgb_good.save("good_matches.png").unwrap();
    img2_rgb_good.save("good_matches2.png").unwrap();

    // Step 5: create a side-by-side composite image
    let (width1, height1) = img1_rgb.dimensions();
    let (width2, height2) = img2_rgb.dimensions();
    let total_width = width1 + width2;
    let max_height = height1.max(height2);

    let mut combined_image = ImageBuffer::new(total_width, max_height);

    // Copy the first image into the left half of the composite
    for y in 0..height1 {
        for x in 0..width1 {
            combined_image.put_pixel(x, y, *img1_rgb.get_pixel(x, y));
        }
    }

    // Copy the second image into the right half of the composite
    for y in 0..height2 {
        for x in 0..width2 {
            combined_image.put_pixel(width1 + x, y, *img2_rgb.get_pixel(x, y));
        }
    }

    // Draw the connecting lines between matched keypoints
    for &(i, j, _) in &good_matches {
        let kp1 = (img1_keypoints[i].x, img1_keypoints[i].y);
        let kp2 = (img2_keypoints[j].x, img2_keypoints[j].y);
        let color = Rgb([0, 255, 0]);
        draw_line_segment_mut(
            &mut combined_image,
            (kp1.0 as f32, kp1.1 as f32),
            ((width1 as f32 + kp2.0 as f32), kp2.1 as f32),
            color,
        );
    }

    // Save the side-by-side image
    combined_image.save("combined_matches.png").unwrap();
}

Python code:

import cv2
import numpy as np
import matplotlib.pyplot as plt
import time
from matplotlib import rcParams

# Set a Chinese font
rcParams['font.sans-serif'] = ['Smiley Sans']  # default to Smiley Sans (得意黑)
rcParams['axes.unicode_minus'] = False  # keep the minus sign '-' from rendering as a box in saved figures

def main(img1_path, img2_path):
    # Read the images
    img1 = cv2.imread(img1_path, cv2.IMREAD_COLOR)
    img2 = cv2.imread(img2_path, cv2.IMREAD_COLOR)
    assert img1 is not None and img2 is not None, "failed to read images"

    # Initialize the ORB detector
    orb = cv2.ORB_create()

    # Step 1: detect Oriented FAST corner locations
    start_time = time.time()
    keypoints1 = orb.detect(img1, None)
    keypoints2 = orb.detect(img2, None)

    # Step 2: compute BRIEF descriptors
    keypoints1, descriptors1 = orb.compute(img1, keypoints1)
    keypoints2, descriptors2 = orb.compute(img2, keypoints2)
    end_time = time.time()
    print(f"提取ORB特征点耗时: {end_time - start_time} 秒")

    # Draw the keypoints (convert BGR to RGB: OpenCV loads BGR, matplotlib expects RGB)
    outimg1 = cv2.drawKeypoints(img1, keypoints1, None, color=(0, 255, 0), flags=0)
    plt.imshow(cv2.cvtColor(outimg1, cv2.COLOR_BGR2RGB))
    plt.title("ORB keypoints")
    plt.show()

    # Step 3: match using the Hamming distance
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    start_time = time.time()
    matches = bf.match(descriptors1, descriptors2)
    end_time = time.time()
    print(f"匹配ORB特征点耗时: {end_time - start_time} 秒")

    # Step 4: filter the matched pairs
    matches = sorted(matches, key=lambda x: x.distance)
    min_dist = matches[0].distance
    max_dist = matches[-1].distance

    print(f"-- 最大距离: {max_dist}")
    print(f"-- 最小距离: {min_dist}")

    # A match is considered wrong when the descriptor distance exceeds twice the
    # minimum distance; since the minimum can be very small, an empirical floor of 30 is used.
    good_matches = [m for m in matches if m.distance <= max(2 * min_dist, 30.0)]

    # Step 5: draw the match results (converted BGR -> RGB for matplotlib)
    img_matches = cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches, None, flags=2)
    img_good_matches = cv2.drawMatches(img1, keypoints1, img2, keypoints2, good_matches, None, flags=2)

    plt.imshow(cv2.cvtColor(img_matches, cv2.COLOR_BGR2RGB))
    plt.title("All matches")
    plt.show()

    plt.imshow(cv2.cvtColor(img_good_matches, cv2.COLOR_BGR2RGB))
    plt.title("Filtered matches")
    plt.show()

if __name__ == "__main__":
    import sys
    main("../assets/ch7-1.png", "../assets/ch7-2.png")

Results

Rust output:

ORB feature extraction took: 1.921584042s
ORB matching took: 1.893269416s
-- Max distance: 49
-- Min distance: 0

Python output:

ORB feature extraction took: 0.055989980697631836 s
ORB matching took: 0.001383066177368164 s
-- Max distance: 81.0
-- Min distance: 4.0
[Figures: all matches (image 1), all matches (image 2), good matches (image 1), good matches (image 2), filtered matches joined by lines, Python result 1, Python result 2]

From: https://www.cnblogs.com/qsbye/p/18686371
