
Study Notes - OpenCV Official Tutorials - Camera Calibration and 3D Reconstruction


1. Camera Calibration

https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_calibration/py_calibration.html#calibration

Images: https://files.cnblogs.com/files/blogs/760881/%E7%9B%B8%E6%9C%BA%E6%A0%A1%E5%87%86%E5%92%8C3%E7%BB%B4%E9%87%8D%E5%BB%BA.zip?t=1736736254&download=true

import numpy as np
import cv2
import glob

# termination criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

# Collect chessboard corner data for camera calibration
# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)

# Arrays to store object points and image points from all the images.
objpoints = [] # 3d point in real world space
imgpoints = [] # 2d points in image plane.

images = glob.glob('pics_calibration/*.jpg')

for fname in images:
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    # Find the chess board corners
    ret, corners = cv2.findChessboardCorners(gray, (7,6),None)

    # If found, add object points, image points (after refining them)
    if ret == True:
        objpoints.append(objp)

        # refine the corner locations to sub-pixel accuracy
        corners2 = cv2.cornerSubPix(gray, corners, (11,11), (-1,-1), criteria)
        imgpoints.append(corners2)

        # Draw and display the corners
        cv2.drawChessboardCorners(img, (7,6), corners2, ret)
        cv2.imshow('img',img)
        cv2.waitKey(500)

# Calibrate the camera
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
# ret: RMS re-projection error of the calibration; the smaller, the better.
# mtx: estimated camera intrinsic matrix (updated in place if supplied as input).
# dist: estimated distortion coefficients (updated in place if supplied as input).
# rvecs: rotation vector for each image.
# tvecs: translation vector for each image.

# Undistort the image, method 1: cv2.undistort
img = cv2.imread('pics_calibration/left12.jpg')
h, w = img.shape[:2]
# alpha=0: the refined matrix keeps only valid pixels (no black border); alpha=1 would keep all source pixels
newcameramtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w,h), 0, (w,h))
# undistort
dst = cv2.undistort(img, mtx, dist, None, newcameramtx)
# crop the image
x,y,w,h = roi
dst = dst[y:y+h, x:x+w]
cv2.imwrite('calibresult1.png',dst)

# Undistort the image, method 2: remapping
mapx, mapy = cv2.initUndistortRectifyMap(mtx, dist, None, newcameramtx, (w,h), 5)  # 5 == cv2.CV_32FC1, the map type
dst = cv2.remap(img, mapx, mapy, cv2.INTER_LINEAR)  # remap img to remove distortion, producing dst

# crop the image
x,y,w,h = roi
dst = dst[y:y+h, x:x+w]
cv2.imwrite('calibresult2.png',dst)

cv2.imshow('undistort', dst)

# Compute the mean re-projection error
mean_error = 0
for i in range(len(objpoints)):
    imgpoints2, _ = cv2.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv2.norm(imgpoints[i],imgpoints2, cv2.NORM_L2)/len(imgpoints2)
    mean_error += error

print("total error: ", mean_error/len(objpoints))

cv2.waitKey(0)
cv2.destroyAllWindows()
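
The pose-estimation scripts in section 2 read the calibration results back from B.npy, but the post never shows the saving step. A minimal sketch of that step, assuming the dict keys mtx/dist/rvecs/tvecs that the loading code below expects:

# Hypothetical saving step (not shown in the original tutorial): store the
# calibration results as a dict inside a .npy file so the pose-estimation
# scripts can unpack them by key.
calib = {'mtx': mtx, 'dist': dist, 'rvecs': rvecs, 'tvecs': tvecs}
np.save('B.npy', calib)  # np.save pickles the dict; reload with allow_pickle=True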

2. Pose Estimation

Draw the coordinate axes on an image

import cv2
import numpy as np
import glob

# Load previously saved calibration data.
# B.npy is assumed to hold the dict written in section 1 (see the saving sketch there).
data = np.load('B.npy', allow_pickle=True).item()
# allow_pickle=True is required because the file stores a pickled Python dict;
# .item() extracts the dict from the 0-d object array that np.save wrapped it in.

# Unpack the calibration results
mtx, dist, rvecs, tvecs = data['mtx'], data['dist'], data['rvecs'], data['tvecs']



def draw(img, corners, imgpts):
    # cv2.line expects integer pixel coordinates (projectPoints returns floats)
    corner = tuple(map(int, corners[0].ravel()))
    img = cv2.line(img, corner, tuple(map(int, imgpts[0].ravel())), (255,0,0), 5)  # x axis, blue
    img = cv2.line(img, corner, tuple(map(int, imgpts[1].ravel())), (0,255,0), 5)  # y axis, green
    img = cv2.line(img, corner, tuple(map(int, imgpts[2].ravel())), (0,0,255), 5)  # z axis, red
    return img

# Termination criteria, type: cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER
# TERM_CRITERIA_EPS: stop once the requested accuracy (epsilon) is reached.
# TERM_CRITERIA_MAX_ITER: stop once the maximum number of iterations is reached.
# The two flags are combined with +; since EPS is 1 and MAX_ITER is 2, this is
# the same as their bitwise OR, so the loop stops when either condition is met.
# Maximum iterations: 30; required accuracy: 0.001.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)


objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)


# Axis endpoints in chessboard units; z is negative so the third axis points
# from the board toward the camera.
axis = np.float32([[3,0,0], [0,3,0], [0,0,-3]]).reshape(-1,3)

for fname in glob.glob('pics_calibration/left*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (7,6),None)

    if ret == True:
        corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)

        # Find the rotation and translation vectors.
        # retval: success flag.
        # rvecs: rotation vector (a compact representation of the rotation matrix).
        # tvecs: translation vector.
        # inliers: indices of the correspondences RANSAC judged to be inliers,
        # i.e. the data points considered valid matches.
        retval, rvecs, tvecs, inliers = cv2.solvePnPRansac(objp, corners2, mtx, dist)
        print(rvecs)  # debug: inspect the recovered rotation vector

        # project 3D points to image plane
        imgpts, jac = cv2.projectPoints(axis, rvecs, tvecs, mtx, dist)

        img = draw(img,corners2,imgpts)
        cv2.imshow('img',img)
        k = cv2.waitKey(0) & 0xff
        if k == ord('s'):  # waitKey returns an int, so compare against ord('s')
            cv2.imwrite(fname[:-4] + '.png', img)  # strip '.jpg' and save as PNG

cv2.waitKey(0)
cv2.destroyAllWindows()
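
The comments above describe rvecs as a compact encoding of a rotation matrix. If the full 3x3 matrix is needed (for example, to build a 4x4 camera pose), cv2.Rodrigues performs the conversion; a minimal sketch assuming the rvecs returned by solvePnPRansac above:

# Convert the rotation vector into a 3x3 rotation matrix.
# cv2.Rodrigues also returns the Jacobian of the conversion.
R, jacobian = cv2.Rodrigues(rvecs)
print(R.shape)               # (3, 3)
print(np.round(R @ R.T, 6))  # R is orthonormal, so R @ R.T is the identity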

Draw a cube on the image

import cv2
import numpy as np
import glob

# Load previously saved calibration data.
# B.npy is assumed to hold the dict written in section 1 (see the saving sketch there).
data = np.load('B.npy', allow_pickle=True).item()
# allow_pickle=True is required because the file stores a pickled Python dict;
# .item() extracts the dict from the 0-d object array that np.save wrapped it in.

# Unpack the calibration results
mtx, dist, rvecs, tvecs = data['mtx'], data['dist'], data['rvecs'], data['tvecs']



def draw(img, corners, imgpts):
    imgpts = np.int32(imgpts).reshape(-1,2)

    # draw ground floor in green (negative thickness fills the contour)
    img = cv2.drawContours(img, [imgpts[:4]], -1, (0,255,0), -3)

    # draw pillars in blue
    for i,j in zip(range(4), range(4,8)):
        img = cv2.line(img, tuple(imgpts[i]), tuple(imgpts[j]), (255,0,0), 3)

    # draw top layer in red
    img = cv2.drawContours(img, [imgpts[4:]], -1, (0,0,255), 3)

    return img
# Termination criteria, type: cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER
# TERM_CRITERIA_EPS: stop once the requested accuracy (epsilon) is reached.
# TERM_CRITERIA_MAX_ITER: stop once the maximum number of iterations is reached.
# The two flags are combined with +; since EPS is 1 and MAX_ITER is 2, this is
# the same as their bitwise OR, so the loop stops when either condition is met.
# Maximum iterations: 30; required accuracy: 0.001.
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)


objp = np.zeros((6*7,3), np.float32)
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)


# 8 cube corners in chessboard units: four on the board plane, four floating
# above it (negative z points toward the camera)
axis = np.float32([[0,0,0], [0,3,0], [3,3,0], [3,0,0],
                   [0,0,-3],[0,3,-3],[3,3,-3],[3,0,-3]])
for fname in glob.glob('pics_calibration/left*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (7,6),None)

    if ret == True:
        corners2 = cv2.cornerSubPix(gray,corners,(11,11),(-1,-1),criteria)

        # Find the rotation and translation vectors.
        # retval: success flag.
        # rvecs: rotation vector (a compact representation of the rotation matrix).
        # tvecs: translation vector.
        # inliers: indices of the correspondences RANSAC judged to be inliers,
        # i.e. the data points considered valid matches.
        retval, rvecs, tvecs, inliers = cv2.solvePnPRansac(objp, corners2, mtx, dist)
        print(rvecs)  # debug: inspect the recovered rotation vector

        # project 3D points to image plane
        imgpts, jac = cv2.projectPoints(axis, rvecs, tvecs, mtx, dist)

        img = draw(img,corners2,imgpts)
        cv2.imshow('img',img)
        k = cv2.waitKey(0) & 0xff
        if k == ord('s'):  # waitKey returns an int, so compare against ord('s')
            cv2.imwrite(fname[:-4] + '.png', img)  # strip '.jpg' and save as PNG

cv2.waitKey(0)
cv2.destroyAllWindows()
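
Since every detected chessboard corner is a trustworthy correspondence, the RANSAC variant is not strictly required here. A sketch of the plain cv2.solvePnP alternative (an option, not what the tutorial uses), which skips the inlier search:

# Plain PnP without RANSAC; fine when all correspondences are trusted,
# as with chessboard corners refined by cornerSubPix.
ret, rvec, tvec = cv2.solvePnP(objp, corners2, mtx, dist)
imgpts, jac = cv2.projectPoints(axis, rvec, tvec, mtx, dist)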

 

3. Epipolar Geometry

3.1 ORB Feature Detection and Description

import cv2
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow

img1 = cv2.imread('pics_Epipolar_Geometry/left.jpg',0)  #queryimage # left image
img2 = cv2.imread('pics_Epipolar_Geometry/right.jpg',0) #trainimage # right image
print(img1 is not None, img2 is not None)


# Create an ORB detector/descriptor instance
# (cv2.ORB() was the pre-3.0 constructor; cv2.ORB_create() is the current API)
orb = cv2.ORB_create()
# Find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1  # KD-tree index (in FLANN's enum, 0 is linear search, 1 is the KD-tree)
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)  # KD-tree with 5 trees
search_params = dict(checks=50)  # leaf checks per query; higher is more accurate but slower

flann = cv2.FlannBasedMatcher(index_params,search_params)

# ORB descriptors are binary (uint8), while the KD-tree index expects float
# vectors, so cast them here. (An LSH index is the more natural fit for
# binary descriptors; see the sketch after this listing.)
des1 = des1.astype(np.float32)
des2 = des2.astype(np.float32)

matches = flann.knnMatch(des1, des2, k=2)

good = []
pts1 = []
pts2 = []

# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.8*n.distance:
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)

pts1 = np.int32(pts1)
pts2 = np.int32(pts2)
F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS)

# We select only inlier points
pts1 = pts1[mask.ravel()==1]
pts2 = pts2[mask.ravel()==1]


def drawlines(img1,img2,lines,pts1,pts2):
    ''' img1 - image on which we draw the epilines for the points in img2
        lines - corresponding epilines '''
    r,c = img1.shape
    img1 = cv2.cvtColor(img1,cv2.COLOR_GRAY2BGR)
    img2 = cv2.cvtColor(img2,cv2.COLOR_GRAY2BGR)
    for r,pt1,pt2 in zip(lines,pts1,pts2):
        color = tuple(np.random.randint(0,255,3).tolist())
        x0,y0 = map(int, [0, -r[2]/r[1] ])
        x1,y1 = map(int, [c, -(r[2]+r[0]*c)/r[1] ])
        img1 = cv2.line(img1, (x0,y0), (x1,y1), color,1)
        img1 = cv2.circle(img1,tuple(pt1),5,color,-1)
        img2 = cv2.circle(img2,tuple(pt2),5,color,-1)
    return img1,img2

# Find epilines corresponding to points in right image (second image) and
# drawing its lines on left image
lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1,1,2), 2,F)
lines1 = lines1.reshape(-1,3)
img5,img6 = drawlines(img1,img2,lines1,pts1,pts2)

# Find epilines corresponding to points in left image (first image) and
# drawing its lines on right image
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1,1,2), 1,F)
lines2 = lines2.reshape(-1,3)
img3,img4 = drawlines(img2,img1,lines2,pts2,pts1)

plt.subplot(121),plt.imshow(img5)
plt.subplot(122),plt.imshow(img3)
plt.show()
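
Casting the binary ORB descriptors to float32 makes them fit the KD-tree index, but FLANN's LSH index is designed for binary descriptors and avoids the cast. A sketch of that variant; the LSH parameter values below are commonly used defaults, not something taken from the tutorial:

# LSH index for binary descriptors such as ORB
FLANN_INDEX_LSH = 6
index_params = dict(algorithm=FLANN_INDEX_LSH,
                    table_number=6,       # number of hash tables
                    key_size=12,          # key length in bits
                    multi_probe_level=1)  # neighbouring-bucket probes
flann = cv2.FlannBasedMatcher(index_params, dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)  # des1/des2 stay uint8; no float cast
# Caution: with LSH, knnMatch may return fewer than k neighbours for some
# queries, so guard the ratio test (skip result lists shorter than 2).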

3.2 Using the SIFT algorithm requires opencv-contrib-python 4.4.0.46 or later

pip install opencv-contrib-python==4.4.0.46

Find keypoints with SIFT

import cv2
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.pyplot import imshow

img1 = cv2.imread('pics_Epipolar_Geometry/left.jpg',0)  #queryimage # left image
img2 = cv2.imread('pics_Epipolar_Geometry/right.jpg',0) #trainimage # right image
print(img1 is not None, img2 is not None)

# Find the keypoints and descriptors with SIFT
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1  # KD-tree index (in FLANN's enum, 0 is linear search, 1 is the KD-tree)
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)  # KD-tree with 5 trees
search_params = dict(checks=50)  # leaf checks per query; higher is more accurate but slower

flann = cv2.FlannBasedMatcher(index_params,search_params)

# SIFT descriptors are already float32, so they can be matched directly
matches = flann.knnMatch(des1, des2, k=2)

good = []
pts1 = []
pts2 = []

# ratio test as per Lowe's paper
for i,(m,n) in enumerate(matches):
    if m.distance < 0.8*n.distance:
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)

pts1 = np.int32(pts1)
pts2 = np.int32(pts2)
F, mask = cv2.findFundamentalMat(pts1,pts2,cv2.FM_LMEDS)

# We select only inlier points
pts1 = pts1[mask.ravel()==1]
pts2 = pts2[mask.ravel()==1]


def drawlines(img1,img2,lines,pts1,pts2):
    ''' img1 - image on which we draw the epilines for the points in img2
        lines - corresponding epilines '''
    r,c = img1.shape
    img1 = cv2.cvtColor(img1,cv2.COLOR_GRAY2BGR)
    img2 = cv2.cvtColor(img2,cv2.COLOR_GRAY2BGR)
    for r,pt1,pt2 in zip(lines,pts1,pts2):
        color = tuple(np.random.randint(0,255,3).tolist())
        x0,y0 = map(int, [0, -r[2]/r[1] ])
        x1,y1 = map(int, [c, -(r[2]+r[0]*c)/r[1] ])
        img1 = cv2.line(img1, (x0,y0), (x1,y1), color,1)
        img1 = cv2.circle(img1,tuple(pt1),5,color,-1)
        img2 = cv2.circle(img2,tuple(pt2),5,color,-1)
    return img1,img2

# Find epilines corresponding to points in right image (second image) and
# drawing its lines on left image
lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1,1,2), 2,F)
lines1 = lines1.reshape(-1,3)
img5,img6 = drawlines(img1,img2,lines1,pts1,pts2)

# Find epilines corresponding to points in left image (first image) and
# drawing its lines on right image
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1,1,2), 1,F)
lines2 = lines2.reshape(-1,3)
img3,img4 = drawlines(img2,img1,lines2,pts2,pts1)

plt.subplot(121),plt.imshow(img5)
plt.subplot(122),plt.imshow(img3)
plt.show()
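
Before trusting the fundamental matrix, it can help to eyeball the matches that survived Lowe's ratio test. A minimal sketch using cv2.drawMatches on the good list built above (a sanity check, not part of the tutorial):

# Visualize the matches that passed the ratio test
img_matches = cv2.drawMatches(img1, kp1, img2, kp2, good, None,
                              flags=cv2.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.imshow(img_matches), plt.show()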


