
Mediapipe Gesture Recognition: Rock, Paper, Scissors

Posted: 2022-10-08 16:55:14

Reference:

Mediapipe gesture recognition (the original article this post builds on)

When running the code from that article, the following error is raised: TypeError: create_int(): incompatible function arguments. The following argument types ...

Cause: self.mpHands.Hands() takes five parameters in total, but the code in that article is missing one of them (the model_complexity argument added in newer MediaPipe versions), which is what triggers the TypeError: create_int(): incompatible function arguments error.
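A minimal sketch of the fix (not the referenced author's code): pass all five arguments, ideally by keyword so a version mismatch cannot silently shift them. The parameter names assume a MediaPipe release that exposes model_complexity.

import mediapipe as mp

# Passing the arguments by keyword avoids the create_int() error: in newer
# MediaPipe releases Hands() takes model_complexity as its third parameter,
# so omitting it pushes a float confidence value into an int-typed slot.
hands = mp.solutions.hands.Hands(
    static_image_mode=False,
    max_num_hands=2,
    model_complexity=1,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)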

 

Main content

Demo screenshot (image from the original post, not reproduced here)

 

The helper class used by the main script: HandTrackingModule.py

import cv2
import mediapipe as mp
import time
import math


class handDetctor():
    def __init__(self, mode=False, maxHands=2, modelComplexity=1, detectionCon=0.5, trackCon=0.5):
        self.mode = mode
        self.maxHands = maxHands
        self.modelComplexity = modelComplexity  # newly added (required by newer MediaPipe versions)
        self.detectionCon = detectionCon
        self.trackCon = trackCon

        self.mpHands = mp.solutions.hands
        self.hands = self.mpHands.Hands(self.mode, self.maxHands, self.modelComplexity,
                                        self.detectionCon, self.trackCon)
        self.mpDraw = mp.solutions.drawing_utils

    def findHands(self, img, draw=True):
        imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # convert BGR to RGB
        self.results = self.hands.process(imgRGB)

        # print(results.multi_hand_landmarks)
        if self.results.multi_hand_landmarks:
            for handLms in self.results.multi_hand_landmarks:
                if draw:
                    self.mpDraw.draw_landmarks(img, handLms, self.mpHands.HAND_CONNECTIONS)

        return img

    def findPosition(self, img, handNo=0, draw=True):
        lmList = []
        if self.results.multi_hand_landmarks:
            myHand = self.results.multi_hand_landmarks[handNo]
            for id, lm in enumerate(myHand.landmark):
                # print(id, lm)
                # convert each normalized landmark to pixel coordinates
                h, w, c = img.shape
                cx, cy = int(lm.x * w), int(lm.y * h)
                lmList.append([id, cx, cy])
                if draw:
                    cv2.putText(img, str(int(id)), (cx + 10, cy + 10), cv2.FONT_HERSHEY_PLAIN,
                                1, (0, 0, 255), 2)

        return lmList

    # Return a list with the open/closed state of each finger (True = open)
    def fingerStatus(self, lmList):

        fingerList = []
        id, originx, originy = lmList[0]
        keypoint_list = [[2, 4], [6, 8], [10, 12], [14, 16], [18, 20]]
        for point in keypoint_list:
            id, x1, y1 = lmList[point[0]]
            id, x2, y2 = lmList[point[1]]
            if math.hypot(x2 - originx, y2 - originy) > math.hypot(x1 - originx, y1 - originy):
                fingerList.append(True)
            else:
                fingerList.append(False)

        return fingerList


def main():
    cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
    # FPS bookkeeping
    pTime = 0
    cTime = 0
    detector = handDetctor()
    while True:
        success, img = cap.read()

        img = detector.findHands(img)
        lmList = detector.findPosition(img, draw=False)
        if len(lmList) != 0:
            # print(lmList)
            print(detector.fingerStatus(lmList))

        # compute and display the frame rate
        cTime = time.time()
        fps = 1 / (cTime - pTime)
        pTime = cTime
        cv2.putText(img, str(int(fps)), (10, 70), cv2.FONT_HERSHEY_PLAIN, 3, (255, 0, 255), 3)

        cv2.imshow("image", img)
        if cv2.waitKey(2) & 0xFF == 27:
            break

    cap.release()


if __name__ == '__main__':
    main()
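For reference, fingerStatus treats a finger as open when its fingertip lies farther from the wrist (landmark 0) than a lower joint on the same finger; the pairs in keypoint_list are (joint, tip) landmark indices for the thumb, index, middle, ring and little finger. A minimal standalone sketch of that check for the index finger alone, assuming the [id, x, y] triples returned by findPosition:

import math

def index_finger_open(lmList):
    # Landmark 0 is the wrist, 6 the index-finger PIP joint, 8 the fingertip.
    _, ox, oy = lmList[0]
    _, x1, y1 = lmList[6]
    _, x2, y2 = lmList[8]
    # Open if the tip is farther from the wrist than the PIP joint.
    return math.hypot(x2 - ox, y2 - oy) > math.hypot(x1 - ox, y1 - oy)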

Main script: gestureRecognition.py

import time
import cv2
import os
import HandTrackingModule as htm

wCam, hCam = 640, 480
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
cap.set(3, wCam)
cap.set(4, hCam)

# Load the gesture overlay images (they should be 200x200 pixels; the order
# returned by os.listdir determines which index corresponds to which gesture)
picture_path = "gesture_picture"
myList = os.listdir(picture_path)
print(myList)
overlayList = []
for imPath in myList:
    image = cv2.imread(f'{picture_path}/{imPath}')
    overlayList.append(image)

detector = htm.handDetctor(detectionCon=0.7)


while True:
    success, img = cap.read()

    img = detector.findHands(img)
    lmList = detector.findPosition(img, draw=False)
    if len(lmList) != 0:
        thumbOpen, firstOpen, secondOpen, thirdOpen, fourthOpen = detector.fingerStatus(lmList)
        # print('---------------------------')
        # print('thumbOpen:', thumbOpen)
        # print('firstOpen:', firstOpen)
        # print('secondOpen:', secondOpen)
        # print('thirdOpen:', thirdOpen)
        # print('fourthOpen:', fourthOpen)
        # Fist (all four fingers closed) -> rock
        if not firstOpen and not secondOpen and not thirdOpen and not fourthOpen:
            img[0:200, 0:200] = overlayList[1]
        # Index and middle fingers open, the rest closed -> scissors
        if firstOpen and secondOpen and not thirdOpen and not fourthOpen:
            img[0:200, 0:200] = overlayList[0]
        # All four fingers open -> paper
        if firstOpen and secondOpen and thirdOpen and fourthOpen:
            img[0:200, 0:200] = overlayList[2]
    cv2.imshow("image", img)
    if cv2.waitKey(2) & 0xFF == 27:
        break
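One practical note: the main loop pastes the overlay with img[0:200, 0:200] = overlayList[...], so every picture in gesture_picture must be exactly 200x200 pixels. If your images are a different size, a small adjustment to the loading loop (a sketch, same directory assumed) keeps the assignment valid:

overlayList = []
for imPath in myList:
    image = cv2.imread(f'{picture_path}/{imPath}')
    # Resize so the 200x200 region assignment in the main loop always fits.
    overlayList.append(cv2.resize(image, (200, 200)))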

 

From: https://www.cnblogs.com/yiyezhouming/p/16769467.html
