OpenCV Human Pose Estimation (Body Keypoint Detection) with OpenPose

pan_jinquan 2021-08-15 21:12:22 Views: 74




The OpenPose human pose estimation project is an open-source library developed by Carnegie Mellon University (CMU), built on convolutional neural networks and supervised learning, with Caffe as its framework. It can estimate body movements, facial expressions, and finger motion; it works for both single and multiple people, and is highly robust. It was the world's first real-time, deep-learning-based multi-person 2D pose estimation application, and applications built on it have sprung up rapidly.

Its theoretical foundation is Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields, a CVPR 2017 paper by Zhe Cao (http://people.eecs.berkeley.edu/~zhecao/#top), Tomas Simon, Shih-En Wei, and Yaser Sheikh of CMU's Perceptual Computing Lab.

Human pose estimation has broad application prospects in sports and fitness, motion capture, 3D virtual fitting, and public-opinion monitoring; a consumer application many people know is TikTok's dance-off mini-game.

OpenPose's results here are not particularly good. I strongly recommend "2D Pose Human Keypoint Detection (Python/Android/C++ Demo)" at https://panjinquan.blog.csdn.net/article/details/115765863, which provides C++ inference code and an Android demo.

OpenPose project on GitHub: https://github.com/CMU-Perceptual-Computing-Lab/openpose

OpenCV demo: https://github.com/PanJinquan/opencv-learning-tutorials/blob/master/opencv_dnn_pro/openpose-opencv/openpose_for_image_test.py


1. How It Works

An input image is passed through a convolutional network to extract features, producing a set of feature maps. The network then splits into two branches: one CNN branch predicts the Part Confidence Maps, and the other predicts the Part Affinity Fields (PAFs).
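For the TensorFlow MobileNet variant used in the demo later in this article, both branches are stacked along the channel axis of a single output tensor. The sketch below shows how that stacked output would be split; the shapes are assumptions taken from the demo's own comment (output [1, 57, H, W]: 19 confidence-map channels, 38 PAF channels), not from official documentation.

```python
# Sketch: split a stacked two-branch output into heatmaps and PAFs.
# Shapes are assumed from the demo's comment: [1, 57, H, W], where
# channels 0-18 are part confidence maps (19 parts incl. background)
# and channels 19-56 are part affinity fields (an x/y pair per limb).
H, W = 46, 46
out = [[[[0.0] * W for _ in range(H)] for _ in range(57)]]  # dummy output

heatmaps = out[0][:19]   # Part Confidence Maps, one per body part
pafs = out[0][19:]       # Part Affinity Fields, 2 channels per limb
print(len(heatmaps), len(pafs))  # 19 38
```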


With these two outputs, bipartite matching (from graph theory) is used to solve the part association problem and connect the keypoints that belong to the same person. Because PAFs are vector fields, the resulting matches are very accurate, and the parts are finally assembled into one full skeleton per person.
Finally, multi-person parsing is computed from the PAFs: the multi-person parsing problem is converted into a graph problem and solved with the Hungarian algorithm.
(The Hungarian algorithm is the most common algorithm for bipartite graph matching; its core idea is to search for augmenting paths, and it uses them to find a maximum matching of a bipartite graph.)
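To make the part-association step concrete, here is a minimal sketch of bipartite matching on affinity scores. It uses a simple greedy strategy (pick the highest-scoring unused pair first) rather than the full Hungarian algorithm, and the scores are made up for illustration.

```python
# Greedy bipartite matching on affinity scores (minimal sketch).
# scores[i][j] is a hypothetical affinity between, say, neck candidate i
# and left-shoulder candidate j (higher = stronger PAF support).
def greedy_match(scores):
    # Enumerate all candidate pairs, sorted by score in descending order.
    pairs = sorted(
        ((s, i, j) for i, row in enumerate(scores) for j, s in enumerate(row)),
        reverse=True)
    used_i, used_j, matches = set(), set(), []
    for s, i, j in pairs:
        # Accept a pair only if neither endpoint is already matched.
        if i not in used_i and j not in used_j:
            used_i.add(i)
            used_j.add(j)
            matches.append((i, j))
    return matches

print(greedy_match([[0.9, 0.1], [0.2, 0.8]]))  # [(0, 0), (1, 1)]
```

An optimal assignment solver (e.g. the Hungarian algorithm) can beat this greedy pick in adversarial cases, but greedy matching per limb is close to what the original OpenPose pipeline actually does.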


2. Network Architecture

Stage 1: The first 10 layers of VGGNet create feature maps for the input image.

Stage 2: A two-branch, multi-stage CNN is used. The first branch predicts a set of 2D confidence maps (S) of body part locations (e.g., elbows, knees). The figure below shows the confidence map and affinity map for one keypoint, the left shoulder.

The second branch predicts a set of 2D vector fields (L) of part affinities, which encode the degree of association between parts. The figure below shows the part affinity between the neck and the left shoulder.
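The degree of association encoded by a PAF can be turned into a score for a candidate limb by sampling the field along the segment between two keypoints and projecting each sampled vector onto the limb direction, in the spirit of the paper's line integral. The sketch below uses a made-up field and made-up coordinates; it is an illustration, not the paper's exact formulation.

```python
import math

# Sketch: score a candidate limb against a part affinity field.
# paf_x/paf_y are the field's x/y components on a toy grid (made-up data).
def paf_score(paf_x, paf_y, p_from, p_to, samples=10):
    vx, vy = p_to[0] - p_from[0], p_to[1] - p_from[1]
    norm = math.hypot(vx, vy)
    if norm == 0:
        return 0.0
    ux, uy = vx / norm, vy / norm  # unit direction of the candidate limb
    total = 0.0
    for k in range(samples):
        t = k / (samples - 1)
        x = int(p_from[0] + t * vx)
        y = int(p_from[1] + t * vy)
        # Dot product of the field vector with the limb direction:
        # high when the field points along the limb.
        total += paf_x[y][x] * ux + paf_y[y][x] * uy
    return total / samples

# Toy field pointing in +x everywhere: a horizontal limb scores 1.0.
paf_x = [[1.0] * 46 for _ in range(46)]
paf_y = [[0.0] * 46 for _ in range(46)]
print(paf_score(paf_x, paf_y, (5, 10), (25, 10)))  # 1.0
```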

Stage 3: The confidence maps and affinity fields are parsed by greedy inference to produce 2D keypoints for all the people in the image.
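The per-keypoint half of that parsing reduces to finding the peak of each confidence map and scaling it back to pixel coordinates, which is exactly what the OpenCV demo in the next section does with cv.minMaxLoc. A dependency-free sketch with a toy heatmap (grid size, frame size, and peak location are all made-up values):

```python
# Minimal sketch: decode one toy confidence map to an image-space keypoint.
H, W = 46, 46                      # assumed network output grid
heat = [[0.0] * W for _ in range(H)]
heat[12][30] = 0.9                 # pretend the network put a peak here

frame_w, frame_h = 640, 480        # assumed original frame size
threshold = 0.2

# Global maximum of the heatmap (cv.minMaxLoc does this in the demo).
conf, (gy, gx) = max((heat[y][x], (y, x)) for y in range(H) for x in range(W))

# Map the grid coordinate back to pixel coordinates of the original frame,
# and keep the point only if its confidence clears the threshold.
point = (frame_w * gx // W, frame_h * gy // H) if conf > threshold else None
print(point)  # (417, 125)
```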


3. OpenCV OpenPose Inference Code

# -*-coding: utf-8 -*-
"""
@Project: python-learning-notes
@File   : openpose_for_image_test.py
@Author : panjq
@E-mail : [email protected]
@Date   : 2019-07-29 21:50:17
"""
import os
import glob

import cv2 as cv

BODY_PARTS = {"Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
              "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
              "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
              "LEye": 15, "REar": 16, "LEar": 17, "Background": 18}

POSE_PAIRS = [["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
              ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
              ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
              ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
              ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"]]


def detect_key_point(model_path, image_path, out_dir, inWidth=368, inHeight=368, threshold=0.2):
    net = cv.dnn.readNetFromTensorflow(model_path)
    frame = cv.imread(image_path)
    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]
    scalefactor = 2.0
    net.setInput(
        cv.dnn.blobFromImage(frame, scalefactor, (inWidth, inHeight), (127.5, 127.5, 127.5),
                             swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output is [1, 57, H, W]; only the first 19 channels are heatmaps
    assert len(BODY_PARTS) == out.shape[1]
    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]
        # Ideally we would find all local maxima. To keep the sample simple,
        # we only take the global maximum, so only a single pose can be
        # detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Keep the point only if its confidence is above the threshold.
        points.append((int(x), int(y)) if conf > threshold else None)
    for pair in POSE_PAIRS:
        partFrom, partTo = pair
        assert partFrom in BODY_PARTS
        assert partTo in BODY_PARTS
        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]
        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (0, 0, 255), cv.FILLED)
    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))
    cv.imwrite(os.path.join(out_dir, os.path.basename(image_path)), frame)
    cv.imshow('OpenPose using OpenCV', frame)
    cv.waitKey(0)


def detect_image_list_key_point(model_path, image_dir, out_dir, inWidth=480, inHeight=480, threshold=0.3):
    image_list = glob.glob(image_dir)
    for image_path in image_list:
        detect_key_point(model_path, image_path, out_dir, inWidth, inHeight, threshold)


if __name__ == "__main__":
    model_path = "pb/graph_opt.pb"
    # image_path = "body/*.jpg"
    out_dir = "result"
    # detect_image_list_key_point(model_path, image_path, out_dir)
    image_path = "./test.jpg"
    detect_key_point(model_path, image_path, out_dir, inWidth=368, inHeight=368, threshold=0.05)

References:

[1].https://blog.csdn.net/m0_38106923/article/details/89416514

Copyright notice: this article was written by [pan_jinquan]; please include a link to the original when reposting. https://gsmany.com/2021/08/20210815211145791v.html