
  • OpenCV 4 with Python Blueprints
  • Dr. Menua Gevorgyan, Arsen Mamikonyan, Michael Beyeler

Hand Gesture Recognition Using a Kinect Depth Sensor

The goal of this chapter is to develop an app that detects and tracks simple hand gestures in real time, using the output of a depth sensor, such as that of a Microsoft Kinect 3D sensor or an ASUS Xtion sensor. The app will analyze each captured frame to perform the following tasks:

  • Hand region segmentation: The user's hand region will be extracted in each frame by analyzing the depth map output of the Kinect sensor, which is done by thresholding, applying some morphological operations, and finding connected components.
  • Hand shape analysis: The shape of the segmented hand region will be analyzed by determining contours, convex hull, and convexity defects.
  • Hand gesture recognition: The number of extended fingers will be determined based on the hand contour's convexity defects, and the gesture will be classified accordingly (with no extended fingers corresponding to a fist, and five extended fingers corresponding to an open hand). A minimal code sketch of these three steps follows this list.
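
The sketch below assumes a single-channel depth map, depth, is already available for the current frame; the threshold values and the helper names segment_hand and count_fingers are illustrative rather than the book's exact implementation.

```python
import cv2
import numpy as np


def segment_hand(depth, near=50, far=100):
    """Segment the hand by depth thresholding, morphology, and keeping
    the largest connected component (illustrative threshold values)."""
    # Keep only pixels within the assumed depth range of the hand.
    mask = cv2.inRange(depth, near, far)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Remove speckle noise and close small holes in the mask.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep only the largest connected component, assumed to be the hand.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if num < 2:
        return np.zeros_like(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return np.where(labels == largest, 255, 0).astype(np.uint8)


def count_fingers(mask, min_defect_depth=8000):
    """Estimate the number of extended fingers from convexity defects."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    contour = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull)
    if defects is None:
        return 0
    # Each sufficiently deep defect is a valley between two fingers, so
    # n deep defects suggest n + 1 extended fingers; no defects means a fist.
    deep = int(np.sum(defects[:, 0, 3] > min_defect_depth))
    return deep + 1 if deep > 0 else 0
```

Calling count_fingers(segment_hand(depth)) on each frame then yields a per-frame finger count that can be mapped to a gesture label.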

Gesture recognition is an ever-popular topic in computer science. This is because it not only enables humans to communicate with machines (Human-Machine Interaction (HMI)) but also constitutes the first step for machines to begin understanding human body language. With affordable sensors such as Microsoft Kinect or ASUS Xtion and open source software such as OpenKinect and OpenNI, it has never been easier to get started in the field yourself. So, what shall we do with all this technology?

In this chapter, we will cover the following topics:

  • Planning the app
  • Setting up the app
  • Tracking hand gestures in real time
  • Understanding hand region segmentation
  • Performing hand shape analysis
  • Performing hand gesture recognition

The beauty of the algorithm that we are going to implement in this chapter is that it works well for many hand gestures, yet it is simple enough to run in real time on a generic laptop. Also, if we want, we can easily extend it to incorporate more complicated hand-pose estimations.

Once you complete the app, you will understand how to use depth sensors in your own apps. You will learn how to extract shapes of interest from depth information with OpenCV, as well as how to analyze those shapes using their geometric properties.
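
As a taste of what working with a depth sensor looks like, here is a minimal sketch of reading depth frames through OpenCV's OpenNI2 backend. It assumes OpenCV was built with OpenNI2 support and that a Kinect or Xtion is connected; the display scaling factor is only for visualization.

```python
import cv2

# Open the first OpenNI2-compatible device (Kinect or Xtion).
capture = cv2.VideoCapture(cv2.CAP_OPENNI2)

while capture.isOpened():
    if not capture.grab():
        break
    # Retrieve the depth map (16-bit, in millimeters) for the grabbed frame.
    ok, depth = capture.retrieve(None, cv2.CAP_OPENNI_DEPTH_MAP)
    if not ok:
        break
    # Scale the depth values down to 8 bits purely for display.
    cv2.imshow('depth', cv2.convertScaleAbs(depth, alpha=0.05))
    if cv2.waitKey(10) & 0xFF == 27:  # quit on Esc
        break

capture.release()
cv2.destroyAllWindows()
```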
