- OpenCV 4 with Python Blueprints
- Dr. Menua Gevorgyan Arsen Mamikonyan Michael Beyeler
Finding the most prominent depth of the image center region
Once the hand is placed roughly in the center of the screen, we can start finding all image pixels that lie on the same depth plane as the hand. This is done by following these steps:
- First, we simply need to determine the most prominent depth value of the center region of the image. The simplest approach would be to look only at the depth value of the center pixel, like this:
```python
height, width = depth.shape
center_pixel_depth = depth[height // 2, width // 2]
```
- Then, create a mask in which all pixels at a depth of center_pixel_depth are white and all others are black, as follows:
```python
import numpy as np

depth_mask = np.where(depth == center_pixel_depth,
                      255, 0).astype(np.uint8)
```
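The naive approach can be sketched end to end on a synthetic depth map. The simulated `depth` array below is an assumption for illustration; a real one would come from the Kinect sensor:

```python
import numpy as np

# Simulated 480x640 depth map in millimeters; a real frame would come
# from the depth sensor instead of a random generator
rng = np.random.default_rng(42)
depth = rng.integers(700, 2000, size=(480, 640)).astype(np.uint16)

height, width = depth.shape
center_pixel_depth = depth[height // 2, width // 2]

# Exact-equality mask: white only where the depth matches the center pixel
depth_mask = np.where(depth == center_pixel_depth,
                      255, 0).astype(np.uint8)
```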
However, this approach will not be very robust, because it can be compromised by any of the following:
- Your hand will not be placed perfectly parallel to the Kinect sensor.
- Your hand will not be perfectly flat.
- The Kinect sensor values will be noisy.
Therefore, different regions of your hand will have slightly different depth values.
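The fragility of an exact-equality test can be demonstrated on a synthetic, noisy depth plane. The flat "hand" at 900 mm with ±5 mm of noise is an assumption for illustration; it shows that a tolerance-based mask keeps the whole plane while exact matching keeps only a small fraction of it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flat "hand" plane at 900 mm with +/- 5 mm of sensor noise
hand = 900 + rng.integers(-5, 6, size=(100, 100))

# Naive mask: only pixels exactly equal to the center pixel survive
exact_mask = hand == hand[50, 50]

# Tolerant mask: all pixels within 14 mm of the median survive
range_mask = np.abs(hand - np.median(hand)) <= 14
```

With this noise level, `range_mask` covers the entire plane, whereas `exact_mask` captures only the pixels that happen to share the center pixel's exact value.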
The segment_arm method takes a slightly better approach: it looks at a small neighborhood in the center of the image and determines the median depth value. This is done by following these steps:
- First, we find the center region (for example, 21 x 21 pixels) of the image frame, like this:
```python
def segment_arm(frame: np.ndarray, abs_depth_dev: int = 14) -> np.ndarray:
    height, width = frame.shape
    # find the center (21x21 pixels) region of the image frame
    center_half = 10  # half-width of the 21-pixel center window
    center = frame[height // 2 - center_half:height // 2 + center_half,
                   width // 2 - center_half:width // 2 + center_half]
```
- Then, we determine the median depth value, med_val, as follows:
```python
    med_val = np.median(center)
```
We can now compare med_val with the depth value of all pixels in the image and create a mask in which all pixels whose depth values are within a particular range [med_val-abs_depth_dev, med_val+abs_depth_dev] are white, and all other pixels are black.
However, for reasons that will become clear in a moment, let's paint the pixels gray instead of white, like this:
```python
    frame = np.where(abs(frame - med_val) <= abs_depth_dev,
                     128, 0).astype(np.uint8)
```
- The result will look like this:
You will note that the segmentation mask is not smooth. In particular, it contains holes at points where the depth sensor failed to make a prediction. In the next section, we will learn how to apply morphological closing to smooth out the segmentation mask.
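The steps above can be assembled into a minimal, runnable sketch. The synthetic frame below (a 1500 mm background with a noisy "arm" patch at 800 mm) is an assumption for illustration; a real frame would come from the depth sensor:

```python
import numpy as np

def segment_arm(frame: np.ndarray, abs_depth_dev: int = 14) -> np.ndarray:
    height, width = frame.shape
    # median depth of the center (21x21 pixels) region
    center_half = 10
    center = frame[height // 2 - center_half:height // 2 + center_half,
                   width // 2 - center_half:width // 2 + center_half]
    med_val = np.median(center)
    # paint pixels near the median depth gray (128), everything else black
    return np.where(np.abs(frame - med_val) <= abs_depth_dev,
                    128, 0).astype(np.uint8)

# Synthetic depth frame: background at 1500 mm, noisy "arm" patch at 800 mm
rng = np.random.default_rng(1)
frame = np.full((240, 320), 1500, dtype=np.uint16)
frame[80:160, 120:200] = 800 + rng.integers(-5, 6, size=(80, 80))

mask = segment_arm(frame)
```

Note that `np.median` returns a float, so the subtraction `frame - med_val` is promoted to floating point and does not underflow even for an unsigned-integer depth frame.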