- OpenCV 4 with Python Blueprints
- Dr. Menua Gevorgyan, Arsen Mamikonyan, Michael Beyeler
Accessing the Kinect 3D sensor
The easiest way to access a Kinect sensor is by using an OpenKinect module called freenect. For installation instructions, take a look at the preceding section.
The freenect module has functions such as sync_get_depth() and sync_get_video(), used to obtain images synchronously from the depth sensor and camera sensor respectively. For this chapter, we will need only the Kinect depth map, which is a single-channel (grayscale) image in which each pixel value is the estimated distance from the camera to a particular surface in the visual scene.
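As a quick illustration of these two calls, the following sketch grabs one raw depth map and one RGB frame (assuming a Kinect is connected and the freenect Python bindings are installed; the printed shapes and data types are what the default modes typically return):
import freenect

# Grab one raw depth map (11-bit values stored in a 16-bit array) and its timestamp
depth, depth_timestamp = freenect.sync_get_depth()

# Grab one RGB frame from the camera sensor and its timestamp
video, video_timestamp = freenect.sync_get_video()

print(depth.shape, depth.dtype)   # typically (480, 640), uint16
print(video.shape, video.dtype)   # typically (480, 640, 3), uint8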
Here, we will design a function that reads a frame from the sensor, converts it to the desired format, and returns the frame together with a success status, as follows:
def read_frame() -> Tuple[bool, np.ndarray]:
The function consists of the following steps:
- Grab a frame; terminate the function if a frame was not acquired, like this:
depth, timestamp = freenect.sync_get_depth()
if depth is None:
    return False, None
The sync_get_depth method returns both the depth map and a timestamp. By default, the map is in an 11-bit format: the lower 10 bits encode the estimated depth, while the most significant bit is set to 1 when the distance estimation failed for that pixel.
- It is a good idea to standardize the data to an 8-bit precision format: the 11-bit format cannot be visualized with cv2.imshow right away, and in the future we might want to use a different sensor that returns its data in yet another format. The conversion is done as follows:
np.clip(depth, 0, 2**10 - 1, depth)
depth >>= 2
In the previous code, we first clip the values to 1,023 (that is, 2**10 - 1) so that they fit into 10 bits; this clipping assigns undetected distances to the farthest possible point. Next, we shift the values 2 bits to the right so that the distance fits into 8 bits.
- Finally, we convert the image into an 8-bit unsigned integer array and return the result, as follows:
return True, depth.astype(np.uint8)
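For reference, here is the complete function assembled from the preceding steps (a minimal sketch that simply combines the snippets above with the necessary imports):
from typing import Tuple

import freenect
import numpy as np

def read_frame() -> Tuple[bool, np.ndarray]:
    # Grab a raw 11-bit depth map together with its timestamp
    depth, timestamp = freenect.sync_get_depth()
    if depth is None:
        return False, None
    # Clip to 10 bits, mapping undetected distances to the farthest point
    np.clip(depth, 0, 2**10 - 1, depth)
    # Drop the two least significant bits to fit the values into 8 bits
    depth >>= 2
    return True, depth.astype(np.uint8)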
Now, the depth image can be visualized as follows:
cv2.imshow("depth", read_frame()[1])
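If you want a live view rather than a single frame, a simple display loop could look like this (a sketch; the window name and the Esc exit key are arbitrary choices, and cv2.waitKey is needed for the window to refresh):
import cv2

while True:
    success, depth = read_frame()
    if not success:
        break
    cv2.imshow("depth", depth)
    # Refresh the window and exit when the Esc key (ASCII 27) is pressed
    if cv2.waitKey(10) == 27:
        break

cv2.destroyAllWindows()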
Let's see how to use OpenNI-compatible sensors in the next section.