- OpenCV 4 with Python Blueprints
- Dr. Menua Gevorgyan, Arsen Mamikonyan, Michael Beyeler
Accessing the Kinect 3D sensor
The easiest way to access a Kinect sensor is by using an OpenKinect module called freenect. For installation instructions, take a look at the preceding section.
The freenect module has functions such as sync_get_depth() and sync_get_video(), used to obtain images synchronously from the depth sensor and camera sensor respectively. For this chapter, we will need only the Kinect depth map, which is a single-channel (grayscale) image in which each pixel value is the estimated distance from the camera to a particular surface in the visual scene.
Here, we will design a function that will read a frame from the sensor and convert it to the desired format, and return the frame together with a success status, as follows:
def read_frame() -> Tuple[bool, np.ndarray]:
The function consists of the following steps:
- Grab a frame; terminate the function if a frame was not acquired, like this:
depth, timestamp = freenect.sync_get_depth()
if depth is None:
    return False, None
The sync_get_depth function returns both the depth map and a timestamp. By default, the map is in an 11-bit format: the lower 10 bits encode the estimated distance, while the most significant bit is set to 1 when the distance estimation failed.
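To make this bit layout concrete, here is a small sketch (plain NumPy, no sensor required) that splits simulated raw 11-bit readings into the validity flag and the 10-bit distance, assuming the most significant bit is the failure flag as described above:

```python
import numpy as np

# Simulated raw 11-bit readings: bit 10 (the MSB) flags a failed
# estimate; bits 0-9 carry the distance.
raw = np.array([0b00000000000,   # valid, distance 0
                0b01111111111,   # valid, distance 1023 (farthest)
                0b10000000101],  # invalid: MSB set
               dtype=np.uint16)

invalid = (raw >> 10) & 1        # 1 where estimation failed
distance = raw & 0x3FF           # keep the lower 10 bits

print(invalid)    # [0 0 1]
print(distance)   # [   0 1023    5]
```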
- It is a good idea to standardize the data to 8-bit precision, as an 11-bit format cannot be visualized with cv2.imshow right away, and we might later want to swap in a different sensor that returns a different format. Convert the data as follows:
np.clip(depth, 0, 2**10 - 1, depth)
depth >>= 2
In the preceding code, we first clip the values to 1,023 (that is, 2**10 - 1) so that they fit in 10 bits; this clipping assigns any undetected distance to the farthest possible point. Next, we shift the values 2 bits to the right so that the distance fits in 8 bits.
- Finally, we convert the image into an 8-bit unsigned integer array and return the result, as follows:
return True, depth.astype(np.uint8)
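Because the clip-and-shift conversion is independent of the sensor I/O, it can be factored into a small helper and exercised on synthetic data without Kinect hardware. This is an illustrative sketch (depth_to_uint8 is our own name, not part of freenect):

```python
import numpy as np

def depth_to_uint8(depth: np.ndarray) -> np.ndarray:
    """Squeeze an 11-bit Kinect depth map into 8-bit precision."""
    depth = depth.copy()                 # leave the caller's array intact
    np.clip(depth, 0, 2**10 - 1, depth)  # clamp failed readings to the farthest point
    depth >>= 2                          # drop 2 bits: 10-bit range -> 8-bit range
    return depth.astype(np.uint8)

# Synthetic frame: a near reading, the farthest valid value, and a
# failed reading (MSB set, so the raw value is >= 1024).
raw = np.array([4, 1023, 2047], dtype=np.int32)
print(depth_to_uint8(raw))   # [  1 255 255]
```

Keeping the conversion pure like this also makes it trivial to unit test, which is harder to do for code that talks to the sensor directly.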
Now, the depth image can be visualized as follows:
cv2.imshow("depth", read_frame()[1])
Let's see how to use OpenNI-compatible sensors in the next section.