- Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA
- Bhaumik Vaidya
Configuring kernel parameters
To start multiple threads on the device in parallel, we have to configure parameters in the kernel call, which are written inside the kernel launch operator. They specify the number of blocks and the number of threads per block. We can launch many blocks in parallel, with many threads in each block. Normally, there is a limit of 512 or 1,024 threads per block. Each block runs on a single streaming multiprocessor, and threads within one block can communicate with one another via shared memory. The programmer cannot choose which multiprocessor will execute a particular block, nor the order in which blocks or threads will execute.
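The exact per-block and per-grid limits vary by device, and they can be queried at runtime. As a minimal sketch (using the standard CUDA runtime API, not code from this book), the relevant fields of `cudaDeviceProp` can be printed like this:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    // Query the properties of device 0 (the default device)
    cudaGetDeviceProperties(&prop, 0);

    printf("Max threads per block: %d\n", prop.maxThreadsPerBlock);
    printf("Max block dimensions:  %d x %d x %d\n",
           prop.maxThreadsDim[0], prop.maxThreadsDim[1], prop.maxThreadsDim[2]);
    printf("Max grid dimensions:   %d x %d x %d\n",
           prop.maxGridSize[0], prop.maxGridSize[1], prop.maxGridSize[2]);
    return 0;
}
```

On most current GPUs, this reports 1,024 threads per block; older devices report 512.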
Suppose you want to start 500 threads in parallel; what is the modification that you can make to the kernel launch syntax that was shown previously? One option is to start one block of 500 threads via the following syntax:
gpuAdd<<<1, 500>>>(1, 4, d_c);
We can also start 500 blocks of one thread each, or two blocks of 250 threads each. Accordingly, you have to modify the values in the kernel launch operator. The programmer has to be careful that the number of threads per block does not exceed the maximum supported by your GPU device. In this book, we are targeting computer vision applications, where we need to work on two- and three-dimensional images. Here, it is better if blocks and threads are not one-dimensional but two- or three-dimensional, so that the thread layout matches the data, which simplifies processing and visualization.
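To make one kernel work under any of these splits, each thread typically computes a global index from its block and thread IDs. As a sketch (the index-guarded body below is an adaptation, not the book's exact `gpuAdd`, which computes a single sum):

```cuda
#include <cstdio>

// Each thread derives a unique global index, so the same kernel body
// works whether we launch 1 x 500, 500 x 1, or 2 x 250.
__global__ void gpuAddMany(int a, int b, int *d_c, int n) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (tid < n)            // guard: some splits may launch spare threads
        d_c[tid] = a + b;
}

// All three of these launch configurations run 500 threads in total:
//   gpuAddMany<<<1, 500>>>(1, 4, d_c, 500);  // 1 block of 500 threads
//   gpuAddMany<<<500, 1>>>(1, 4, d_c, 500);  // 500 blocks of 1 thread
//   gpuAddMany<<<2, 250>>>(1, 4, d_c, 500);  // 2 blocks of 250 threads
```

The guard `if (tid < n)` matters once the thread count is not an exact multiple of the block size; it is a common CUDA idiom for avoiding out-of-bounds writes.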
The GPU supports a three-dimensional grid of blocks and three-dimensional blocks of threads, with the following syntax:
mykernel<<<dim3(Nbx, Nby, Nbz), dim3(Ntx, Nty, Ntz)>>>();
Here Nbx, Nby, and Nbz indicate the number of blocks in a grid in the direction of the x, y, and z axes, respectively. Similarly, Ntx, Nty, and Ntz indicate the number of threads in a block in the direction of the x, y, and z axes. If the y and z dimensions are not specified, they are taken as 1 by default. So, for example, to process an image, you can start a 16 x 16 grid of blocks, all containing 16 x 16 threads. The syntax will be as follows:
mykernel<<<dim3(16, 16), dim3(16, 16)>>>();
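Inside such a kernel, each thread maps naturally onto one pixel. As an illustrative sketch (the kernel name and image size are hypothetical, not from the book), a 16 x 16 grid of 16 x 16 blocks covers a 256 x 256 image:

```cuda
// Hypothetical kernel: inverts a 256 x 256 single-channel image stored
// row-major in d_img. Each thread handles exactly one pixel.
__global__ void invertPixels(unsigned char *d_img, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;  // column index
    int y = blockIdx.y * blockDim.y + threadIdx.y;  // row index
    if (x < width && y < height)                    // guard partial coverage
        d_img[y * width + x] = 255 - d_img[y * width + x];
}

// Launch: (16 x 16 blocks) * (16 x 16 threads) = 256 x 256 threads,
// one per pixel of a 256 x 256 image.
// invertPixels<<<dim3(16, 16), dim3(16, 16)>>>(d_img, 256, 256);
```

Matching the grid and block dimensions to the image dimensions like this keeps the index arithmetic simple: `x` and `y` are just the pixel coordinates.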
To summarize, the configuration of the number of blocks and the number of threads is very important while launching the kernel. It should be chosen with proper care depending on the application that we are working on and the GPU resources. The next section will explain some important CUDA functions added over regular ANSI C functions.