- Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA
- Bhaumik Vaidya
CUDA program structure
We saw a very simple Hello, CUDA! program earlier that showcased some important concepts related to CUDA programs. A CUDA program is a combination of functions that are executed either on the host or on the GPU device. The functions that do not exhibit parallelism are executed on the CPU, and the functions that exhibit data parallelism are executed on the GPU. The compiler segregates these functions during compilation. As seen in the previous chapter, functions meant for execution on the device are defined using the __global__ keyword and compiled by the NVCC compiler, while normal C host code is compiled by the C compiler. CUDA code is basically the same ANSI C code with the addition of some keywords needed for exploiting data parallelism.
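As a quick illustration of how device and host code coexist in one source file, here is a minimal sketch (the kernel name myKernel is a placeholder, not code from the book): the function marked with __global__ is compiled by NVCC for the GPU, while main is ordinary host code compiled by the host C compiler.

```cpp
#include <stdio.h>

// Device code: marked with __global__, compiled by NVCC for the GPU
__global__ void myKernel(void)
{
    // Every thread launched with this kernel executes this body on the device
}

// Host code: compiled by the host C compiler, runs on the CPU
int main(void)
{
    // Launch the kernel with one block of one thread
    myKernel<<<1, 1>>>();

    // Wait for the device to finish before the host program exits
    cudaDeviceSynchronize();

    printf("Hello, CUDA!\n");
    return 0;
}
```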
So, in this section, a simple two-variable addition program is used to explain important concepts related to CUDA programming, such as kernel calls, passing parameters to kernel functions from the host to the device, the configuration of kernel parameters, the CUDA APIs needed to exploit data parallelism, and how memory allocation takes place on the host and the device.
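The following is a sketch of what such a two-variable addition program might look like (the names gpuAdd, d_c, and h_c are illustrative, not necessarily those used in the book): the host allocates device memory with cudaMalloc, launches the kernel with a <<<blocks, threads>>> configuration while passing the scalar inputs by value, copies the result back with cudaMemcpy, and releases device memory with cudaFree.

```cpp
#include <stdio.h>
#include <cuda_runtime.h>

// Kernel: adds two integers on the device and stores the result in *d_c
__global__ void gpuAdd(int d_a, int d_b, int *d_c)
{
    *d_c = d_a + d_b;
}

int main(void)
{
    int h_c;   // Result on the host
    int *d_c;  // Pointer to the result in device memory

    // Allocate memory on the device for the result
    cudaMalloc((void **)&d_c, sizeof(int));

    // Launch the kernel with one block of one thread;
    // the scalar operands are passed by value from host to device
    gpuAdd<<<1, 1>>>(1, 4, d_c);

    // Copy the result from device memory back to host memory
    cudaMemcpy(&h_c, d_c, sizeof(int), cudaMemcpyDeviceToHost);

    printf("1 + 4 = %d\n", h_c);

    // Free the device memory
    cudaFree(d_c);
    return 0;
}
```

Note that only the result needs device memory here; small scalar inputs can be passed to the kernel by value, whereas the output must live in device memory so the host can copy it back after the kernel completes.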