Summary

In this chapter, you were introduced to programming concepts in CUDA C and to how parallel computing can be done using CUDA. It was shown that CUDA programs can run efficiently and in parallel on any NVIDIA GPU, so CUDA is both efficient and scalable. The CUDA API functions, provided over and above the existing ANSI C functions needed for parallel data computation, were discussed in detail. Using a simple two-variable addition example, we discussed how to call device code from host code via a kernel call, how to configure kernel launch parameters, and how to pass parameters to a kernel. It was also shown that CUDA does not guarantee the order in which blocks or threads will run, nor which block is assigned to which multiprocessor in hardware. Moreover, vector operations that take advantage of the parallel-processing capabilities of the GPU and CUDA were discussed; performing vector operations on the GPU can improve throughput drastically compared to the CPU. In the last section, common communication patterns used in parallel programming were discussed in detail. We have not yet discussed memory architecture, how threads can communicate with one another in CUDA, or what can be done when one thread needs data produced by another thread. So, in the next chapter, we will discuss memory architecture and thread synchronization in detail.
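To recap the kernel-launch pattern this chapter covered, the following minimal sketch adds two vectors on the GPU. The kernel name `gpuAdd`, the 1,024-element size, and the launch configuration are illustrative choices, not the chapter's exact code; it is meant only to show the host-to-device copy, the `<<<blocks, threads>>>` launch syntax, and the device-to-host copy of the result.

```cuda
#include <stdio.h>

#define N 1024

// Kernel: each thread adds one pair of elements in parallel.
__global__ void gpuAdd(int *d_a, int *d_b, int *d_c) {
    int tid = threadIdx.x + blockIdx.x * blockDim.x;
    if (tid < N)
        d_c[tid] = d_a[tid] + d_b[tid];
}

int main(void) {
    int h_a[N], h_b[N], h_c[N];
    int *d_a, *d_b, *d_c;

    for (int i = 0; i < N; i++) { h_a[i] = i; h_b[i] = 2 * i; }

    // Allocate device memory and copy inputs host -> device.
    cudaMalloc((void **)&d_a, N * sizeof(int));
    cudaMalloc((void **)&d_b, N * sizeof(int));
    cudaMalloc((void **)&d_c, N * sizeof(int));
    cudaMemcpy(d_a, h_a, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, N * sizeof(int), cudaMemcpyHostToDevice);

    // Launch: <<<number of blocks, threads per block>>> configures the kernel.
    gpuAdd<<<N / 256, 256>>>(d_a, d_b, d_c);

    // Copy the result device -> host; this cudaMemcpy also waits for the kernel.
    cudaMemcpy(h_c, d_c, N * sizeof(int), cudaMemcpyDeviceToHost);

    printf("h_c[10] = %d\n", h_c[10]);  // i + 2*i = 3*i, so 30 here

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```

Note that nothing in the launch tells us which block runs first or on which multiprocessor; correctness relies only on each thread computing an independent element.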