
Threads, Synchronization, and Memory

In the last chapter, we saw how to write CUDA programs that leverage the processing capabilities of a GPU by executing multiple threads and blocks in parallel. In all the programs so far, the threads were independent of one another, with no communication between them. Most real-life applications, however, need threads to communicate with each other. So, in this chapter, we will look in detail at how different threads can communicate with one another, and explain how multiple threads working on the same data are synchronized. We will examine the hierarchical memory architecture of CUDA and how the different memories can be used to accelerate CUDA programs. The last part of this chapter explains a very useful application of CUDA: the dot product of vectors and matrix multiplication, which use all the concepts covered earlier.

The following topics will be covered in this chapter:

  • Thread calls
  • CUDA memory architecture
  • Global, local, and cache memory
  • Shared memory and thread synchronization
  • Atomic operations
  • Constant and texture memory
  • Dot product and a matrix multiplication example
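To give a flavor of how several of these topics fit together, here is a minimal sketch of a dot-product kernel that combines shared memory, thread synchronization with `__syncthreads()`, and an atomic operation. The kernel name and block size are illustrative choices; the chapter develops these ideas step by step later:

```cuda
#include <cstdio>

#define THREADS_PER_BLOCK 256  // illustrative block size; must be a power of two here

// Each block computes a partial dot product in shared memory,
// then thread 0 adds the block's partial result to *d_result atomically.
__global__ void dotProduct(const float *a, const float *b, float *d_result, int n)
{
    __shared__ float partial[THREADS_PER_BLOCK];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Each thread writes one element-wise product into shared memory
    partial[threadIdx.x] = (tid < n) ? a[tid] * b[tid] : 0.0f;
    __syncthreads();  // wait until every thread in the block has written its product

    // Tree reduction within the block: halve the number of active threads each step
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();  // keep all threads in step between reduction iterations
    }

    // One thread per block combines its block's result into the global sum
    if (threadIdx.x == 0)
        atomicAdd(d_result, partial[0]);
}
```

The `__syncthreads()` calls are essential: without them, some threads could read shared-memory slots before their neighbors have written them, and the reduction would produce wrong results.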
