- Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA
- Bhaumik Vaidya
Threads, Synchronization, and Memory
In the last chapter, we saw how to write CUDA programs that leverage the processing capabilities of a GPU by executing multiple threads and blocks in parallel. In all the programs so far, the threads were independent of one another and did not communicate with each other. Most real-life applications, however, need threads to communicate and cooperate on shared data. So, in this chapter, we will look in detail at how communication between different threads can be done, and explain the synchronization of multiple threads working on the same data. We will examine the hierarchical memory architecture of CUDA and how the different memories can be used to accelerate CUDA programs. The last part of this chapter explains two very useful CUDA applications, the dot product of vectors and matrix multiplication, which use all the concepts covered earlier.
The following topics will be covered in this chapter:
- Thread calls
- CUDA memory architecture
- Global, local, and cache memory
- Shared memory and thread synchronization
- Atomic operations
- Constant and texture memory
- Dot product and a matrix multiplication example
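To preview how the topics above fit together, here is a minimal sketch (not the book's own listing) of a dot-product kernel combining three mechanisms this chapter develops: shared memory, thread synchronization with `__syncthreads()`, and atomic operations. The sizes and names are illustrative assumptions:

```cuda
#include <stdio.h>

#define N 1024                 // assumed vector length
#define THREADS_PER_BLOCK 256  // assumed block size (power of two)

// Each block computes a partial dot product in shared memory,
// then thread 0 adds the block's partial sum to the result atomically.
__global__ void dot(const float *a, const float *b, float *result) {
    __shared__ float cache[THREADS_PER_BLOCK];
    int tid = blockIdx.x * blockDim.x + threadIdx.x;

    // Each thread writes one elementwise product into shared memory
    cache[threadIdx.x] = (tid < N) ? a[tid] * b[tid] : 0.0f;
    __syncthreads();  // wait until every product in this block is written

    // Tree reduction within the block: halve the active threads each step
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            cache[threadIdx.x] += cache[threadIdx.x + stride];
        __syncthreads();
    }

    // One thread per block combines the partial sums across blocks
    if (threadIdx.x == 0)
        atomicAdd(result, cache[0]);
}
```

The `__syncthreads()` calls ensure no thread reads a shared-memory slot before its neighbor has written it, and `atomicAdd` prevents blocks from racing on the final accumulator; both are covered in detail in the sections that follow.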