- Deep Learning with PyTorch
- Vishnu Subramanian
Hardware availability
Deep learning requires complex mathematical operations to be performed on millions, sometimes billions, of parameters. Existing CPUs take a long time to perform these kinds of operations, although this has improved over the last several years. A newer kind of hardware, the graphics processing unit (GPU), can complete these huge mathematical operations, such as matrix multiplications, orders of magnitude faster.
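As a minimal sketch (not from the book), the following times the same matrix multiplication on the CPU and, if a CUDA GPU is available, on the GPU; the matrix size and repeat count are arbitrary assumptions chosen for illustration:

```python
import time

import torch


def time_matmul(device: str, size: int = 4096, repeats: int = 10) -> float:
    """Average time for one square matrix multiplication on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    torch.matmul(a, b)  # warm-up so one-time setup cost is not measured
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # GPU kernels run asynchronously; wait for them
    return (time.time() - start) / repeats


print(f"CPU: {time_matmul('cpu'):.4f} s per matmul")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.4f} s per matmul")
```

On typical hardware the GPU timing comes out far lower, which is the gap the rest of this section is about.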
GPUs were initially built for the gaming industry by companies such as Nvidia and AMD. It turned out that this hardware is extremely efficient not only for rendering high-quality video games, but also for speeding up DL algorithms. On one recent Nvidia GPU, the 1080 Ti, building an image-classification system on top of the ImageNet dataset takes a few days, whereas previously it could have taken around a month.
If you are planning to buy hardware for running deep learning, I would recommend choosing an Nvidia GPU that fits your budget, with a good amount of memory. Remember, your computer's memory and GPU memory are two different things. The 1080 Ti comes with 11 GB of memory and costs around $700.
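If you want to check how much memory your card has from within PyTorch, a short sketch like the following works (assuming a CUDA-capable GPU and a CUDA build of PyTorch):

```python
import torch

# GPU memory is separate from system RAM; query the card's own total memory.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}")
    print(f"GPU memory: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA-capable GPU detected.")
```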
You can also use various cloud providers such as AWS, Google Cloud, or Floyd (a company that offers GPU machines optimized for DL). Using a cloud provider is economical if you are just starting with DL, or if you are setting up machines for organizational use where you may have more financial freedom.
The following image shows benchmarks comparing performance between CPUs and GPUs:
