- Deep Learning with PyTorch
- Vishnu Subramanian
Loading PyTorch tensors as batches
It is common practice in deep learning and machine learning to batch samples of images, because modern graphics processing units (GPUs) and CPUs are optimized to run operations faster on a batch of images than on one image at a time. The batch size generally varies with the kind of GPU we use; each GPU has its own memory, which can range from 2 GB to 12 GB, and sometimes more for commercial GPUs. PyTorch provides the DataLoader class, which takes in a dataset and returns a batch of images. It abstracts away a lot of the complexity of batching, such as the use of multiple workers for applying transformations. The following code converts the previous train and valid datasets into data loaders:
train_data_gen = torch.utils.data.DataLoader(train, batch_size=64, num_workers=3)
valid_data_gen = torch.utils.data.DataLoader(valid, batch_size=64, num_workers=3)
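To see that the loader really yields batches, we can pull the first batch and inspect its shape. A minimal sketch, assuming train is the image dataset built earlier so that each element is an (image_tensor, label) pair; the 224x224 shape in the comment is an assumption based on a typical resize transform, not something fixed by DataLoader:

# Pull one batch from the loader and inspect it.
for images, labels in train_data_gen:
    print(images.size())   # e.g. torch.Size([64, 3, 224, 224]) if images were resized to 224x224 RGB
    print(labels.size())   # torch.Size([64]), one label per image in the batch
    break                  # stop after the first batch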
The DataLoader class provides many options; some of the most commonly used are as follows:
- shuffle: When set to True, the data is reshuffled every time the data loader is iterated over, which typically means once per epoch.
- num_workers: This controls how many subprocesses are used to load data in parallel. It is common practice to use fewer workers than the number of CPU cores available on your machine (see the sketch after this list).
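Putting these two options together, here is an illustrative sketch of a fuller loader configuration. The worker-count heuristic (leave one core free for the main process) and the choice to shuffle only the training set are assumptions for illustration, not part of the original example:

import os
import torch

# Assumption: use one fewer worker than the available CPU cores, with a floor of 1.
num_workers = max(1, (os.cpu_count() or 2) - 1)

# Shuffle training data so each epoch sees the samples in a different order.
train_data_gen = torch.utils.data.DataLoader(
    train, batch_size=64, shuffle=True, num_workers=num_workers)

# Validation data needs no shuffling; sample order does not affect the metrics.
valid_data_gen = torch.utils.data.DataLoader(
    valid, batch_size=64, shuffle=False, num_workers=num_workers)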