Deep Learning with PyTorch
This book is for machine learning engineers, data analysts, and data scientists who are interested in deep learning and are looking to explore implementing advanced algorithms in PyTorch. Some knowledge of machine learning is helpful but not mandatory. A working knowledge of Python programming is expected.
Contents (248 sections)
- Cover Page
- Title Page
- Dedication
- Packt Upsell
- Why subscribe?
- PacktPub.com
- Contributors
- About the author
- About the reviewer
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Getting Started with Deep Learning Using PyTorch
- Artificial intelligence
- The history of AI
- Machine learning
- Examples of machine learning in real life
- Deep learning
- Applications of deep learning
- Hype associated with deep learning
- The history of deep learning
- Why now?
- Hardware availability
- Data and algorithms
- Deep learning frameworks
- PyTorch
- Summary
- Building Blocks of Neural Networks
- Installing PyTorch
- Our first neural network
- Data preparation
- Scalar (0-D tensors)
- Vectors (1-D tensors)
- Matrix (2-D tensors)
- 3-D tensors
- Slicing tensors
- 4-D tensors
- 5-D tensors
- Tensors on GPU
- Variables
- Creating data for our neural network
- Creating learnable parameters
- Neural network model
- Network implementation
- Loss function
- Optimize the neural network
- Loading data
- Dataset class
- DataLoader class
- Summary
- Diving Deep into Neural Networks
- Deep dive into the building blocks of neural networks
- Layers – fundamental blocks of neural networks
- Non-linear activations
- Sigmoid
- Tanh
- ReLU
- Leaky ReLU
- PyTorch non-linear activations
- The PyTorch way of building deep learning algorithms
- Model architecture for different machine learning problems
- Loss functions
- Optimizing network architecture
- Image classification using deep learning
- Loading data into PyTorch tensors
- Loading PyTorch tensors as batches
- Building the network architecture
- Training the model
- Summary
- Fundamentals of Machine Learning
- Three kinds of machine learning problems
- Supervised learning
- Unsupervised learning
- Reinforcement learning
- Machine learning glossary
- Evaluating machine learning models
- Training, validation, and test split
- Simple holdout validation
- K-fold validation
- K-fold validation with shuffling
- Data representativeness
- Time sensitivity
- Data redundancy
- Data preprocessing and feature engineering
- Vectorization
- Value normalization
- Handling missing values
- Feature engineering
- Overfitting and underfitting
- Getting more data
- Reducing the size of the network
- Applying weight regularization
- Dropout
- Underfitting
- Workflow of a machine learning project
- Problem definition and dataset creation
- Measure of success
- Evaluation protocol
- Prepare your data
- Baseline model
- Large enough model to overfit
- Applying regularization
- Learning rate picking strategies
- Summary
- Deep Learning for Computer Vision
- Introduction to neural networks
- MNIST – getting data
- Building a CNN model from scratch
- Conv2d
- Pooling
- Nonlinear activation – ReLU
- View
- Linear layer
- Training the model
- Classifying dogs and cats – CNN from scratch
- Classifying dogs and cats using transfer learning
- Creating and exploring a VGG16 model
- Freezing the layers
- Fine-tuning VGG16
- Training the VGG16 model
- Calculating pre-convoluted features
- Understanding what a CNN model learns
- Visualizing outputs from intermediate layers
- Visualizing weights of the CNN layer
- Summary
- Deep Learning with Sequence Data and Text
- Working with text data
- Tokenization
- Converting text into characters
- Converting text into words
- N-gram representation
- Vectorization
- One-hot encoding
- Word embedding
- Training word embedding by building a sentiment classifier
- Downloading IMDB data and performing text tokenization
- torchtext.data
- torchtext.datasets
- Building vocabulary
- Generate batches of vectors
- Creating a network model with embedding
- Training the model
- Using pretrained word embeddings
- Downloading the embeddings
- Loading the embeddings in the model
- Freeze the embedding layer weights
- Recurrent neural networks
- Understanding how RNN works with an example
- LSTM
- Long-term dependency
- LSTM networks
- Preparing the data
- Creating batches
- Creating the network
- Training the model
- Convolutional network on sequence data
- Understanding one-dimensional convolution for sequence data
- Creating the network
- Training the model
- Summary
- Generative Networks
- Neural style transfer
- Loading the data
- Creating the VGG model
- Content loss
- Style loss
- Extracting the losses
- Creating a loss function for each layer
- Creating the optimizer
- Training
- Generative adversarial networks
- Deep convolutional GAN
- Defining the generator network
- Transposed convolutions
- Batch normalization
- Generator
- Defining the discriminator network
- Defining loss and optimizer
- Training the discriminator
- Training the discriminator with real images
- Training the discriminator with fake images
- Training the generator network
- Training the complete network
- Inspecting the generated images
- Language modeling
- Preparing the data
- Generating the batches
- Batches
- Backpropagation through time
- Defining a model based on LSTM
- Defining the train and evaluate functions
- Training the model
- Summary
- Modern Network Architectures
- Modern network architectures
- ResNet
- Creating PyTorch datasets
- Creating loaders for training and validation
- Creating a ResNet model
- Extracting convolutional features
- Creating a custom PyTorch dataset class for the pre-convoluted features and loader
- Creating a simple linear model
- Training and validating the model
- Inception
- Creating an Inception model
- Extracting convolutional features using register_forward_hook
- Creating a new dataset for the convoluted features
- Creating a fully connected model
- Training and validating the model
- Densely connected convolutional networks – DenseNet
- DenseBlock
- DenseLayer
- Creating a DenseNet model
- Extracting DenseNet features
- Creating a dataset and loaders
- Creating and training a fully connected model
- Model ensembling
- Creating models
- Extracting the image features
- Creating a custom dataset along with data loaders
- Creating an ensembling model
- Training and validating the model
- Encoder-decoder architecture
- Encoder
- Decoder
- Summary
- What Next?
- What next?
- Overview
- Interesting ideas to explore
- Object detection
- Image segmentation
- OpenNMT in PyTorch
- AllenNLP
- fast.ai – making neural nets uncool again
- Open Neural Network Exchange
- How to keep yourself updated
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think