Deep Learning Quick Reference
By Mike Bernico
Last updated: 2021-06-24 18:40:56
If you are a data scientist or a machine learning expert, this book is a useful reference for training advanced machine learning and deep learning models. You can also turn to it if you are stuck partway through neural network modeling and need immediate help in completing the task smoothly. Some prior knowledge of Python and a firm grasp of the basics of machine learning are required.
Brand: 中圖公司
Listed on: 2021-06-24 17:55:43
Publisher: Packt Publishing
The digital rights to this book are provided by 中圖公司 and licensed to 上海閱文信息技術有限公司 for production and distribution.
- coverpage
- Title Page
- Dedication
- Packt Upsell
- Why subscribe?
- PacktPub.com
- Foreword
- Contributors
- About the author
- About the reviewer
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Conventions used
- Get in touch
- Reviews
- The Building Blocks of Deep Learning
- The deep neural network architectures
- Neurons
- The neuron linear function
- Neuron activation functions
- The loss and cost functions in deep learning
- The forward propagation process
- The back propagation function
- Stochastic and minibatch gradient descents
- Optimization algorithms for deep learning
- Using momentum with gradient descent
- The RMSProp algorithm
- The Adam optimizer
- Deep learning frameworks
- What is TensorFlow?
- What is Keras?
- Popular alternatives to TensorFlow
- GPU requirements for TensorFlow and Keras
- Installing Nvidia CUDA Toolkit and cuDNN
- Installing Python
- Installing TensorFlow and Keras
- Building datasets for deep learning
- Bias and variance errors in deep learning
- The train, val, and test datasets
- Managing bias and variance in deep neural networks
- K-Fold cross-validation
- Summary
- Using Deep Learning to Solve Regression Problems
- Regression analysis and deep neural networks
- Benefits of using a neural network for regression
- Drawbacks to consider when using a neural network for regression
- Using deep neural networks for regression
- How to plan a machine learning problem
- Defining our example problem
- Loading the dataset
- Defining our cost function
- Building an MLP in Keras
- Input layer shape
- Hidden layer shape
- Output layer shape
- Neural network architecture
- Training the Keras model
- Measuring the performance of our model
- Building a deep neural network in Keras
- Measuring the deep neural network performance
- Tuning the model hyperparameters
- Saving and loading a trained Keras model
- Summary
- Monitoring Network Training Using TensorBoard
- A brief overview of TensorBoard
- Setting up TensorBoard
- Installing TensorBoard
- How TensorBoard talks to Keras/TensorFlow
- Running TensorBoard
- Connecting Keras to TensorBoard
- Introducing Keras callbacks
- Creating a TensorBoard callback
- Using TensorBoard
- Visualizing training
- Visualizing network graphs
- Visualizing a broken network
- Summary
- Using Deep Learning to Solve Binary Classification Problems
- Binary classification and deep neural networks
- Benefits of deep neural networks
- Drawbacks of deep neural networks
- Case study – epileptic seizure recognition
- Defining our dataset
- Loading data
- Model inputs and outputs
- The cost function
- Using metrics to assess the performance
- Building a binary classifier in Keras
- The input layer
- The hidden layers
- What happens if we use too many neurons?
- What happens if we use too few neurons?
- Choosing a hidden layer architecture
- Coding the hidden layers for our example
- The output layer
- Putting it all together
- Training our model
- Using the checkpoint callback in Keras
- Measuring ROC AUC in a custom callback
- Measuring precision, recall, and f1-score
- Summary
- Using Keras to Solve Multiclass Classification Problems
- Multiclass classification and deep neural networks
- Benefits
- Drawbacks
- Case study - handwritten digit classification
- Problem definition
- Model inputs and outputs
- Flattening inputs
- Categorical outputs
- Cost function
- Metrics
- Building a multiclass classifier in Keras
- Loading MNIST
- Input layer
- Hidden layers
- Output layer
- Softmax activation
- Putting it all together
- Training
- Using scikit-learn metrics with multiclass models
- Controlling variance with dropout
- Controlling variance with regularization
- Summary
- Hyperparameter Optimization
- Should network architecture be considered a hyperparameter?
- Finding a giant and then standing on his shoulders
- Adding until you overfit, then regularizing
- Practical advice
- Which hyperparameters should we optimize?
- Hyperparameter optimization strategies
- Common strategies
- Using random search with scikit-learn
- Hyperband
- Summary
- Training a CNN from Scratch
- Introducing convolutions
- How do convolutional layers work?
- Convolutions in three dimensions
- A layer of convolutions
- Benefits of convolutional layers
- Parameter sharing
- Local connectivity
- Pooling layers
- Batch normalization
- Training a convolutional neural network in Keras
- Input
- Output
- Cost function and metrics
- Convolutional layers
- Fully connected layers
- Multi-GPU models in Keras
- Training
- Using data augmentation
- The Keras ImageDataGenerator
- Training with a generator
- Summary
- Transfer Learning with Pretrained CNNs
- Overview of transfer learning
- When transfer learning should be used
- Limited data
- Common problem domains
- The impact of source/target volume and similarity
- More data is always beneficial
- Source/target domain similarity
- Transfer learning in Keras
- Target domain overview
- Source domain overview
- Source network architecture
- Transfer network architecture
- Data preparation
- Data input
- Training (feature extraction)
- Training (fine-tuning)
- Summary
- Training an RNN from scratch
- Introducing recurrent neural networks
- What makes a neuron recurrent?
- Long Short Term Memory Networks
- Backpropagation through time
- A refresher on time series problems
- Stock and flow
- ARIMA and ARIMAX forecasting
- Using an LSTM for time series prediction
- Data preparation
- Loading the dataset
- Slicing train and test by date
- Differencing a time series
- Scaling a time series
- Creating a lagged training set
- Input shape
- Data preparation glue
- Network output
- Network architecture
- Stateful versus stateless LSTMs
- Training
- Measuring performance
- Summary
- Training LSTMs with Word Embeddings from Scratch
- An introduction to natural language processing
- Semantic analysis
- Document classification
- Vectorizing text
- NLP terminology
- Bag of Word models
- Stemming, lemmatization, and stopwords
- Count and TF-IDF vectorization
- Word embedding
- A quick example
- Learning word embeddings with prediction
- Learning word embeddings with counting
- Getting from words to documents
- Keras embedding layer
- 1D CNNs for natural language processing
- Case studies for document classifications
- Sentiment analysis with Keras embedding layers and LSTMs
- Preparing the data
- Input and embedding layer architecture
- LSTM layer
- Output layer
- Putting it all together
- Training the network
- Performance
- Document classification with and without GloVe
- Preparing the data
- Loading pretrained word vectors
- Input and embedding layer architecture
- Without GloVe vectors
- With GloVe vectors
- Convolution layers
- Output layer
- Putting it all together
- Training
- Performance
- Summary
- Training Seq2Seq Models
- Sequence-to-sequence models
- Sequence-to-sequence model applications
- Sequence-to-sequence model architecture
- Encoders and decoders
- Characters versus words
- Teacher forcing
- Attention
- Translation metrics
- Machine translation
- Understanding the data
- Loading data
- One hot encoding
- Training network architecture
- Network architecture (for inference)
- Putting it all together
- Training
- Inference
- Loading data
- Creating reverse indices
- Loading models
- Translating a sequence
- Decoding a sequence
- Example translations
- Summary
- Using Deep Reinforcement Learning
- Reinforcement learning overview
- Markov Decision Processes
- Q Learning
- Infinite state space
- Deep Q networks
- Online learning
- Memory and experience replay
- Exploitation versus exploration
- DeepMind
- The Keras reinforcement learning framework
- Installing Keras-RL
- Installing OpenAI gym
- Using OpenAI gym
- Building a reinforcement learning agent in Keras
- CartPole
- CartPole neural network architecture
- Memory
- Policy
- Agent
- Training
- Results
- Lunar Lander
- Lunar Lander network architecture
- Memory and policy
- Agent
- Training
- Results
- Summary
- Generative Adversarial Networks
- An overview of the GAN
- Deep Convolutional GAN architecture
- Adversarial training architecture
- Generator architecture
- Discriminator architecture
- Stacked training
- Step 1 – train the discriminator
- Step 2 – train the stack
- How GANs can fail
- Stability
- Mode collapse
- Safe choices for GAN
- Generating MNIST images using a Keras GAN
- Loading the dataset
- Building the generator
- Building the discriminator
- Building the stacked model
- The training loop
- Model evaluation
- Generating CIFAR-10 images using a Keras GAN
- Loading CIFAR-10
- Building the generator
- Building the discriminator
- The training loop
- Model evaluation
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think