Deep Learning Quick Reference
If you are a data scientist or a machine learning expert, this book is a useful reference for training advanced machine learning and deep learning models. You can also turn to it when you are stuck partway through building a neural network model and need immediate help to complete the task smoothly. Some prior knowledge of Python and a solid grasp of the basics of machine learning are required.
Table of Contents (322 sections)
- Cover Page
- Title Page
- Dedication
- Packt Upsell
- Why subscribe?
- PacktPub.com
- Foreword
- Contributors
- About the author
- About the reviewer
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Conventions used
- Get in touch
- Reviews
- The Building Blocks of Deep Learning
- The deep neural network architectures
- Neurons
- The neuron linear function
- Neuron activation functions
- The loss and cost functions in deep learning
- The forward propagation process
- The backpropagation function
- Stochastic and minibatch gradient descents
- Optimization algorithms for deep learning
- Using momentum with gradient descent
- The RMSProp algorithm
- The Adam optimizer
- Deep learning frameworks
- What is TensorFlow?
- What is Keras?
- Popular alternatives to TensorFlow
- GPU requirements for TensorFlow and Keras
- Installing Nvidia CUDA Toolkit and cuDNN
- Installing Python
- Installing TensorFlow and Keras
- Building datasets for deep learning
- Bias and variance errors in deep learning
- The train, val, and test datasets
- Managing bias and variance in deep neural networks
- K-Fold cross-validation
- Summary
- Using Deep Learning to Solve Regression Problems
- Regression analysis and deep neural networks
- Benefits of using a neural network for regression
- Drawbacks to consider when using a neural network for regression
- Using deep neural networks for regression
- How to plan a machine learning problem
- Defining our example problem
- Loading the dataset
- Defining our cost function
- Building an MLP in Keras
- Input layer shape
- Hidden layer shape
- Output layer shape
- Neural network architecture
- Training the Keras model
- Measuring the performance of our model
- Building a deep neural network in Keras
- Measuring the deep neural network performance
- Tuning the model hyperparameters
- Saving and loading a trained Keras model
- Summary
- Monitoring Network Training Using TensorBoard
- A brief overview of TensorBoard
- Setting up TensorBoard
- Installing TensorBoard
- How TensorBoard talks to Keras/TensorFlow
- Running TensorBoard
- Connecting Keras to TensorBoard
- Introducing Keras callbacks
- Creating a TensorBoard callback
- Using TensorBoard
- Visualizing training
- Visualizing network graphs
- Visualizing a broken network
- Summary
- Using Deep Learning to Solve Binary Classification Problems
- Binary classification and deep neural networks
- Benefits of deep neural networks
- Drawbacks of deep neural networks
- Case study – epileptic seizure recognition
- Defining our dataset
- Loading data
- Model inputs and outputs
- The cost function
- Using metrics to assess the performance
- Building a binary classifier in Keras
- The input layer
- The hidden layers
- What happens if we use too many neurons?
- What happens if we use too few neurons?
- Choosing a hidden layer architecture
- Coding the hidden layers for our example
- The output layer
- Putting it all together
- Training our model
- Using the checkpoint callback in Keras
- Measuring ROC AUC in a custom callback
- Measuring precision, recall, and f1-score
- Summary
- Using Keras to Solve Multiclass Classification Problems
- Multiclass classification and deep neural networks
- Benefits
- Drawbacks
- Case study - handwritten digit classification
- Problem definition
- Model inputs and outputs
- Flattening inputs
- Categorical outputs
- Cost function
- Metrics
- Building a multiclass classifier in Keras
- Loading MNIST
- Input layer
- Hidden layers
- Output layer
- Softmax activation
- Putting it all together
- Training
- Using scikit-learn metrics with multiclass models
- Controlling variance with dropout
- Controlling variance with regularization
- Summary
- Hyperparameter Optimization
- Should network architecture be considered a hyperparameter?
- Finding a giant and then standing on his shoulders
- Adding until you overfit then regularizing
- Practical advice
- Which hyperparameters should we optimize?
- Hyperparameter optimization strategies
- Common strategies
- Using random search with scikit-learn
- Hyperband
- Summary
- Training a CNN from Scratch
- Introducing convolutions
- How do convolutional layers work?
- Convolutions in three dimensions
- A layer of convolutions
- Benefits of convolutional layers
- Parameter sharing
- Local connectivity
- Pooling layers
- Batch normalization
- Training a convolutional neural network in Keras
- Input
- Output
- Cost function and metrics
- Convolutional layers
- Fully connected layers
- Multi-GPU models in Keras
- Training
- Using data augmentation
- The Keras ImageDataGenerator
- Training with a generator
- Summary
- Transfer Learning with Pretrained CNNs
- Overview of transfer learning
- When transfer learning should be used
- Limited data
- Common problem domains
- The impact of source/target volume and similarity
- More data is always beneficial
- Source/target domain similarity
- Transfer learning in Keras
- Target domain overview
- Source domain overview
- Source network architecture
- Transfer network architecture
- Data preparation
- Data input
- Training (feature extraction)
- Training (fine-tuning)
- Summary
- Training an RNN from scratch
- Introducing recurrent neural networks
- What makes a neuron recurrent?
- Long Short Term Memory Networks
- Backpropagation through time
- A refresher on time series problems
- Stock and flow
- ARIMA and ARIMAX forecasting
- Using an LSTM for time series prediction
- Data preparation
- Loading the dataset
- Slicing train and test by date
- Differencing a time series
- Scaling a time series
- Creating a lagged training set
- Input shape
- Data preparation glue
- Network output
- Network architecture
- Stateful versus stateless LSTMs
- Training
- Measuring performance
- Summary
- Training LSTMs with Word Embeddings from Scratch
- An introduction to natural language processing
- Semantic analysis
- Document classification
- Vectorizing text
- NLP terminology
- Bag of Words models
- Stemming, lemmatization, and stopwords
- Count and TF-IDF vectorization
- Word embedding
- A quick example
- Learning word embeddings with prediction
- Learning word embeddings with counting
- Getting from words to documents
- Keras embedding layer
- 1D CNNs for natural language processing
- Case studies for document classification
- Sentiment analysis with Keras embedding layers and LSTMs
- Preparing the data
- Input and embedding layer architecture
- LSTM layer
- Output layer
- Putting it all together
- Training the network
- Performance
- Document classification with and without GloVe
- Preparing the data
- Loading pretrained word vectors
- Input and embedding layer architecture
- Without GloVe vectors
- With GloVe vectors
- Convolution layers
- Output layer
- Putting it all together
- Training
- Performance
- Summary
- Training Seq2Seq Models
- Sequence-to-sequence models
- Sequence-to-sequence model applications
- Sequence-to-sequence model architecture
- Encoders and decoders
- Characters versus words
- Teacher forcing
- Attention
- Translation metrics
- Machine translation
- Understanding the data
- Loading data
- One-hot encoding
- Training network architecture
- Network architecture (for inference)
- Putting it all together
- Training
- Inference
- Loading data
- Creating reverse indices
- Loading models
- Translating a sequence
- Decoding a sequence
- Example translations
- Summary
- Using Deep Reinforcement Learning
- Reinforcement learning overview
- Markov Decision Processes
- Q Learning
- Infinite state space
- Deep Q networks
- Online learning
- Memory and experience replay
- Exploitation versus exploration
- DeepMind
- The Keras reinforcement learning framework
- Installing Keras-RL
- Installing OpenAI gym
- Using OpenAI gym
- Building a reinforcement learning agent in Keras
- CartPole
- CartPole neural network architecture
- Memory
- Policy
- Agent
- Training
- Results
- Lunar Lander
- Lunar Lander network architecture
- Memory and policy
- Agent
- Training
- Results
- Summary
- Generative Adversarial Networks
- An overview of the GAN
- Deep Convolutional GAN architecture
- Adversarial training architecture
- Generator architecture
- Discriminator architecture
- Stacked training
- Step 1 – train the discriminator
- Step 2 – train the stack
- How GANs can fail
- Stability
- Mode collapse
- Safe choices for GAN
- Generating MNIST images using a Keras GAN
- Loading the dataset
- Building the generator
- Building the discriminator
- Building the stacked model
- The training loop
- Model evaluation
- Generating CIFAR-10 images using a Keras GAN
- Loading CIFAR-10
- Building the generator
- Building the discriminator
- The training loop
- Model evaluation
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think