Deep Learning Essentials
By Wei Di, Anurag Bhardwaj, Jianing Wei
Updated: 2021-06-30 19:18:45
Aspiring data scientists and machine learning experts who have limited or no exposure to deep learning will find this book to be very useful. If you are looking for a resource that gets you up and running with the fundamentals of deep learning and neural networks, this book is for you. As the models in the book are trained using the popular Python-based libraries such as TensorFlow and Keras, it would be useful to have sound programming knowledge of Python.
Latest chapters
- Leave a review – let other readers know what you think
- Other Books You May Enjoy
- Summary
- Code synthesis
- Visual reasoning
- Lip reading
Brand: 中圖公司
Listed: 2021-06-30 18:31:38
Publisher: Packt Publishing
The digital rights to this book are provided by 中圖公司, which has authorized Shanghai Yuewen Information Technology Co., Ltd. (上海閱文信息技術有限公司) to produce and distribute it.
Contents
- Cover Page
- Title Page
- Packt Upsell
- Why subscribe?
- PacktPub.com
- Contributors
- About the authors
- About the reviewer
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Why Deep Learning?
- What is AI and deep learning?
- The history and rise of deep learning
- Why deep learning?
- Advantages over traditional shallow methods
- Impact of deep learning
- The motivation of deep architecture
- The neural viewpoint
- The representation viewpoint
- Distributed feature representation
- Hierarchical feature representation
- Applications
- Lucrative applications
- Success stories
- Deep learning for business
- Future potential and challenges
- Summary
- Getting Yourself Ready for Deep Learning
- Basics of linear algebra
- Data representation
- Data operations
- Matrix properties
- Deep learning with GPU
- Deep learning hardware guide
- CPU cores
- CPU cache size
- RAM size
- Hard drive
- Cooling systems
- Deep learning software frameworks
- TensorFlow – a deep learning library
- Caffe
- MXNet
- Torch
- Theano
- Microsoft Cognitive Toolkit
- Keras
- Framework comparison
- Setting up deep learning on AWS
- Setup from scratch
- Setup using Docker
- Summary
- Getting Started with Neural Networks
- Multilayer perceptrons
- The input layer
- The output layer
- Hidden layers
- Activation functions
- Sigmoid or logistic function
- Tanh or hyperbolic tangent function
- ReLU
- Leaky ReLU and maxout
- Softmax
- Choosing the right activation function
- How a network learns
- Weight initialization
- Forward propagation
- Backpropagation
- Calculating errors
- Backpropagation
- Updating the network
- Automatic differentiation
- Vanishing and exploding gradients
- Optimization algorithms
- Regularization
- Deep learning models
- Convolutional Neural Networks
- Convolution
- Pooling/subsampling
- Fully connected layer
- Overall
- Restricted Boltzmann Machines
- Energy function
- Encoding and decoding
- Contrastive divergence (CD-k)
- Stacked/continuous RBM
- RBM versus Boltzmann Machines
- Recurrent neural networks (RNN/LSTM)
- Cells in RNN and unrolling
- Backpropagation through time
- Vanishing gradient and LSTM
- Cells and gates in LSTM
- Step 1 – The forget gate
- Step 2 – Updating memory/cell state
- Step 3 – The output gate
- Practical examples
- TensorFlow setup and key concepts
- Handwritten digits recognition
- Summary
- Deep Learning in Computer Vision
- Origins of CNNs
- Convolutional Neural Networks
- Data transformations
- Input preprocessing
- Data augmentation
- Network layers
- Convolution layer
- Pooling or subsampling layer
- Fully connected or dense layer
- Network initialization
- Regularization
- Loss functions
- Model visualization
- Handwritten digit classification example
- Fine-tuning CNNs
- Popular CNN architectures
- AlexNet
- Visual Geometry Group
- GoogLeNet
- ResNet
- Summary
- NLP – Vector Representation
- Traditional NLP
- Bag of words
- Weighting the terms – tf-idf
- Deep learning NLP
- Motivation and distributed representation
- Word embeddings
- Idea of word embeddings
- Advantages of distributed representation
- Problems of distributed representation
- Commonly used pre-trained word embeddings
- Word2Vec
- Basic idea of Word2Vec
- The word windows
- Generating training data
- Negative sampling
- Hierarchical softmax
- Other hyperparameters
- Skip-Gram model
- The input layer
- The hidden layer
- The output layer
- The loss function
- Continuous Bag-of-Words model
- Training a Word2Vec model using TensorFlow
- Using existing pre-trained Word2Vec embeddings
- Word2Vec from Google News
- Using the pre-trained Word2Vec embeddings
- Understanding GloVe
- FastText
- Applications
- Example use cases
- Fine-tuning
- Summary
- Advanced Natural Language Processing
- Deep learning for text
- Limitations of neural networks
- Recurrent neural networks
- RNN architectures
- Basic RNN model
- Training RNN is tough
- Long short-term memory network
- LSTM implementation with TensorFlow
- Applications
- Language modeling
- Sequence tagging
- Machine translation
- Seq2Seq inference
- Chatbots
- Summary
- Multimodality
- What is multimodality learning?
- Challenges of multimodality learning
- Representation
- Translation
- Alignment
- Fusion
- Co-learning
- Image captioning
- Show and tell
- Encoder
- Decoder
- Training
- Testing/inference
- Beam search
- Other types of approaches
- Datasets
- Evaluation
- BLEU
- ROUGE
- METEOR
- CIDEr
- SPICE
- Rank position
- Attention models
- Attention in NLP
- Attention in computer vision
- The difference between hard attention and soft attention
- Visual question answering
- Multi-source based self-driving
- Summary
- Deep Reinforcement Learning
- What is reinforcement learning (RL)?
- Problem setup
- Value learning-based algorithms
- Policy search-based algorithms
- Actor-critic-based algorithms
- Deep reinforcement learning
- Deep Q-network (DQN)
- Experience replay
- Target network
- Reward clipping
- Double-DQN
- Prioritized experience replay
- Dueling DQN
- Implementing reinforcement learning
- Simple reinforcement learning example
- Reinforcement learning with Q-learning example
- Summary
- Deep Learning Hacks
- Massaging your data
- Data cleaning
- Data augmentation
- Data normalization
- Tricks in training
- Weight initialization
- All-zero
- Random initialization
- ReLU initialization
- Xavier initialization
- Optimization
- Learning rate
- Mini-batch
- Clip gradients
- Choosing the loss function
- Multi-class classification
- Multi-class multi-label classification
- Regression
- Others
- Preventing overfitting
- Batch normalization
- Dropout
- Early stopping
- Fine-tuning
- When to use fine-tuning
- When not to use fine-tuning
- Tricks and techniques
- List of pre-trained models
- Model compression
- Summary
- Deep Learning Trends
- Recent models for deep learning
- Generative Adversarial Networks
- Capsule networks
- Novel applications
- Genomics
- Predictive medicine
- Clinical imaging
- Lip reading
- Visual reasoning
- Code synthesis
- Summary
- Other Books You May Enjoy
- Leave a review – let other readers know what you think