Hands-On Deep Learning Architectures with Python
Deep learning architectures are composed of multilevel nonlinear operations that represent high-level abstractions; this allows you to learn useful feature representations from the data. This book will help you learn and implement deep learning architectures to resolve various deep learning research problems. Hands-On Deep Learning Architectures with Python explains the essential learning algorithms used for deep and shallow architectures. Packed with practical implementations and ideas to help you build efficient artificial intelligence (AI) systems, this book will help you learn how neural networks play a major role in building deep architectures. You will understand various deep learning architectures (such as AlexNet, VGGNet, and GoogLeNet) with easy-to-follow code and diagrams. In addition to this, the book will also guide you in building and training various deep architectures such as Boltzmann machines, autoencoders, convolutional neural networks (CNNs), recurrent neural networks (RNNs), natural language processing (NLP) models, GANs, and more, all with practical implementations. By the end of this book, you will be able to construct deep models using popular frameworks and datasets with the required design patterns for each architecture. You will be ready to explore the potential of deep architectures in today's world.
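As a taste of the hands-on approach described above, here is a minimal sketch (not taken from the book) of the kind of model the early chapters build: a small feedforward network trained on the Fashion-MNIST data using Keras's Sequential API. The layer sizes, optimizer, and epoch count are illustrative assumptions, not the book's exact code.

```python
# Minimal sketch of a deep feedforward network on Fashion-MNIST with Keras.
# Layer sizes, optimizer, and epoch count are illustrative assumptions,
# not the book's exact implementation.
import tensorflow as tf

# Load and normalize the Fashion-MNIST images to the [0, 1] range.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network built with the Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784-vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),                     # regularization against overfitting
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 clothing classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train briefly with a held-out validation split, then report test accuracy.
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```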
Contents (202 chapters)
- Cover Page
- Title Page
- Copyright and Credits
- Hands-On Deep Learning Architectures with Python
- About Packt
- Why subscribe?
- Packt.com
- Contributors
- About the authors
- About the reviewers
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Section 1: The Elements of Deep Learning
- Getting Started with Deep Learning
- Artificial intelligence
- Machine learning
- Supervised learning
- Regression
- Classification
- Unsupervised learning
- Reinforcement learning
- Deep learning
- Applications of deep learning
- Self-driving cars
- Image translation
- Machine translation
- Encoder-decoder structure
- Chatbots
- Building the fundamentals
- Biological inspiration
- ANNs
- Activation functions
- Linear activation
- Sigmoid activation
- Tanh activation
- ReLU activation
- Softmax activation
- TensorFlow and Keras
- Setting up the environment
- Introduction to TensorFlow
- Installing TensorFlow CPU
- Installing TensorFlow GPU
- Testing your installation
- Getting to know TensorFlow
- Building a graph
- Creating a Session
- Introduction to Keras
- Sequential API
- Functional API
- Summary
- Deep Feedforward Networks
- Evolutionary path to DFNs
- Architecture of DFN
- Training
- Loss function
- Regression loss
- Mean squared error (MSE)
- Mean absolute error
- Classification loss
- Cross entropy
- Gradient descent
- Types of gradient descent
- Batch gradient descent
- Stochastic gradient descent
- Mini-batch gradient descent
- Backpropagation
- Optimizers
- Train, test, and validation
- Training set
- Validation set
- Test set
- Overfitting and regularization
- L1 and L2 regularization
- Dropout
- Early stopping
- Building our first DFN
- MNIST fashion data
- Getting the data
- Visualizing data
- Normalizing and splitting data
- Model parameters
- One-hot encoding
- Building a model graph
- Adding placeholders
- Adding layers
- Adding loss function
- Adding an optimizer
- Calculating accuracy
- Running a session to train
- The easy way
- Summary
- Restricted Boltzmann Machines and Autoencoders
- What are RBMs?
- The evolution path of RBMs
- RBM architectures and applications
- RBMs and their implementation in TensorFlow
- RBMs for movie recommendation
- DBNs and their implementation in TensorFlow
- DBNs for image classification
- What are autoencoders?
- The evolution path of autoencoders
- Autoencoder architectures and applications
- Vanilla autoencoders
- Deep autoencoders
- Sparse autoencoders
- Denoising autoencoders
- Contractive autoencoders
- Summary
- Exercise
- Acknowledgements
- Section 2: Convolutional Neural Networks
- CNN Architecture
- Problem with deep feedforward networks
- Evolution path to CNNs
- Architecture of CNNs
- The input layer
- The convolutional layer
- The maxpooling layer
- The fully connected layer
- Image classification with CNNs
- VGGNet
- InceptionNet
- ResNet
- Building our first CNN
- CIFAR
- Data loading and pre-processing
- Object detection with CNN
- R-CNN
- Faster R-CNN
- You Only Look Once (YOLO)
- Single Shot Multibox Detector
- TensorFlow object detection zoo
- Summary
- Mobile Neural Networks and CNNs
- Evolution path to MobileNets
- Architecture of MobileNets
- Depth-wise separable convolution
- The need for depth-wise separable convolution
- Structure of MobileNet
- MobileNet with Keras
- MobileNetV2
- Motivation behind MobileNetV2
- Structure of MobileNetV2
- Linear bottleneck layer
- Expansion layer
- Inverted residual block
- Overall architecture
- Implementing MobileNetV2
- Comparing the two MobileNets
- SSD MobileNetV2
- Summary
- Section 3: Sequence Modeling
- Recurrent Neural Networks
- What are RNNs?
- The evolution path of RNNs
- RNN architectures and applications
- Architectures by input and output
- Vanilla RNNs
- Vanilla RNNs for text generation
- LSTM RNNs
- LSTM RNNs for text generation
- GRU RNNs
- GRU RNNs for stock price prediction
- Bidirectional RNNs
- Bidirectional RNNs for sentiment classification
- Summary
- Section 4: Generative Adversarial Networks (GANs)
- Generative Adversarial Networks
- What are GANs?
- Generative models
- Adversarial – training in an adversarial manner
- The evolution path of GANs
- GAN architectures and implementations
- Vanilla GANs
- Deep convolutional GANs
- Conditional GANs
- InfoGANs
- Summary
- Section 5: The Future of Deep Learning and Advanced Artificial Intelligence
- New Trends of Deep Learning
- New trends in deep learning
- Bayesian neural networks
- What our deep learning models don't know – uncertainty
- How we can obtain uncertainty information – Bayesian neural networks
- Capsule networks
- What convolutional neural networks fail to do
- Capsule networks – incorporating orientational and relative spatial relationships
- Meta-learning
- One big challenge in deep learning – training data
- Meta-learning – learning to learn
- Metric-based meta-learning
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think