Hands-On Natural Language Processing with Python
By Rajesh Arumugam and Rajalingappaa Shanmugamani
Updated: 2021-08-13 16:02:28
Natural language processing (NLP) has found applications in various domains, such as web search, advertising, and customer service, and with the help of deep learning we can enhance its performance in these areas. Hands-On Natural Language Processing with Python teaches you how to leverage deep learning models to perform various NLP tasks, along with best practices for dealing with today's NLP challenges. To begin with, you will learn the core concepts of NLP and deep learning, such as Convolutional Neural Networks (CNNs), recurrent neural networks (RNNs), semantic embedding, Word2vec, and more. You will learn how to perform NLP tasks with neural networks, and how to train and deploy those networks in your NLP applications. You will become accustomed to using RNNs and CNNs in application areas such as text classification and sequence labeling, which are essential for sentiment analysis, customer service chatbots, and anomaly detection. You will gain the practical knowledge needed to implement deep learning in your linguistic applications using Python's popular deep learning library, TensorFlow. By the end of this book, you will be well versed in building deep learning-backed NLP applications and in overcoming NLP challenges with best practices developed by domain experts.
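To give a concrete flavour of the material, here is a minimal, illustrative sketch (not code from the book) of the kind of model described above: an LSTM text classifier built with TensorFlow's Keras API. The vocabulary size, sequence length, layer widths, and the randomly generated stand-in data are assumptions chosen only for this example.

```python
import numpy as np
import tensorflow as tf

# Illustrative sizes only; the book's chapters use their own datasets and settings.
VOCAB_SIZE = 10000   # assumed vocabulary size after tokenization
MAX_LEN = 100        # assumed padded sequence length
EMBED_DIM = 64       # assumed embedding width

# Embedding -> LSTM -> dense sigmoid output for binary (e.g. sentiment) classification.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random integer sequences stand in for tokenized, padded text; labels are 0/1.
x_train = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y_train = np.random.randint(0, 2, size=(32,))
model.fit(x_train, y_train, epochs=1, verbose=0)
model.summary()
```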
Brand: 中圖公司
Listed: 2021-08-13 15:17:27
Publisher: Packt Publishing
The digital rights to this book are provided by 中圖公司, which has licensed 上海閱文信息技術有限公司 to produce and distribute this edition.
Contents
- Cover
- Title Page
- Copyright and Credits
- Hands-On Natural Language Processing with Python
- Packt Upsell
- Why subscribe?
- PacktPub.com
- Foreword
- Contributors
- About the authors
- About the reviewer
- Packt is searching for authors like you
- Preface
- Who this book is for
- What this book covers
- To get the most out of this book
- Download the example code files
- Download the color images
- Conventions used
- Get in touch
- Reviews
- Getting Started
- Basic concepts and terminologies in NLP
- Text corpus or corpora
- Paragraph
- Sentences
- Phrases and words
- N-grams
- Bag-of-words
- Applications of NLP
- Analyzing sentiment
- Recognizing named entities
- Linking entities
- Translating text
- Natural Language Inference
- Semantic Role Labeling
- Relation extraction
- SQL query generation or semantic parsing
- Machine Comprehension
- Textual Entailment
- Coreference resolution
- Searching
- Question answering and chatbots
- Converting text-to-voice
- Converting voice-to-text
- Speaker identification
- Spoken dialog systems
- Other applications
- Summary
- Text Classification and POS Tagging Using NLTK
- Installing NLTK and its modules
- Text preprocessing and exploratory analysis
- Tokenization
- Stemming
- Removing stop words
- Exploratory analysis of text
- POS tagging
- What is POS tagging?
- Applications of POS tagging
- Training a POS tagger
- Training a sentiment classifier for movie reviews
- Training a bag-of-words classifier
- Summary
- Deep Learning and TensorFlow
- Deep learning
- Perceptron
- Activation functions
- Sigmoid
- Hyperbolic tangent
- Rectified linear unit
- Neural network
- One-hot encoding
- Softmax
- Cross-entropy
- Training neural networks
- Backpropagation
- Gradient descent
- Stochastic gradient descent
- Regularization techniques
- Dropout
- Batch normalization
- L1 and L2 normalization
- Convolutional Neural Network
- Kernel
- Max pooling
- Recurrent neural network
- Long-Short Term Memory
- TensorFlow
- General Purpose – Graphics Processing Unit
- CUDA
- cuDNN
- Installation
- Hello world!
- Adding two numbers
- TensorBoard
- The Keras library
- Summary
- Semantic Embedding Using Shallow Models
- Word vectors
- The classical approach
- Word2vec
- The CBOW model
- The skip-gram model
- A comparison of skip-gram and CBOW model architectures
- Building a skip-gram model
- Visualization of word embeddings
- From word to document embeddings
- Sentence2vec
- Doc2vec
- Visualization of document embeddings
- Summary
- Text Classification Using LSTM
- Data for text classification
- Topic modeling
- Topic modeling versus text classification
- Deep learning meta architecture for text classification
- Embedding layer
- Deep representation
- Fully connected part
- Identifying spam in YouTube video comments using RNNs
- Classifying news articles by topic using a CNN
- Transfer learning using GloVe embeddings
- Multi-label classification
- Binary relevance
- Deep learning for multi-label classification
- Attention networks for document classification
- Summary
- Searching and DeDuplicating Using CNNs
- Data
- Data description
- Training the model
- Encoding the text
- Modeling with CNN
- Training
- Inference
- Summary
- Named Entity Recognition Using Character LSTM
- NER with deep learning
- Data
- Model
- Word embeddings
- Walking through the code
- Input
- Word embedding
- The effects of different pretrained word embeddings
- Neural network architecture
- Decoding predictions
- The training step
- Scope for improvement
- Summary
- Text Generation and Summarization Using GRUs
- Generating text using RNNs
- Generating Linux kernel code with a GRU
- Text summarization
- Extractive summarization
- Summarization using gensim
- Abstractive summarization
- Encoder-decoder architecture
- Encoder
- Decoder
- News summarization using GRU
- Data preparation
- Encoder network
- Decoder network
- Sequence to sequence
- Building the graph
- Training
- Inference
- TensorBoard visualization
- State-of-the-art abstractive text summarization
- Summary
- Question-Answering and Chatbots Using Memory Networks
- The Question-Answering task
- Question-Answering datasets
- Memory networks for Question-Answering
- Memory network pipeline overview
- Writing a memory network in TensorFlow
- Class constructor
- Input module
- Question module
- Memory module
- Output module
- Putting it together
- Extending memory networks for dialog modeling
- Dialog datasets
- The bAbI dialog dataset
- Raw data format
- Writing a chatbot in TensorFlow
- Loading dialog datasets in the QA format
- Vectorizing the data
- Wrapping the memory network model in a chatbot class
- Class constructor
- Building a vocabulary for word embedding lookup
- Training the chatbot model
- Evaluating the chatbot on the testing set
- Interacting with the chatbot
- Putting it all together
- Example of an interactive conversation
- Literature on and related to memory networks
- Summary
- Machine Translation Using the Attention-Based Model
- Overview of machine translation
- Statistical machine translation
- English to French using NLTK SMT models
- Neural machine translation
- Encoder-decoder network
- Encoder-decoder with attention
- NMT for French to English using attention
- Data preparation
- Encoder network
- Decoder network
- Sequence-to-sequence model
- Building the graph
- Training
- Inference
- TensorBoard visualization
- Summary
- Speech Recognition Using DeepSpeech
- Overview of speech recognition
- Building an RNN model for speech recognition
- Audio signal representation
- LSTM model for spoken digit recognition
- TensorBoard visualization
- Speech to text using the DeepSpeech architecture
- Overview of the DeepSpeech model
- Speech recordings dataset
- Preprocessing the audio data
- Creating the model
- TensorBoard visualization
- State-of-the-art in speech recognition
- Summary
- Text-to-Speech Using Tacotron
- Overview of text to speech
- Naturalness versus intelligibility
- How is the performance of a TTS system evaluated?
- Traditional techniques – concatenative and parametric models
- A few reminders on spectrograms and the mel scale
- TTS in deep learning
- WaveNet in brief
- Tacotron
- The encoder
- The attention-based decoder
- The Griffin-Lim-based postprocessing module
- Details of the architecture
- Limitations
- Implementation of Tacotron with Keras
- The dataset
- Data preparation
- Preparation of text data
- Preparation of audio data
- Implementation of the architecture
- Pre-net
- Encoder and postprocessing CBHG
- Attention RNN
- Decoder RNN
- The attention mechanism
- Full architecture with attention
- Training and testing
- Summary
- Deploying Trained Models
- Increasing performance
- Quantizing the weights
- MobileNets
- TensorFlow Serving
- Exporting the trained model
- Serving the exported model
- Deploying in the cloud
- Amazon Web Services
- Google Cloud Platform
- Deploying on mobile devices
- iPhone
- Android
- Summary
- Other Books You May Enjoy
- Leave a review - let other readers know what you think