Brand: 中圖公司
Listed: 2021-07-02 18:25:20
Updated: 2021-07-02 19:48:05
Publisher: Packt Publishing
The digital rights to this book are provided by 中圖公司 and licensed to 上海閱文信息技術(shù)有限公司 for production and distribution.
- Cover Page
- Title Page
- Credits
- Foreword
- About the Author
- About the Reviewers
- www.PacktPub.com
- Why subscribe?
- Customer Feedback
- Dedication
- Preface
- What this book covers
- What you need for this book
- Who this book is for
- Conventions
- Reader feedback
- Customer support
- Downloading the example code
- Errata
- Piracy
- Questions
- A Taste of Machine Learning
- Getting started with machine learning
- Problems that machine learning can solve
- Getting started with Python
- Getting started with OpenCV
- Installation
- Getting the latest code for this book
- Getting to grips with Python's Anaconda distribution
- Installing OpenCV in a conda environment
- Verifying the installation
- Getting a glimpse of OpenCV's ML module
- Summary
- Working with Data in OpenCV and Python
- Understanding the machine learning workflow
- Dealing with data using OpenCV and Python
- Starting a new IPython or Jupyter session
- Dealing with data using Python's NumPy package
- Importing NumPy
- Understanding NumPy arrays
- Accessing single array elements by indexing
- Creating multidimensional arrays
- Loading external datasets in Python
- Visualizing the data using Matplotlib
- Importing Matplotlib
- Producing a simple plot
- Visualizing data from an external dataset
- Dealing with data using OpenCV's TrainData container in C++
- Summary
- First Steps in Supervised Learning
- Understanding supervised learning
- Having a look at supervised learning in OpenCV
- Measuring model performance with scoring functions
- Scoring classifiers using accuracy, precision, and recall
- Scoring regressors using mean squared error, explained variance, and R squared
- Using classification models to predict class labels
- Understanding the k-NN algorithm
- Implementing k-NN in OpenCV
- Generating the training data
- Training the classifier
- Predicting the label of a new data point
- Using regression models to predict continuous outcomes
- Understanding linear regression
- Using linear regression to predict Boston housing prices
- Loading the dataset
- Training the model
- Testing the model
- Applying Lasso and ridge regression
- Classifying iris species using logistic regression
- Understanding logistic regression
- Loading the training data
- Making it a binary classification problem
- Inspecting the data
- Splitting the data into training and test sets
- Training the classifier
- Testing the classifier
- Summary
- Representing Data and Engineering Features
- Understanding feature engineering
- Preprocessing data
- Standardizing features
- Normalizing features
- Scaling features to a range
- Binarizing features
- Handling the missing data
- Understanding dimensionality reduction
- Implementing Principal Component Analysis (PCA) in OpenCV
- Implementing Independent Component Analysis (ICA)
- Implementing Non-negative Matrix Factorization (NMF)
- Representing categorical variables
- Representing text features
- Representing images
- Using color spaces
- Encoding images in RGB space
- Encoding images in HSV and HLS space
- Detecting corners in images
- Using the Scale-Invariant Feature Transform (SIFT)
- Using Speeded Up Robust Features (SURF)
- Summary
- Using Decision Trees to Make a Medical Diagnosis
- Understanding decision trees
- Building our first decision tree
- Understanding the task by understanding the data
- Preprocessing the data
- Constructing the tree
- Visualizing a trained decision tree
- Investigating the inner workings of a decision tree
- Rating the importance of features
- Understanding the decision rules
- Controlling the complexity of decision trees
- Using decision trees to diagnose breast cancer
- Loading the dataset
- Building the decision tree
- Using decision trees for regression
- Summary
- Detecting Pedestrians with Support Vector Machines
- Understanding linear support vector machines
- Learning optimal decision boundaries
- Implementing our first support vector machine
- Generating the dataset
- Visualizing the dataset
- Preprocessing the dataset
- Building the support vector machine
- Visualizing the decision boundary
- Dealing with nonlinear decision boundaries
- Understanding the kernel trick
- Knowing our kernels
- Implementing nonlinear support vector machines
- Detecting pedestrians in the wild
- Obtaining the dataset
- Taking a glimpse at the histogram of oriented gradients (HOG)
- Generating negatives
- Implementing the support vector machine
- Bootstrapping the model
- Detecting pedestrians in a larger image
- Further improving the model
- Summary
- Implementing a Spam Filter with Bayesian Learning
- Understanding Bayesian inference
- Taking a short detour on probability theory
- Understanding Bayes' theorem
- Understanding the naive Bayes classifier
- Implementing your first Bayesian classifier
- Creating a toy dataset
- Classifying the data with a normal Bayes classifier
- Classifying the data with a naive Bayes classifier
- Visualizing conditional probabilities
- Classifying emails using the naive Bayes classifier
- Loading the dataset
- Building a data matrix using Pandas
- Preprocessing the data
- Training a normal Bayes classifier
- Training on the full dataset
- Using n-grams to improve the result
- Using tf-idf to improve the result
- Summary
- Discovering Hidden Structures with Unsupervised Learning
- Understanding unsupervised learning
- Understanding k-means clustering
- Implementing our first k-means example
- Understanding expectation-maximization
- Implementing our own expectation-maximization solution
- Knowing the limitations of expectation-maximization
- First caveat: No guarantee of finding the global optimum
- Second caveat: We must select the number of clusters beforehand
- Third caveat: Cluster boundaries are linear
- Fourth caveat: k-means is slow for a large number of samples
- Compressing color spaces using k-means
- Visualizing the true-color palette
- Reducing the color palette using k-means
- Classifying handwritten digits using k-means
- Loading the dataset
- Running k-means
- Organizing clusters as a hierarchical tree
- Understanding hierarchical clustering
- Implementing agglomerative hierarchical clustering
- Summary
- Using Deep Learning to Classify Handwritten Digits
- Understanding the McCulloch-Pitts neuron
- Understanding the perceptron
- Implementing your first perceptron
- Generating a toy dataset
- Fitting the perceptron to data
- Evaluating the perceptron classifier
- Applying the perceptron to data that is not linearly separable
- Understanding multilayer perceptrons
- Understanding gradient descent
- Training multilayer perceptrons with backpropagation
- Implementing a multilayer perceptron in OpenCV
- Preprocessing the data
- Creating an MLP classifier in OpenCV
- Customizing the MLP classifier
- Training and testing the MLP classifier
- Getting acquainted with deep learning
- Getting acquainted with Keras
- Classifying handwritten digits
- Loading the MNIST dataset
- Preprocessing the MNIST dataset
- Training an MLP using OpenCV
- Training a deep neural net using Keras
- Preprocessing the MNIST dataset
- Creating a convolutional neural network
- Fitting the model
- Summary
- Combining Different Algorithms into an Ensemble
- Understanding ensemble methods
- Understanding averaging ensembles
- Implementing a bagging classifier
- Implementing a bagging regressor
- Understanding boosting ensembles
- Implementing a boosting classifier
- Implementing a boosting regressor
- Understanding stacking ensembles
- Combining decision trees into a random forest
- Understanding the shortcomings of decision trees
- Implementing our first random forest
- Implementing a random forest with scikit-learn
- Implementing extremely randomized trees
- Using random forests for face recognition
- Loading the dataset
- Preprocessing the dataset
- Training and testing the random forest
- Implementing AdaBoost
- Implementing AdaBoost in OpenCV
- Implementing AdaBoost in scikit-learn
- Combining different models into a voting classifier
- Understanding different voting schemes
- Implementing a voting classifier
- Summary
- Selecting the Right Model with Hyperparameter Tuning
- Evaluating a model
- Evaluating a model the wrong way
- Evaluating a model in the right way
- Selecting the best model
- Understanding cross-validation
- Manually implementing cross-validation in OpenCV
- Using scikit-learn for k-fold cross-validation
- Implementing leave-one-out cross-validation
- Estimating robustness using bootstrapping
- Manually implementing bootstrapping in OpenCV
- Assessing the significance of our results
- Implementing Student's t-test
- Implementing McNemar's test
- Tuning hyperparameters with grid search
- Implementing a simple grid search
- Understanding the value of a validation set
- Combining grid search with cross-validation
- Combining grid search with nested cross-validation
- Scoring models using different evaluation metrics
- Choosing the right classification metric
- Choosing the right regression metric
- Chaining algorithms together to form a pipeline
- Implementing pipelines in scikit-learn
- Using pipelines in grid searches
- Summary
- Wrapping Up
- Approaching a machine learning problem
- Building your own estimator
- Writing your own OpenCV-based classifier in C++
- Writing your own scikit-learn-based classifier in Python
- Where to go from here?
- Summary