Machine Learning With Go
Latest chapter:
Backpropagation
This book is for Go developers who are familiar with the Go syntax and can develop, build, and run basic Go programs. If you want to explore the field of machine learning and you love Go, then this book is for you! Machine Learning with Go will give readers the practical skills to perform the most common machine learning tasks with Go. Familiarity with some statistics and math topics is necessary.
Latest chapters
- Backpropagation
- Entropy, information gain, and related methods
- Gradient descent
- Algorithms/Techniques Related to Machine Learning
- Summary
- References
Brand: 中圖公司
Listed: 2021-07-08 09:23:52
Publisher: Packt Publishing
The digital rights to this book are provided by 中圖公司, which has licensed 上海閱文信息技術有限公司 to produce and distribute it.
- coverpage
- Title Page
- Copyright
- Machine Learning With Go
- Credits
- About the Author
- About the Reviewers
- www.PacktPub.com
- Why subscribe?
- Customer Feedback
- Preface
- What this book covers
- What you need for this book
- Who this book is for
- Conventions
- Reader feedback
- Customer support
- Downloading the example code
- Downloading the color images of this book
- Errata
- Piracy
- Questions
- Gathering and Organizing Data
- Handling data - Gopher style
- Best practices for gathering and organizing data with Go
- CSV files
- Reading in CSV data from a file
- Handling unexpected fields
- Handling unexpected types
- Manipulating CSV data with data frames
- JSON
- Parsing JSON
- JSON output
- SQL-like databases
- Connecting to an SQL database
- Querying the database
- Modifying the database
- Caching
- Caching data in memory
- Caching data locally on disk
- Data versioning
- Pachyderm jargon
- Deploying/installing Pachyderm
- Creating data repositories for data versioning
- Putting data into data repositories
- Getting data out of versioned data repositories
- References
- Summary
- Matrices, Probability, and Statistics
- Matrices and vectors
- Vectors
- Vector operations
- Matrices
- Matrix operations
- Statistics
- Distributions
- Statistical measures
- Measures of central tendency
- Measures of spread or dispersion
- Visualizing distributions
- Histograms
- Box plots
- Probability
- Random variables
- Probability measures
- Independent and conditional probability
- Hypothesis testing
- Test statistics
- Calculating p-values
- References
- Summary
- Evaluation and Validation
- Evaluation
- Continuous metrics
- Categorical metrics
- Individual evaluation metrics for categorical variables
- Confusion matrices, AUC, and ROC
- Validation
- Training and test sets
- Holdout set
- Cross validation
- References
- Summary
- Regression
- Understanding regression model jargon
- Linear regression
- Overview of linear regression
- Linear regression assumptions and pitfalls
- Linear regression example
- Profiling the data
- Choosing our independent variable
- Creating our training and test sets
- Training our model
- Evaluating the trained model
- Multiple linear regression
- Nonlinear and other types of regression
- References
- Summary
- Classification
- Understanding classification model jargon
- Logistic regression
- Overview of logistic regression
- Logistic regression assumptions and pitfalls
- Logistic regression example
- Cleaning and profiling the data
- Creating our training and test sets
- Training and testing the logistic regression model
- k-nearest neighbors
- Overview of kNN
- kNN assumptions and pitfalls
- kNN example
- Decision trees and random forests
- Overview of decision trees and random forests
- Decision tree and random forest assumptions and pitfalls
- Decision tree example
- Random forest example
- Naive Bayes
- Overview of Naive Bayes and its big assumption
- Naive Bayes example
- References
- Summary
- Clustering
- Understanding clustering model jargon
- Measuring distance or similarity
- Evaluating clustering techniques
- Internal clustering evaluation
- External clustering evaluation
- k-means clustering
- Overview of k-means clustering
- k-means assumptions and pitfalls
- k-means clustering example
- Profiling the data
- Generating clusters with k-means
- Evaluating the generated clusters
- Other clustering techniques
- References
- Summary
- Time Series and Anomaly Detection
- Representing time series data in Go
- Understanding time series jargon
- Statistics related to time series
- Autocorrelation
- Partial autocorrelation
- Auto-regressive models for forecasting
- Auto-regressive model overview
- Auto-regressive model assumptions and pitfalls
- Auto-regressive model example
- Transforming to a stationary series
- Analyzing the ACF and choosing an AR order
- Fitting and evaluating an AR(2) model
- Auto-regressive moving averages and other time series models
- Anomaly detection
- References
- Summary
- Neural Networks and Deep Learning
- Understanding neural net jargon
- Building a simple neural network
- Nodes in the network
- Network architecture
- Why do we expect this architecture to work?
- Training our neural network
- Utilizing the simple neural network
- Training the neural network on real data
- Evaluating the neural network
- Introducing deep learning
- What is a deep learning model?
- Deep learning with Go
- Setting up TensorFlow for use with Go
- Retrieving and calling a pretrained TensorFlow model
- Object detection using TensorFlow from Go
- References
- Summary
- Deploying and Distributing Analyses and Models
- Running models reliably on remote machines
- A brief introduction to Docker and Docker jargon
- Docker-izing a machine learning application
- Docker-izing the model training and export
- Docker-izing model predictions
- Testing the Docker images locally
- Running the Docker images on remote machines
- Building a scalable and reproducible machine learning pipeline
- Setting up a Pachyderm and Kubernetes cluster
- Building a Pachyderm machine learning pipeline
- Creating and filling the input repositories
- Creating and running the processing stages
- Updating pipelines and examining provenance
- Scaling pipeline stages
- References
- Summary
- Algorithms/Techniques Related to Machine Learning
- Gradient descent
- Entropy, information gain, and related methods
- Backpropagation (updated 2021-07-08 10:38:01)