- Hands-On Machine Learning with scikit-learn and Scientific Python Toolkits
- Tarek Amr
What this book covers
Chapter 1, Introduction to Machine Learning, will introduce you to the different machine learning paradigms, using examples from industry. You will also learn how to use data to evaluate the models you build.
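As a small taste of that evaluation workflow, here is a minimal sketch; the Iris dataset and the decision tree are illustrative choices, not necessarily the ones used in the chapter:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hold out a quarter of the data so the model is scored on unseen samples
x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.25, random_state=42)

clf = DecisionTreeClassifier(random_state=42).fit(x_train, y_train)
print(accuracy_score(y_test, clf.predict(x_test)))
```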
Chapter 2, Making Decisions with Trees, will explain how decision trees work and teach you how to use them for classification as well as regression. You will also learn how to derive business rules from the trees you build.
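To illustrate the idea of turning a tree into rules, the sketch below uses scikit-learn's export_text helper on a toy dataset; the chapter's own examples may differ:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Print the learned splits as human-readable if/else rules
print(export_text(clf, feature_names=data.feature_names))
```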
Chapter 3, Making Decisions with Linear Equations, will introduce you to linear regression. After understanding its modus operandi, we will learn about related models such as ridge, lasso, and logistic regression. This chapter will also pave the way toward understanding neural networks later on in this book.
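The following sketch hints at how interchangeable these linear models are in scikit-learn; the synthetic data and the alpha values are arbitrary illustrations:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso

x, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

# Ridge and lasso add regularization but share the plain fit/predict interface
for model in (LinearRegression(), Ridge(alpha=1.0), Lasso(alpha=0.1)):
    model.fit(x, y)
    print(type(model).__name__, model.coef_.round(2))
```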
Chapter 4, Preparing Your Data, will cover how to deal with missing data using the impute functionality. We will then use scikit-learn, as well as an external library called categorical-encoding, to prepare the categorical data for the algorithms that we are going to use later on in the book.
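A minimal sketch of those two preparation steps, using scikit-learn's SimpleImputer and OneHotEncoder on a made-up frame (the chapter may use different transformers and data):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({'age': [25, np.nan, 40], 'city': ['Cairo', 'Delhi', 'Cairo']})

# Fill the missing age with the column mean
age = SimpleImputer(strategy='mean').fit_transform(df[['age']])

# Expand the categorical column into one binary column per category
city = OneHotEncoder().fit_transform(df[['city']]).toarray()
print(age, city, sep='\n')
```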
Chapter 5, Image Processing with Nearest Neighbors, will explain the k-Nearest Neighbors algorithms and their hyperparameters. We will also learn how to prepare images for the nearest neighbors classifier.
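The sketch below shows the gist with scikit-learn's bundled 8x8 digit images, which come pre-flattened into feature vectors; treat it as an assumed workflow rather than the chapter's exact code:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Each 8x8 digit image is already flattened into a 64-value feature vector
x, y = load_digits(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(x_train, y_train)
print(knn.score(x_test, y_test))
```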
Chapter 6, Classifying Text Using Naive Bayes, will teach you how to convert textual data into numbers and use machine learning algorithms to classify it. We will also learn about techniques to deal with synonyms and high data dimensionality.
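A toy sketch of that text-to-numbers step, with a hypothetical four-message spam example and a TF-IDF vectorizer (one of several vectorizers the chapter could use):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ['free prize money', 'meeting at noon', 'win money now', 'lunch meeting today']
labels = ['spam', 'ham', 'spam', 'ham']

# Turn each message into a vector of term weights, then classify it
model = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)
print(model.predict(['free money today']))
```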
Chapter 7, Neural Networks – Here Comes the Deep Learning, will dive into how to use neural networks for classification and regression. We will also learn about data scaling since it is a requirement for quicker convergence.
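To show why scaling matters here, the sketch below chains a scaler and a small multi-layer perceptron in one pipeline; the layer size and iteration budget are arbitrary:

```python
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

x, y = load_digits(return_X_y=True)

# Scaling the inputs to [0, 1] first helps the network converge faster
model = make_pipeline(
    MinMaxScaler(),
    MLPClassifier(hidden_layer_sizes=(50,), max_iter=500, random_state=0))
model.fit(x, y)
print(model.score(x, y))
```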
Chapter 8, Ensembles – When One Model Is Not Enough, will cover how to reduce the bias or variance of algorithms by combining them into an ensemble. We will also learn about the different ensemble methods, from bagging to boosting, and when to use each of them.
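The contrast between the two families can be sketched as follows; the breast cancer dataset and default settings are stand-ins for whatever the chapter actually uses:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

x, y = load_breast_cancer(return_X_y=True)

# Bagging averages many trees to reduce variance; boosting adds trees to reduce bias
for model in (BaggingClassifier(random_state=0), GradientBoostingClassifier(random_state=0)):
    print(type(model).__name__, round(cross_val_score(model, x, y, cv=5).mean(), 3))
```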
Chapter 9, The Y is as Important as the X, will teach you how to build multilabel classifiers. We will also learn how to enforce dependencies between your model outputs and make a classifier's probabilities more reliable with calibration.
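For the label-dependency part, one way to sketch the idea is scikit-learn's ClassifierChain; whether the chapter uses chains, calibration wrappers, or both is for the chapter itself to show:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

x, y = make_multilabel_classification(n_samples=200, n_classes=3, random_state=0)

# Each label's classifier also sees the previous labels' predictions,
# so dependencies between the outputs are taken into account
chain = ClassifierChain(LogisticRegression(max_iter=1000), random_state=0).fit(x, y)
print(chain.predict(x[:2]))
```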
Chapter 10, Imbalanced Learning – Not Even 1% Win the Lottery, will introduce the use of an imbalanced learning helper library and explore different ways for over- and under-sampling. We will also learn how to use the sampling methods with the ensemble models.
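Assuming the helper library referred to is imbalanced-learn (imblearn), a minimal over-sampling sketch looks like this:

```python
from collections import Counter

from imblearn.over_sampling import RandomOverSampler
from sklearn.datasets import make_classification

# A synthetic 9:1 imbalanced binary problem
x, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print('before:', Counter(y))

# Randomly duplicate minority-class samples until the classes are balanced
x_res, y_res = RandomOverSampler(random_state=0).fit_resample(x, y)
print('after: ', Counter(y_res))
```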
Chapter 11, Clustering – Making Sense of Unlabeled Data, will cover clustering as an unsupervised learning algorithm for making sense of unlabeled data.
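A minimal illustration of that idea, with synthetic blobs and k-means standing in for whichever algorithms the chapter covers:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# No labels are given; k-means groups nearby points into clusters on its own
x, _ = make_blobs(n_samples=300, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(x)
print(labels[:10])
```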
Chapter 12, Anomaly Detection – Finding Outliers in Data, will explore the different types of anomaly detection algorithms.
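As one representative example, an isolation forest can be sketched like this; the data and contamination rate are made up for illustration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(0)
x = np.concatenate([rng.normal(0, 1, size=(100, 2)),   # the bulk of the data
                    rng.uniform(6, 8, size=(5, 2))])   # a few far-away points

# fit_predict marks suspected outliers with -1 and inliers with 1
print(IsolationForest(contamination=0.05, random_state=0).fit_predict(x)[-5:])
```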
Chapter 13, Recommender Systems – Get to Know Their Taste, will teach you how to build a recommendation system and deploy it in production.
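The core similarity idea behind such a system can be hinted at with an item-based nearest-neighbours sketch on a toy ratings matrix; the deployment side is left to the chapter:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy user-by-item ratings (4 users, 5 items; 0 means not rated)
ratings = np.array([[5, 4, 0, 0, 1],
                    [4, 5, 1, 0, 0],
                    [0, 1, 5, 4, 0],
                    [1, 0, 4, 5, 0]])

# Items are similar when the same users rate them alike (cosine on item columns)
knn = NearestNeighbors(metric='cosine').fit(ratings.T)
_, neighbors = knn.kneighbors(ratings.T[[0]], n_neighbors=2)
print('item 0 and its closest item:', neighbors[0])
```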