- Hands-On Artificial Intelligence for IoT
- Amita Kapoor
Logistic regression for classification
In the previous section, we learned how to predict continuous values. There's another common task in ML: classification. Separating dogs from cats and spam from not spam, or even identifying the different objects in a room or scene, are all classification tasks.
Logistic regression is a classic classification technique. It provides the probability of an event taking place, given an input value. The events are represented as categorical dependent variables, and the probability of a particular dependent variable being 1 is given by the logit function:

Ypred = sigmoid(W^T X + b) = 1 / (1 + e^-(W^T X + b))
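As a minimal sketch of this formula, the snippet below computes the class-1 probability for a single input. The weights `W`, bias `b`, and input `X` are hypothetical values chosen purely for illustration; they are not from the book.

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: maps any real z into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights, bias, and input for one example (illustration only)
W = np.array([0.8, -0.4])
b = 0.1
X = np.array([1.5, 2.0])

# Probability that the dependent variable equals 1:
# sigmoid(W^T X + b) = sigmoid(0.8*1.5 - 0.4*2.0 + 0.1) = sigmoid(0.5)
p = sigmoid(W @ X + b)
print(round(p, 4))  # a value strictly between 0 and 1
```

Because the sigmoid squashes any real-valued score `W^T X + b` into (0, 1), the output can be read directly as a probability and thresholded (typically at 0.5) to make a class decision.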
Before going into the details of how we can use logistic regression for classification, let's examine the logit function (also called the sigmoid function because of its S-shaped curve). The following diagram shows how the sigmoid function (blue) and its derivative (orange) vary with the input X:

A few important things to note from this diagram are the following:
- The value of the sigmoid (and hence Ypred) lies in the interval (0, 1)
- The derivative of the sigmoid is highest when W^T X + b = 0.0, and its highest value is only 0.25 (at that same point, the sigmoid itself has a value of 0.5)
- The slope with which the sigmoid varies depends on the weights, and the position of the derivative's peak depends on the bias
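The three points above can be checked numerically. The sketch below samples the sigmoid and its derivative on a grid (the grid range and resolution are arbitrary choices for illustration) and uses the standard identity that the derivative of sigmoid(z) is sigmoid(z) * (1 - sigmoid(z)):

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function."""
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    # The derivative has the convenient closed form s * (1 - s)
    s = sigmoid(z)
    return s * (1.0 - s)

# Sample on an arbitrary grid to check the bullet points above
z = np.linspace(-10.0, 10.0, 2001)
s = sigmoid(z)
ds = sigmoid_derivative(z)

print(f"sigmoid range: ({s.min():.6f}, {s.max():.6f})")  # stays inside (0, 1)
print(f"derivative peak at z = {z[np.argmax(ds)]:.2f}")  # peak at z = 0
print(f"peak derivative value = {ds.max():.4f}")         # 0.25
print(f"sigmoid at the peak = {sigmoid(0.0):.2f}")       # 0.50
```

Here `z` plays the role of W^T X + b, so scaling the weights stretches or compresses the curve along X, while changing the bias shifts the location of the derivative's peak, as noted above.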
I would suggest you play around with the Sigmoid_function.ipynb notebook, available in this book's GitHub repository, to get a feel for how the sigmoid function changes as the weight and bias change.