- Deep Learning By Example
- Ahmed Menshawy
Linear models for classification
In this section, we are going to go through logistic regression, one of the most widely used algorithms for classification.
What's logistic regression? The simple definition of logistic regression is that it's a type of classification algorithm involving a linear discriminant.
We are going to clarify this definition in two points:
Unlike linear regression, logistic regression doesn't try to estimate/predict the value of a numeric variable given a set of input features. Instead, its output is the probability that the given sample/observation belongs to a specific class. In simpler terms, assume we have a binary classification problem, in which the output variable has only two classes, for example, diseased or not diseased. If the probability that a certain sample belongs to the diseased class is P0, then the probability that it belongs to the not diseased class is P1 = 1 - P0. Thus, the output of the logistic regression algorithm is always between 0 and 1.
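To make the probability output concrete, here is a minimal sketch in plain Python of how logistic regression turns a linear score into a class probability via the sigmoid function. The weights and bias below are hypothetical placeholders, not values learned from real data:

```python
import math

def sigmoid(z):
    """Squash a real-valued score into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned parameters for two input features.
weights = [0.8, -0.4]
bias = 0.1

def predict_proba(features):
    """Probability that the sample belongs to the 'diseased' class."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return sigmoid(score)

p_diseased = predict_proba([1.2, 0.5])
p_not_diseased = 1.0 - p_diseased  # the two probabilities always sum to 1
```

Note that whatever the linear score is, the sigmoid guarantees the output stays strictly between 0 and 1, which is what lets us read it as a probability.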
As you probably know, there are many learning algorithms for regression and classification, and each one makes its own assumptions about the data samples. The ability to choose the learning algorithm that fits your data will come gradually with practice and a good understanding of the subject. The central assumption of the logistic regression algorithm is that our input/feature space can be separated into two regions (one for each class) by a linear surface: a line if we have only two features, a plane if we have three, and so on. The position and orientation of this boundary are determined by your data. If your data can be divided into regions corresponding to each class by such a linear surface, it is said to be linearly separable. The following figure illustrates this assumption. In Figure 5, we have three dimensions (input features) and two possible classes: diseased (red) and not diseased (blue). The dividing plane that separates the two regions is called a linear discriminant, because it's linear and it helps the model discriminate between samples belonging to different classes:
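The linear discriminant described above can be sketched in a few lines: classification reduces to asking which side of the surface w·x + b = 0 a sample lies on. The three weights and the bias here are hypothetical, chosen only to illustrate the sign check:

```python
# Hypothetical parameters of a linear discriminant over three features.
weights = [1.0, -2.0, 0.5]
bias = -0.25

def classify(features):
    """Assign a class by the sign of the linear score w.x + b."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return "diseased" if score > 0 else "not diseased"
```

A positive score places the sample on one side of the plane, a negative score on the other; the magnitude of the score (before the sigmoid squashes it) reflects how far the sample is from the boundary.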
If your data samples aren't linearly separable, you can often make them so by transforming the data into a higher-dimensional space, that is, by adding more features.
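As a small illustration of this idea, consider points near the origin (class A) surrounded by points far from it (class B); no straight line in the plane separates a ring from its center, but appending a quadratic feature makes a flat plane suffice. The `lift` function and the sample points below are a toy construction, not from the book:

```python
def lift(x1, x2):
    """Map a 2-D point to 3-D by appending its squared distance from the origin."""
    return (x1, x2, x1 ** 2 + x2 ** 2)

inner = [(0.1, 0.2), (-0.3, 0.1)]   # class A: close to the origin
outer = [(2.0, 0.0), (0.0, -2.5)]   # class B: far from the origin

# After lifting, the plane z = 1 separates the classes: every class A
# point has third coordinate below 1, every class B point above it.
lifted_inner = [lift(x1, x2) for x1, x2 in inner]
lifted_outer = [lift(x1, x2) for x1, x2 in outer]
```

The decision surface is linear in the new 3-D space even though, projected back down to the original two features, it looks like a circle.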