
The cost function

The cost function is a metric that determines how well or poorly a machine learning algorithm performs with respect to the actual training output and the predicted output. If you remember linear regression, the sum of squares of errors was used as the loss function, that is, $L = \sum_{i} \left(y^{(i)} - \hat{y}^{(i)}\right)^2$. This works well when the resulting loss curve is convex, but in the case of classification the curve is non-convex; as a result, gradient descent doesn't work well and doesn't tend to converge to the global optimum. Therefore, we use cross-entropy loss, which fits classification tasks better, as the cost function.
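To make the sum-of-squares loss concrete, here is a minimal NumPy sketch; the target and prediction values are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical targets and linear-regression predictions
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

# Sum of squares of errors: L = sum_i (y_i - y_hat_i)^2
sse = np.sum((y_true - y_pred) ** 2)
print(sse)  # 1.5
```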

Cross-entropy as the loss function (for a single input), that is, $L = -\sum_{c=1}^{C} y_c \log\left(\hat{y}_c\right)$, where $C$ refers to the number of output classes.
Thus, cost function = average cross-entropy loss (for the whole dataset), that is, $J = -\frac{1}{m}\sum_{i=1}^{m}\sum_{c=1}^{C} y_c^{(i)} \log\left(\hat{y}_c^{(i)}\right)$, where $m$ is the number of training examples.
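A minimal sketch of this average cross-entropy cost for a multi-class case, assuming one-hot targets `y` and predicted class probabilities `y_hat` (both arrays are hypothetical values used only for illustration):

```python
import numpy as np

# Hypothetical one-hot targets and predicted probabilities: m = 3 examples, C = 3 classes
y = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])
y_hat = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.3, 0.5]])

# Per-example cross-entropy loss: L_i = -sum_c y_ic * log(y_hat_ic)
per_example_loss = -np.sum(y * np.log(y_hat), axis=1)

# Cost: average cross-entropy loss over the whole dataset
cost = np.mean(per_example_loss)
print(cost)  # ~0.424
```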

In the case of binary logistic regression, there are only two output classes, 0 and 1, and the two class probabilities always sum to 1. Therefore, for a given input, if one class is $y$, the other will be $1-y$. Similarly, since the predicted probability of class $y$ is $\hat{y}$, the probability of the other class, $1-y$, will be $1-\hat{y}$.

Therefore, the loss function modifies to $L = -\left[y \log(\hat{y}) + (1-y)\log(1-\hat{y})\right]$, where:

  • If $y = 1$, then $L = -\log(\hat{y})$. Therefore, to minimize $L$, $\hat{y}$ should be large, that is, closer to 1.

  • If $y = 0$, then $L = -\log(1-\hat{y})$. Therefore, to minimize $L$, $\hat{y}$ should be small, that is, closer to 0.

The loss function applies to a single example, whereas the cost function applies to the whole training set. Thus, the cost function for this case will be:

$J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)} \log\left(\hat{y}^{(i)}\right) + \left(1-y^{(i)}\right)\log\left(1-\hat{y}^{(i)}\right)\right]$
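A minimal sketch of this binary cross-entropy cost, assuming hypothetical labels `y` and predicted probabilities `y_hat`:

```python
import numpy as np

# Hypothetical binary labels and predicted probabilities for m = 4 examples
y = np.array([1, 0, 1, 0])
y_hat = np.array([0.9, 0.1, 0.8, 0.3])

# Binary cross-entropy cost:
# J = -(1/m) * sum_i [ y_i*log(y_hat_i) + (1 - y_i)*log(1 - y_hat_i) ]
m = len(y)
cost = -(1.0 / m) * np.sum(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
print(cost)  # ~0.198
```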
