
Hypothesis

x denotes the input variables, also called input features, and y denotes the output or target variable that we are trying to predict. The pair (x, y) is called a training example, and the dataset used for learning, a list of m training examples {(x, y)}, is called a training set. We will also use X to denote the space of input values and Y to denote the space of output values. Given a training set, our goal is to learn a function h: X → Y such that h(x) is a good predictor for the corresponding value of y. The function h is called a hypothesis.

When the target variable to be predicted is continuous, we call the learning problem a regression problem. When y can take on only a small number of discrete values, we call it a classification problem.

Let's say we choose to approximate y as a linear function of x.

The hypothesis function is as follows:

h(x) = θ0 + θ1x1 + θ2x2 + … + θnxn

In this hypothesis function, the θi's are parameters, also known as weights, which parameterize the space of linear functions mapping from X to Y. To simplify the notation, we also introduce the convention of letting x0 = 1 (this is the intercept term), such that:

h(x) = Σi=0..n θixi = θᵀx

On the RHS, we view θ and x both as vectors, and n is the number of input variables.
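The hypothesis with the x0 = 1 convention can be sketched as a simple dot product. This is a minimal illustration; the parameter and feature values below are made up for the example, not taken from the text:

```python
import numpy as np

def hypothesis(theta, x):
    """Predict y for one example x, where x excludes the intercept entry."""
    x = np.concatenate(([1.0], x))  # prepend x0 = 1 (the intercept term)
    return theta @ x                # theta^T x

# Illustrative values: theta0 is the intercept, n = 2 input variables
theta = np.array([2.0, 0.5, -1.0])  # [theta0, theta1, theta2]
x = np.array([4.0, 3.0])

print(hypothesis(theta, x))  # 2 + 0.5*4 - 1*3 = 1.0
```

Prepending the 1 lets the intercept θ0 be treated uniformly with the other weights, which is why the summation can start at i = 0.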

Before we proceed any further, note that we are now transitioning from mathematical fundamentals to learning algorithms. Optimizing the cost function to learn θ lays the foundation for understanding machine learning algorithms.

Given a training set, how do we learn the parameters θ? One natural approach is to make h(x) close to y, at least for the training examples we have. We shall define a function that measures, for each value of the θs, how close the h(x(i)) values are to the corresponding y(i) values. We call this a cost function.
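As a sketch of this idea, one common choice of cost function for linear regression is the squared-error cost, J(θ) = (1/2m) Σ (h(x(i)) − y(i))². The specific form and the toy data below are assumptions for illustration, not taken from the text:

```python
import numpy as np

def cost(theta, X, y):
    """Squared-error cost J(theta) over m training examples.

    X has shape (m, n); a column of ones is prepended for x0 = 1.
    """
    m = X.shape[0]
    Xb = np.hstack([np.ones((m, 1)), X])  # intercept term x0 = 1
    residuals = Xb @ theta - y            # h(x(i)) - y(i) for each example
    return (residuals @ residuals) / (2 * m)

# Toy training set: y = 1 + 2x, fit exactly by theta = [1, 2]
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([1.0, 3.0, 5.0])

print(cost(np.array([1.0, 2.0]), X, y))  # 0.0 for the perfect fit
print(cost(np.array([0.0, 0.0]), X, y))  # positive for a poor fit
```

A perfect fit drives the cost to zero, while poor parameter choices give a large cost, which is exactly the property a learning algorithm can exploit when searching for θ.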
