
Hypothesis

Here, x denotes the input variables, also called input features, and y denotes the output or target variable that we are trying to predict. A pair (x, y) is called a training example, and the dataset used to learn, a list of m training examples {(x(i), y(i)); i = 1, ..., m}, is called a training set. We will also use X to denote the space of input values and Y to denote the space of output values. Given a training set, the goal is to learn a function h: X → Y such that h(x) is a good predictor for the corresponding value of y. The function h is called a hypothesis.

When the target variable to be predicted is continuous, we call the learning problem a regression problem. When y can take a small number of discrete values, we call it a classification problem.

Let's say we choose to approximate y as a linear function of x.

The hypothesis function is as follows:

h(x) = θ0 + θ1x1 + θ2x2 + ... + θnxn

In this hypothesis function, the θi's are the parameters, also known as weights, which parameterize the space of linear functions mapping from X to Y. To simplify the notation, we also introduce the convention of letting x0 = 1 (this is the intercept term), so that:

h(x) = θ0x0 + θ1x1 + ... + θnxn = θᵀx

On the right-hand side, we view θ and x both as vectors, and n is the number of input variables (not counting x0).
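With the x0 = 1 convention, evaluating the hypothesis is just a dot product between the parameter vector and the feature vector. The sketch below illustrates this with made-up parameter and feature values (the numbers are not from the text):

```python
import numpy as np

def hypothesis(theta, x):
    """Evaluate the linear hypothesis h(x) = theta^T x for one example.

    Assumes x already includes the intercept entry x0 = 1.
    """
    return np.dot(theta, x)

# Illustrative values for n = 2 input variables.
theta = np.array([1.0, 2.0, 3.0])   # [theta0, theta1, theta2]
x = np.array([1.0, 4.0, 5.0])       # [x0 = 1, x1, x2]

print(hypothesis(theta, x))          # 1 + 2*4 + 3*5 = 24.0
```

Stacking the m training examples as rows of a matrix (with a leading column of ones) lets you evaluate the hypothesis on the whole training set at once with a single matrix-vector product.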

Before we proceed any further, note that we are now transitioning from mathematical fundamentals to learning algorithms. Optimizing a cost function to learn θ lays the foundation for understanding machine learning algorithms.

Given a training set, how do we learn the parameters θ? A natural approach is to make h(x) close to y, at least for the training examples we have. To formalize this, we define a function that measures, for each choice of the θs, how close the h(x(i))s are to the corresponding y(i)s. We call this a cost function.
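For linear regression, the usual choice is the squared-error cost, J(θ) = (1/2) Σi (h(x(i)) − y(i))². As a minimal sketch (the training data below is made up for illustration):

```python
import numpy as np

def cost(theta, X, y):
    """Squared-error cost J(theta) = (1/2) * sum_i (h(x_i) - y_i)^2.

    X is an m x (n+1) design matrix whose first column is all ones
    (the x0 = 1 convention), and y is the vector of m target values.
    """
    residuals = X @ theta - y          # h(x_i) - y_i for every example
    return 0.5 * np.dot(residuals, residuals)

# Tiny illustrative training set: m = 3 examples, n = 1 input variable.
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])

# theta = [1, 1] fits these points exactly, so the cost is zero.
print(cost(np.array([1.0, 1.0]), X, y))   # 0.0
```

The cost is zero exactly when the hypothesis passes through every training point; learning θ means searching for the value that minimizes this function.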
