
Hypothesis

We use x to denote the input variables, also called input features, and y to denote the output or target variable that we are trying to predict. A pair (x(i), y(i)) is called a training example, and the dataset used to learn, a list of m training examples {(x(i), y(i)); i = 1, …, m}, is called a training set. We will also use X to denote the space of input values and Y to denote the space of output values. Given a training set, our goal is to learn a function h: X → Y such that h(x) is a good predictor for the corresponding value of y. The function h is called a hypothesis.
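To make the notation concrete, here is a minimal sketch of a training set and a hypothesis; the data (house sizes and prices) and the particular linear form and parameter values of h are illustrative assumptions, not taken from the text:

```python
# Training set: m = 4 examples of (x, y) pairs, e.g. house size
# in square feet -> price in $1000s (illustrative data).
training_set = [(2104, 400), (1600, 330), (2400, 369), (1416, 232)]

def h(x, theta0=50.0, theta1=0.14):
    """A candidate hypothesis h: X -> Y (here, a linear function of x).

    The parameter values theta0 and theta1 are arbitrary assumptions.
    """
    return theta0 + theta1 * x

# h(x) serves as a predictor of y for each training example.
for x, y in training_set:
    print(f"x={x}, y={y}, h(x)={h(x):.2f}")
```

A good learning algorithm would adjust the parameters of h so that its predictions track the y values in the training set.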

When the target variable to be predicted is continuous, we call the learning problem a regression problem. When y can take a small number of discrete values, we call it a classification problem.

Let's say we choose to approximate y as a linear function of x.

The hypothesis function is as follows:

h(x) = θ0 + θ1x1 + θ2x2 + … + θnxn

In this last hypothesis function, the θi's are parameters, also known as weights, which parameterize the space of linear functions mapping from X to Y. To simplify the notation, we also introduce the convention of letting x0 = 1 (this is the intercept term), such that:

h(x) = θ0x0 + θ1x1 + … + θnxn = θᵀx

On the right-hand side, we view both θ and x as vectors, and n is the number of input variables (not counting x0).
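The vectorized form h(x) = θᵀx can be sketched in a few lines of NumPy; the parameter and feature values below are illustrative assumptions:

```python
import numpy as np

# Parameters: theta_0 (intercept), theta_1, theta_2 (assumed values).
theta = np.array([50.0, 0.14, 25.0])

# Input with the x_0 = 1 convention, followed by n = 2 features.
x = np.array([1.0, 2104.0, 3.0])

# h(x) = sum_{i=0}^{n} theta_i * x_i = theta^T x
h_x = theta @ x
print(h_x)
```

Writing the hypothesis as an inner product is what lets later algorithms (gradient descent, the normal equations) be expressed compactly in matrix form.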

Before we proceed any further, note that we are now transitioning from mathematical fundamentals to learning algorithms. Optimizing the cost function to learn θ lays the foundation for understanding machine learning algorithms.

Given a training set, how do we learn the parameters θ? One plausible method is to make h(x) close to y for the given training examples. To formalize this, we define a function that measures, for each choice of θ, how close the h(x(i)) values are to the corresponding y(i) values. We call this the cost function.
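A common choice of cost function for linear regression is the squared-error cost, sketched below; the 1/2 factor is a convention that simplifies the gradient later, and the data are illustrative assumptions:

```python
import numpy as np

# Design matrix: each row is one training example, with x_0 = 1
# prepended, followed by the input features (illustrative data).
X = np.array([[1.0, 2104.0],
              [1.0, 1600.0],
              [1.0, 2400.0]])
y = np.array([400.0, 330.0, 369.0])  # targets y^(i)

def J(theta):
    """Squared-error cost: (1/2) * sum_i (h(x^(i)) - y^(i))^2."""
    residuals = X @ theta - y        # h(x^(i)) - y^(i) for all i
    return 0.5 * float(residuals @ residuals)

# Cost with all-zero parameters: large, since every prediction is 0.
print(J(np.array([0.0, 0.0])))
```

Learning then amounts to finding the θ that makes J(θ) small, which is the subject of the algorithms that follow.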
