
Solution concepts

Over the last 50 years, many excellent algorithms have been developed for numerical optimization, and they work well, especially in the case of quadratic functions. As we have seen in the previous section, we only have quadratic functions and constraints, so these methods (which are also implemented in R) can serve as a worst-case fallback (if there is nothing better).

However, a detailed discussion of numerical optimization is beyond the scope of this book. Fortunately, in the special case of linear and quadratic functions and constraints, these methods are unnecessary; we can use the Lagrange theorem, which dates back to the 18th century.

Theorem (Lagrange)

If $f\colon \mathbb{R}^n \to \mathbb{R}$ and $g_i\colon \mathbb{R}^n \to \mathbb{R}$, $i = 1, \ldots, m$ (where $m < n$) have continuous partial derivatives, and $x_0$ is a relative extreme point of $f(x)$ subject to the constraints $g_i(x) = 0$, $i = 1, \ldots, m$,

then there exist coefficients $\lambda_1, \ldots, \lambda_m$ such that

$$\nabla f(x_0) = \sum_{i=1}^{m} \lambda_i \nabla g_i(x_0)$$

In other words, all of the partial derivatives of the Lagrange function $L(x, \lambda) = f(x) - \sum_{i=1}^{m} \lambda_i g_i(x)$ are 0 at $x_0$ (Bertsekas, Dimitri P. (1999)).

In our case, the condition is also sufficient. The partial derivatives of a quadratic function are linear, so the optimization reduces to solving a linear system of equations, which is a high-school task (unlike numerical methods).
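As a toy illustration of this reduction (not an example from the book), consider minimizing $f(x, y) = x^2 + y^2$ subject to $x + y = 1$. Setting the partial derivatives of the Lagrange function to zero yields a 3×3 linear system. A minimal sketch in Python/NumPy (the book's own examples are in R):

```python
import numpy as np

# Minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0.
# The Lagrange function is L(x, y, lam) = x^2 + y^2 - lam * (x + y - 1).
# Setting its partial derivatives to zero gives the linear system:
#   2x - lam = 0,   2y - lam = 0,   x + y = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])

x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # solution: x = y = 0.5, lam = 1
```

No iterative optimizer is needed: one call to a linear solver recovers the constrained minimum exactly.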

Let's see how this can be used to solve the third problem:

$$\min_{w} \; w^{T}\Sigma w \quad \text{subject to} \quad \mathbf{1}^{T}w = 1, \quad \mu^{T}w = \mu^{*}$$

It can be shown that this problem is equivalent to the following system of linear equations:

$$\begin{pmatrix} 2\Sigma & \mathbf{1} & \mu \\ \mathbf{1}^{T} & 0 & 0 \\ \mu^{T} & 0 & 0 \end{pmatrix} \begin{pmatrix} w \\ \lambda_1 \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ 1 \\ \mu^{*} \end{pmatrix}$$

(Two rows and two columns are added to the covariance matrix, so we have enough equations to determine the two Lagrange multipliers as well.) We can expect a unique solution for this system.
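A minimal sketch of building and solving this bordered system, with made-up toy values for the covariance matrix, the expected returns, and the target return (the variable names `Sigma`, `mu`, and `r_target` are assumptions, not the book's code, and the sketch is in Python/NumPy rather than R):

```python
import numpy as np

# Toy inputs (assumed for illustration only)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # covariance matrix
mu = np.array([0.05, 0.08, 0.12])        # expected returns
r_target = 0.09                          # target portfolio return (mu*)

n = len(mu)
ones = np.ones(n)

# Bordered system: 2*Sigma with two extra rows/columns for the
# budget constraint (1'w = 1) and the return constraint (mu'w = mu*).
A = np.zeros((n + 2, n + 2))
A[:n, :n] = 2 * Sigma
A[:n, n] = ones
A[:n, n + 1] = mu
A[n, :n] = ones
A[n + 1, :n] = mu

b = np.zeros(n + 2)
b[n] = 1.0
b[n + 1] = r_target

sol = np.linalg.solve(A, b)
w = sol[:n]          # optimal weights; sol[n:] are the two multipliers
print(w, w.sum(), mu @ w)
```

The solution satisfies both constraints by construction: the weights sum to one and the portfolio's expected return equals the target.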

It is worth emphasizing that what we get from the Lagrange theorem is no longer an optimization problem. Just as in one dimension, minimizing a quadratic function comes down to taking a derivative and solving a linear system of equations, which is trivial from the point of view of complexity. Now let's see what to do with the return maximization problem:

$$\max_{w} \; \mu^{T}w \quad \text{subject to} \quad w^{T}\Sigma w = \sigma^2, \quad \mathbf{1}^{T}w = 1$$

It's easy to see that the derivative of the Lagrange function with respect to λ is the constraint itself.

To see this, write the Lagrange function as $L(w, \lambda_1, \lambda_2) = \mu^{T}w - \lambda_1\left(w^{T}\Sigma w - \sigma^2\right) - \lambda_2\left(\mathbf{1}^{T}w - 1\right)$ and take its partial derivatives with respect to the multipliers:

  • $\partial L / \partial \lambda_1 = -\left(w^{T}\Sigma w - \sigma^2\right)$
  • $\partial L / \partial \lambda_2 = -\left(\mathbf{1}^{T}w - 1\right)$

Setting these to zero recovers the constraints, while the derivative with respect to $w$ gives $\mu - 2\lambda_1\Sigma w - \lambda_2\mathbf{1} = 0$. Because the variance constraint is quadratic in $w$, this leads to a system of non-linear equations, and solving those is more of an art than a science.
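In practice, a problem like this is usually handed to a general-purpose numerical optimizer rather than solved by hand. A hedged sketch with toy data, using Python's `scipy.optimize.minimize` (an assumption for illustration; this is not code from the book, whose examples are in R):

```python
import numpy as np
from scipy.optimize import minimize

# Toy inputs (assumed for illustration only)
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # covariance matrix
mu = np.array([0.05, 0.08, 0.12])        # expected returns
sigma2 = 0.05                            # fixed risk budget (variance)

# Maximize mu'w subject to w' Sigma w = sigma2 and 1'w = 1.
# The first-order conditions are non-linear, so we use an
# iterative constrained solver instead of a linear system.
cons = [
    {"type": "eq", "fun": lambda w: w @ Sigma @ w - sigma2},
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},
]
res = minimize(lambda w: -(mu @ w),       # maximize return = minimize its negative
               x0=np.full(3, 1.0 / 3),    # start from equal weights
               constraints=cons)
print(res.x, res.x.sum(), res.x @ Sigma @ res.x)
```

Unlike the minimum-variance case, the answer here comes from an iterative search, and convergence depends on the starting point and the solver's tolerances.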
