
Solution concepts

In the last 50 years, many excellent algorithms have been developed for numerical optimization, and they work particularly well for quadratic functions. As we have seen in the previous section, we only have quadratic functions and constraints, so these methods (which are implemented in R as well) can be used as a fallback in the worst case, if there is nothing better.

However, a detailed discussion of numerical optimization is outside the scope of this book. Fortunately, in the special case of linear and quadratic functions and constraints, these methods are unnecessary; we can use the Lagrange theorem, which dates back to the 18th century.

Theorem (Lagrange)

If $f(x)$ and $g_i(x)$, $i = 1, \ldots, m$ (where $m \le n$) have continuous partial derivatives, and $x^*$ is a relative extreme point of $f(x)$ subject to the constraints $g_i(x) = 0$, where the gradients $\nabla g_i(x^*)$ are linearly independent.

Then, there exist the coefficients $\lambda_1, \ldots, \lambda_m$ such that

$$\nabla f(x^*) = \sum_{i=1}^{m} \lambda_i \nabla g_i(x^*)$$

In other words, all of the partial derivatives of the function $L(x, \lambda) = f(x) - \sum_{i=1}^{m} \lambda_i g_i(x)$ are 0 (Bertsekas Dimitri P. (1999)).
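To make the theorem concrete, here is a standard textbook example (not one from this chapter): minimize $f(x, y) = x^2 + y^2$ subject to $x + y = 1$. The stationarity conditions of $L$ form a small linear system, sketched below with NumPy:

```python
import numpy as np

# Textbook example: minimize x^2 + y^2 on the line x + y = 1.
# Stationarity of L = f - lambda*g gives 2x - lambda = 0, 2y - lambda = 0,
# plus the constraint x + y = 1: a linear system in (x, y, lambda).
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
rhs = np.array([0.0, 0.0, 1.0])

x, y, lam = np.linalg.solve(A, rhs)
print(x, y, lam)   # 0.5 0.5 1.0
```

The constrained optimum $x = y = 1/2$ with multiplier $\lambda = 1$ falls out of one call to a linear solver, with no iterative optimization involved.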

In our case, the condition is also sufficient. The partial derivative of a quadratic function is linear, so the optimization leads to solving a system of linear equations, which is a high school task (unlike numerical methods).
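The same point can be seen without constraints: for a quadratic $f(x) = \frac{1}{2} x^T A x - b^T x$, setting the gradient $Ax - b$ to zero is just the linear system $Ax = b$. A minimal sketch with invented numbers:

```python
import numpy as np

# Minimizing the quadratic f(x) = 1/2 x'Ax - b'x: the gradient is Ax - b,
# so the first-order condition is the linear system Ax = b.
# A and b are illustrative values, not data from the text.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # positive definite, so the stationary point is a minimum
b = np.array([1.0, 1.0])

x_star = np.linalg.solve(A, b)

# Check: the gradient vanishes at x_star.
print(np.allclose(A @ x_star - b, 0))   # True
```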

Let's see how this can be used to solve the third problem, minimizing the portfolio variance for a prescribed expected return:

$$\min_{w}\; w^T \Sigma w \quad \text{subject to} \quad w^T \mathbf{1} = 1, \quad w^T \mu = \bar{\mu}$$

It can be shown that this problem is equivalent to the following system of linear equations:

$$\begin{pmatrix} 2\Sigma & \mathbf{1} & \mu \\ \mathbf{1}^T & 0 & 0 \\ \mu^T & 0 & 0 \end{pmatrix} \begin{pmatrix} w \\ \lambda_1 \\ \lambda_2 \end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ 1 \\ \bar{\mu} \end{pmatrix}$$

(Two rows and two columns are added to the covariance matrix, so we have conditions to determine the two Lagrange multipliers as well.) We can expect a unique solution for this system.
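A numerical sketch of this bordered system; the covariance matrix `Sigma`, mean vector `mu`, and target return `mu_target` below are invented for illustration:

```python
import numpy as np

# Hypothetical inputs for three assets (not data from the text).
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # covariance matrix
mu = np.array([0.05, 0.08, 0.12])        # expected returns
mu_target = 0.09                         # prescribed portfolio return
ones = np.ones(3)

# Bordered matrix: two rows/columns appended to 2*Sigma, one for each
# constraint (weights sum to one; expected return hits the target).
M = np.block([
    [2 * Sigma, ones[:, None], mu[:, None]],
    [ones[None, :], np.zeros((1, 2))],
    [mu[None, :], np.zeros((1, 2))],
])
rhs = np.concatenate([np.zeros(3), [1.0, mu_target]])

sol = np.linalg.solve(M, rhs)
w = sol[:3]          # optimal weights; sol[3:] are the two Lagrange multipliers

# Both constraints are satisfied by the solution of the linear system.
print(np.isclose(w.sum(), 1.0), np.isclose(w @ mu, mu_target))   # True True
```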

It is worth emphasizing that what we get with the Lagrange theorem is not an optimization problem anymore. Just as in one dimension, minimizing a quadratic function comes down to taking a derivative and solving a linear system of equations, which is trivial from a complexity point of view. Now let's see what to do with the return maximization problem:

$$\max_{w}\; w^T \mu \quad \text{subject to} \quad w^T \Sigma w = \sigma^2$$

It's easy to see that the derivative of the Lagrange function with respect to λ is the constraint itself.

To see this, take the derivatives of $L(w, \lambda) = w^T \mu - \lambda (w^T \Sigma w - \sigma^2)$:

  • $\partial L / \partial w = \mu - 2\lambda \Sigma w = 0$
  • $\partial L / \partial \lambda = -(w^T \Sigma w - \sigma^2) = 0$

Since $w$ is multiplied by $\lambda$ in the first equation and appears quadratically in the second, this leads to a system of non-linear equations, and solving those is more of an art than a science.
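One common numerical way to attack such a system is Newton's method. The sketch below applies Newton steps to the two first-order conditions above; `Sigma`, `mu`, and the target variance `sigma2` are invented inputs, and the starting guess is ad hoc:

```python
import numpy as np

# Hypothetical two-asset inputs (not data from the text).
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])
mu = np.array([0.05, 0.08])
sigma2 = 0.05                     # prescribed portfolio variance

def F(z):
    """First-order conditions: mu - 2*lam*Sigma*w = 0 and w'Sigma w = sigma2."""
    w, lam = z[:2], z[2]
    return np.concatenate([mu - 2 * lam * Sigma @ w,
                           [sigma2 - w @ Sigma @ w]])

def J(z):
    """Jacobian of F with respect to (w, lam)."""
    w, lam = z[:2], z[2]
    top = np.hstack([-2 * lam * Sigma, (-2 * Sigma @ w)[:, None]])
    bottom = np.concatenate([-2 * Sigma @ w, [0.0]])[None, :]
    return np.vstack([top, bottom])

z = np.array([0.5, 0.5, 1.0])     # ad hoc starting guess (w1, w2, lam)
for _ in range(50):
    z = z - np.linalg.solve(J(z), F(z))   # Newton step

print(np.allclose(F(z), 0))       # True: first-order conditions hold
```

Unlike the linear case, there is no guarantee in general: Newton's method needs a reasonable starting point and a non-singular Jacobian along the way, which is exactly why this route is messier than solving a linear system.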
