[Exam] 109-2 苏柏青 Convex Optimization Final Exam

Author: unmolk (UJ)   2021-06-25 02:04:13
Course name: Convex Optimization
Course type: Elective (Graduate Institute of Electrical Engineering)
Instructor: 苏柏青
College: College of Electrical Engineering and Computer Science
Department: Graduate Institute of Electrical Engineering
Exam date (y.m.d): 110.06.24
Exam duration (minutes): 100
Exam questions:
Note: some of the mathematical symbols and formulas below are written in
LaTeX syntax.
Exam policy: Open book. You may bring any books, handouts, and any kind of
paper-based notes with you. Use of electronic devices (including cellphones,
laptops, tablets, etc.), however, is strictly prohibited, with the only
exception being:
1. Reading materials on the course website NTU Cool.
Note: the total score of this exam is 150 points.
1. (60%) For each of the following optimization problems, find (i) the
Lagrangian L(x, \lambda, \nu), (ii) the dual function g(\lambda, \nu) along
with its domain dom g, (iii) the dual problem, and (iv) the KKT conditions
for the optimal primal and dual variables (x^*, \lambda^*, \nu^*). (A worked
sketch for part (a) appears after the three problem statements below.)
(a) (5%+5%+5%+5%)
minimize c^Tx
subject to Ax \preceq b
where A \in R^{m\times n}, b \in R^m, and c \in R^n.
(b) (5%+5%+5%+5%)
minimize x^TP_0x
subject to x^TP_1x \leq 1
Ax = b
where P_0, P_1 \in S^n_{++}.
(c) (5%+5%+5%+5%)
minimize 2x_1 - 3x_2 + 2x_3
subject to x_1\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
+ x_2\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}
+ x_3\begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}
\preceq_{S^2_+} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}.
Hint: the Lagrange multiplier for this problem is in the form of a symmetric
matrix. You can use the notation Z, as in, e.g., L(x, Z) and g(Z), etc.
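For reference, a standard sketch of part (a), following the usual Lagrangian
recipe (the remaining parts follow the same pattern):
L(x, \lambda) = c^Tx + \lambda^T(Ax - b) = (c + A^T\lambda)^Tx - b^T\lambda,
g(\lambda) = \inf_x L(x, \lambda) = -b^T\lambda if A^T\lambda + c = 0, and
-\infty otherwise, so dom g = \{\lambda | A^T\lambda + c = 0\}. The dual
problem is then
maximize -b^T\lambda
subject to A^T\lambda + c = 0
\lambda \succeq 0,
and the KKT conditions are Ax^* \preceq b, \lambda^* \succeq 0,
A^T\lambda^* + c = 0, and \lambda_i^*(b_i - a_i^Tx^*) = 0 for i = 1,...,m.
For (b), the same recipe starts from
L(x, \lambda, \nu) = x^T(P_0 + \lambda P_1)x + \nu^T(Ax - b) - \lambda,
which can be minimized over x in closed form since P_0 + \lambda P_1 \succ 0
whenever \lambda \geq 0. For (c), the scalar product is replaced by the trace
inner product: L(x, Z) = 2x_1 - 3x_2 + 2x_3 + tr(Z(x_1F_1 + x_2F_2 + x_3F_3
- G)) with Z \succeq 0, where F_1, F_2, F_3 denote the three coefficient
matrices and G the right-hand-side matrix.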
2. (16%) For the following pairs of proper cones K \subseteq R^q and function
\psi: R^q -> R, determine whether \psi is a generalized logarithm for K. If so,
find the degree of the generalized logarithm. Justify your answers.
(a) (8%) K = R^3_+, \psi(x) = \log x_1 + 2\log x_2 + \log x_3.
(b) (8%) K = R^3_+, \psi(x) = \log(x_1 + x_2 + x_3).
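Recall that a generalized logarithm \psi for K with degree \theta satisfies,
among other requirements on its domain, concavity, and monotonicity,
\psi(sy) = \psi(y) + \theta\log s for y \succ_K 0 and s > 0. A minimal Python
sketch that probes this scaling identity numerically for the two candidates
(an illustration only; the identity alone does not settle whether \psi is a
generalized logarithm):

import numpy as np

def psi_a(x):
    # candidate (a): log x1 + 2 log x2 + log x3
    return np.log(x[0]) + 2 * np.log(x[1]) + np.log(x[2])

def psi_b(x):
    # candidate (b): log(x1 + x2 + x3)
    return np.log(x.sum())

rng = np.random.default_rng(0)
x = rng.uniform(0.5, 2.0, size=3)  # a point in int R^3_+
for s in (2.0, 5.0):
    # psi(s x) - psi(x) should equal theta * log(s) for a fixed theta
    print(psi_a(s * x) - psi_a(x), 4 * np.log(s))  # agree with theta = 4
    print(psi_b(s * x) - psi_b(x), 1 * np.log(s))  # agree with theta = 1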
3. (14%) Consider the inequality constrained problem
minimize c^Tx
subject to Ax \preceq b
and its approximated problem
minimize tc^Tx + \phi(x)
where \phi(x) = -\sum_{i=1}^m \log(b_i - a_i^Tx), with a_i^T being the ith
row of A. We assume A \in R^{m\times n}, b \in R^m, and c \in R^n. Let
f_0 : R^n -> R be defined as f_0(x) = tc^Tx + \phi(x), where t > 0.
(a) (14%) Derive \nabla f_0(x) and \nabla^2 f_0(x). Write the answers in terms
of A, b, c, x, and t.
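For intuition, standard log-barrier calculus gives \nabla f_0(x) = tc + A^Td
and \nabla^2 f_0(x) = A^T diag(d)^2 A, where d_i = 1/(b_i - a_i^Tx). A small
Python sketch (with made-up A, b, c, not from the exam) that checks the
gradient formula against finite differences:

import numpy as np

rng = np.random.default_rng(1)
m, n, t = 8, 3, 2.0
A = rng.standard_normal((m, n))
x = 0.1 * rng.standard_normal(n)
b = A @ x + rng.uniform(0.5, 1.5, size=m)  # ensures Ax < b strictly
c = rng.standard_normal(n)

def f0(x):
    # f_0(x) = t c^T x - sum_i log(b_i - a_i^T x)
    return t * (c @ x) - np.log(b - A @ x).sum()

d = 1.0 / (b - A @ x)
grad = t * c + A.T @ d            # claimed \nabla f_0(x)
hess = A.T @ np.diag(d**2) @ A    # claimed \nabla^2 f_0(x)

eps = 1e-6
fd = np.array([(f0(x + eps * e) - f0(x - eps * e)) / (2 * eps)
               for e in np.eye(n)])
print(np.allclose(grad, fd, atol=1e-5))  # expect True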
4. (60%) True or false. There are 20 questions in this section. For each
question, you can choose to write down your answer (T or F) or leave it
blank. Suppose the numbers of correct and wrong answers are n_c and n_w,
respectively. Then the score you get from this section is
\max\{3\cdot n_c - 3\cdot n_w, 0\}. Note that n_c + n_w + n_b = 20, where
n_b is the number of questions left blank. You don't have to justify your
answers.
1. A cone K \subseteq R^n is called a proper cone if it is convex, open, solid,
and pointed.
2. A cone K is said to be solid if int K \neq \emptyset.
3. A cone K is said to be pointed if for any x, x \in K \setminus \{0\}
=> -x \in K.
4. Given a generalized inequality on R^n, the minimal element of a subset S of
R^n either is unique or does not exist.
5. The dual function of an optimization problem is concave if and only if the
primal problem is a convex optimization problem.
6. The dual function g(\lambda, \nu) gives a lower bound on the optimal value
of the primal problem whenever g(\lambda, \nu) > -\infty and \lambda \geq 0.
7. Weak duality (d^* \leq p^*, where p^* and d^* are the optimal values of
the primal and dual problems, respectively) holds only when the primal
problem satisfies Slater's condition.
8. Strong duality (d^* = p^*) holds only when the primal problem is convex
and satisfies Slater's condition.
9. Let E be the ellipsoid
E = {x | (x-x_0)^TA^{-1}(x-x_0) \leq 1},
where A \in S^n_{++}. Then, the condition number of E is
cond(E) = \lambda_{max}(A) / \lambda_{min}(A).
10. The steepest descent method coincides with the gradient descent method
when the norm associated with the steepest descent method is chosen as the
quadratic norm determined by the Hessian of the objective function at the
optimal point (i.e., \nabla^2 f(x^*)).
11. An important feature of the Newton step is that it is independent of
linear (or affine) changes of coordinates. (A numerical sketch of this
property appears after this section.)
12. Let ||\cdot|| be any norm on R^n. The normalized steepest descent direction
with respect to the norm ||\cdot|| is defined as
\Delta x_{nsd} = \arg\min_v \{\nabla f(x)^Tv : ||v|| = 1\}.
13. Let P \in S^n_{++} and consider the quadratic norm ||\cdot||_P. The
steepest descent step with respect to ||\cdot||_P is given by
\Delta x_{sd} = -P^{-1}\nabla f(x), where f is the objective function of an
unconstrained minimization problem of interest.
14. If \psi : R^n -> R is a generalized logarithm for a proper cone K \subseteq
R^n, then y^T\nabla\psi(y) = \theta for some \theta.
15. Suppose K \subseteq R^n is a proper cone and K^* is its dual cone. If
\psi : R^n -> R is a generalized logarithm for K^*, then y \prec _{K^*} 0.
16. Consider the positive semidefinite cone S^p_+ and a function
f : S^p -> R, dom f = S^p_{++}, f(X) = \log\det X^2. Then f is a generalized
logarithm with degree 2p.
17. The KKT conditions for the problem with generalized inequalities
minimize f_0(x)
subject to f_i(x) \preceq_{K_i} 0, i = 1,...,m
where f_0 : R^n -> R is convex, f_i : R^n -> R^{k_i} are K_i-convex, and
K_i \subseteq R^{k_i} are proper cones, are f_i(x^*) \preceq_{K_i} 0,
\lambda_i^* \succeq_{K_i^*} 0, (\lambda_i^*)^Tf_i(x^*) = 0,
\forall i = 1,...,m, and \nabla f_0(x^*) + \sum_{i=1}^m Df_i(x^*)^T
\lambda_i^* = 0.
18. Let K \subseteq R^q be a proper cone. If f : R^n -> R^q is K-convex, then f
must also be a convex function.
19. Consider the problem with generalized inequalities
minimize f_0(x)
subject to f_i(x) \preceq_{K_i} 0, i = 1,...,m
where f_0 : R^n -> R is convex, f_i : R^n -> R^{k_i} are K_i-convex, and
K_i \subseteq R^{k_i} are proper cones, and let \psi_i, i = 1,...,m, be
generalized logarithms for K_1,...,K_m, respectively. Then, the central
points of the problem are characterized by
t\nabla f_0(x) + \sum_{i=1}^m Df_i(x)^T\nabla\psi_i(-f_i(x)) = 0.
20. The phase I problem defined as
minimize s
subject to f_i(x) \leq s, i = 1,...,m
Ax = b
is always strictly feasible, given that b is in the column space of A and
f_i is convex for all i = 1,...,m.
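As a numerical aside on statement 11: if \bar f(y) = f(Ty) for a nonsingular
T, the chain rule gives \nabla\bar f(y) = T^T\nabla f(Ty) and
\nabla^2\bar f(y) = T^T\nabla^2 f(Ty)T, so the Newton steps should satisfy
\Delta y_{nt} = T^{-1}\Delta x_{nt} at y = T^{-1}x. A minimal Python sketch
on a hypothetical strictly convex quadratic test function:

import numpy as np

rng = np.random.default_rng(2)
n = 4
Q = rng.standard_normal((n, n))
Q = Q @ Q.T + n * np.eye(n)   # positive definite, so f is strictly convex
p = rng.standard_normal(n)

def grad(x):                  # f(x) = (1/2) x^T Q x + p^T x
    return Q @ x + p

def hess(x):
    return Q

x = rng.standard_normal(n)
dx = -np.linalg.solve(hess(x), grad(x))          # Newton step for f at x

T = rng.standard_normal((n, n)) + n * np.eye(n)  # generically nonsingular T
y = np.linalg.solve(T, x)                        # y = T^{-1} x
# Newton step for f_bar(y) = f(Ty), using the chain-rule gradient/Hessian
dy = -np.linalg.solve(T.T @ hess(T @ y) @ T, T.T @ grad(T @ y))
print(np.allclose(dy, np.linalg.solve(T, dx)))   # expect True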
