
MGDA Platform

Multiple-Gradient Descent Algorithm for Multi-Objective Differentiable Optimization.

Solving a constrained problem

Abstract

In multiobjective differentiable optimization under constraints, we choose to formulate every type of constraint as an equality constraint, usually nonlinear, possibly through the introduction of a slack variable. A predictor-corrector method is then proposed. At the predictor step, the descent direction is determined by the Multiple-Gradient Descent Algorithm (MGDA) applied to the cost-function gradients projected onto the subspace locally tangent to all constraint surfaces. The step size is controlled to limit the violation of the nonlinear constraints and to ensure that all cost functions diminish. The corrector restores the nonlinear constraints by a quasi-Newton-type method applied to a function agglomerating all the constraints, in which the Hessian is approximated by the constraint-gradient terms alone. This corrector constitutes a quasi-Riemannian approach that proves very efficient. The predictor-corrector sequence thus constitutes one iteration of a reduced-gradient descent method for constrained multiobjective optimization. Three classical test cases are solved for illustration by means of the Inria MGDA software platform.
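To make the structure of the predictor-corrector iteration concrete, here is a minimal numerical sketch in Python. It is not the MGDA Platform's API: the function names, the fixed step size, the Frank-Wolfe solve for the minimum-norm element, and the toy problem (two quadratic costs on the unit circle) are all illustrative assumptions. The predictor projects the cost gradients onto the tangent subspace of the constraint and applies MGDA; the corrector is a Gauss-Newton-type restoration in which the Hessian is approximated by the constraint-gradient terms alone, as described above.

```python
import numpy as np

def min_norm_element(grads, iters=200):
    """Minimum-norm element of the convex hull of the gradients, computed by
    Frank-Wolfe iterations; its opposite is the MGDA common-descent direction."""
    G = np.column_stack(grads)
    m = G.shape[1]
    alpha = np.full(m, 1.0 / m)            # start at the simplex barycenter
    for _ in range(iters):
        w = G @ alpha
        d = np.eye(m)[np.argmin(G.T @ w)] - alpha   # move toward best vertex
        gd = G @ d
        if gd @ gd <= 1e-16:
            break
        alpha += np.clip(-(w @ gd) / (gd @ gd), 0.0, 1.0) * d  # exact line search
    return G @ alpha

def tangent_projector(J):
    """Orthogonal projector onto the null space of the constraint Jacobian J,
    i.e. the subspace locally tangent to the constraint surfaces."""
    return np.eye(J.shape[1]) - J.T @ np.linalg.solve(J @ J.T, J)

def restore(x, c, Jc, tol=1e-10, iters=20):
    """Corrector: quasi-Newton restoration of c(x) = 0 where the Hessian is
    approximated by the constraint-gradient terms alone (Gauss-Newton steps)."""
    for _ in range(iters):
        cx = np.atleast_1d(c(x))
        if np.linalg.norm(cx) < tol:
            break
        J = np.atleast_2d(Jc(x))
        x = x - J.T @ np.linalg.solve(J @ J.T, cx)
    return x

# Illustrative toy problem: minimize |x - a|^2 and |x - b|^2 on the unit circle.
f_grads = [lambda x: 2 * (x - np.array([1.0, 0.0])),   # gradient of |x - a|^2
           lambda x: 2 * (x - np.array([0.0, 1.0]))]   # gradient of |x - b|^2
c  = lambda x: x @ x - 1.0           # single nonlinear equality constraint
Jc = lambda x: 2 * x[None, :]        # its 1 x 2 Jacobian

x = np.array([0.9, -0.3]) / np.hypot(0.9, -0.3)        # feasible starting point
for _ in range(50):
    P = tangent_projector(Jc(x))
    omega = min_norm_element([P @ g(x) for g in f_grads])  # predictor direction
    if np.linalg.norm(omega) < 1e-8:   # Pareto stationarity on the constraint
        break
    x = restore(x - 0.1 * omega, c, Jc)  # small predictor step, then corrector
print(x)   # approaches a Pareto-stationary point on the circle
```

In this sketch the step size is a fixed small constant for simplicity; the method described in the abstract instead controls it adaptively so that the constraint violation stays limited and all cost functions decrease.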

Bibliography

Jean-Antoine Désidéri. Quasi-Riemannian Multiple Gradient Descent Algorithm for constrained multiobjective differentiable optimization. Research Report RR-9159, Inria Sophia-Antipolis, Project-Team Acumes, 2018, pp. 1-41. ⟨hal-01740075⟩