### Abstract

In multiobjective differentiable optimization under constraints, we choose to formulate every type of constraint as an **equality constraint**, usually nonlinear, if necessary by introducing a slack variable.
A **predictor-corrector method** is then proposed. At the predictor step, the descent direction is determined by the Multiple-Gradient Descent Algorithm (MGDA) applied to the cost-function gradients projected onto the subspace locally tangent to all constraint surfaces.
The step size is controlled to limit the violation of the nonlinear constraints and to ensure that all cost functions decrease. The corrector restores the nonlinear constraints by a quasi-Newton-type method applied to a function agglomerating all the constraints,
in which the Hessian is approximated using only the terms involving the constraint gradients. This corrector constitutes a **quasi-Riemannian approach** that proves very efficient.
The predictor-corrector sequence thus constitutes one iteration of a reduced-gradient descent method for **constrained multiobjective optimization**. Three classical test cases are solved for illustration by means of the Inria MGDA software platform.
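
The predictor-corrector iteration described above can be sketched as follows. This is an illustrative reconstruction, not the Inria MGDA Platform's code: the minimum-norm MGDA direction is computed here by a simple Frank-Wolfe solve over the simplex, the corrector is the Gauss-Newton-type restoration retaining only the constraint-gradient terms of the Hessian, and the demo problem (a unit-circle constraint with two quadratic costs) is hypothetical.

```python
import numpy as np

def tangent_projection(J):
    """Projector onto the null space of the constraint Jacobian J (m x n):
    P = I - J^T (J J^T)^{-1} J, i.e. onto the locally tangent subspace."""
    n = J.shape[1]
    return np.eye(n) - J.T @ np.linalg.solve(J @ J.T, J)

def mgda_direction(G, iters=200):
    """Minimum-norm element of the convex hull of the gradient rows of G
    (p x n), found by Frank-Wolfe on the simplex; returns the common
    descent direction -omega and the convex weights."""
    p = G.shape[0]
    a = np.full(p, 1.0 / p)
    M = G @ G.T
    for _ in range(iters):
        grad = M @ a                      # gradient of 0.5 * a^T M a
        d = -a
        d[np.argmin(grad)] += 1.0         # move toward best simplex vertex
        gd = grad @ d
        if gd >= -1e-12:                  # stationary: omega reached
            break
        dMd = d @ M @ d
        t = 1.0 if dMd <= 0.0 else min(1.0, -gd / dMd)  # exact line search
        a = a + t * d
    return -(a @ G), a

def restore(x, c, jac, tol=1e-10, max_iter=50):
    """Corrector: quasi-Newton restoration of the equality constraints
    c(x) = 0, approximating the Hessian by the constraint-gradient terms
    only: x <- x - J^T (J J^T)^{-1} c(x)."""
    for _ in range(max_iter):
        cx = c(x)
        if np.linalg.norm(cx) < tol:
            break
        J = jac(x)
        x = x - J.T @ np.linalg.solve(J @ J.T, cx)
    return x

# One predictor-corrector iteration on a hypothetical test problem:
# unit-circle equality constraint, two quadratic cost functions.
c = lambda x: np.array([x @ x - 1.0])            # c(x) = 0 on the circle
jac = lambda x: 2.0 * x[None, :]
x = np.array([0.6, -0.8])
G = np.vstack([2.0 * (x - np.array([2.0, 0.0])),  # cost-function gradients
               2.0 * (x - np.array([0.0, 2.0]))])
d, _ = mgda_direction(G @ tangent_projection(jac(x)))  # predictor direction
x = restore(x + 0.1 * d, c, jac)                 # controlled step + corrector
```

In a full solver the fixed step `0.1` would be replaced by the step-size control of the report (limiting constraint violation and enforcing decrease of every cost), and the predictor-corrector pair would be repeated until the projected MGDA direction vanishes, i.e. Pareto stationarity on the constraint manifold.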

### Bibliography

Jean-Antoine Désidéri. *Quasi-Riemannian Multiple Gradient Descent Algorithm for constrained multiobjective differentiable optimization*. Research Report RR-9159, Inria Sophia-Antipolis, Project-Team Acumes, 2018, pp. 1-41. ⟨hal-01740075⟩