Multiple Gradient Descent Algorithm for Multi-Objective Differentiable Optimization
A multi-objective differentiable optimization algorithm has been proposed to solve problems presenting a hierarchy in the cost functions {f_{j}(x)} (j = 1, …, M ≥ 2; x ∈ Ω_{a} ⊆ R^{n}). The first cost functions, for which j ∈ {1, …, m} (1 ≤ m < M), are considered to be of preponderant importance; they are referred to as the “primary cost functions” and are subject to a “prioritized” treatment, in contrast with the remaining ones, for which j ∈ {m+1, …, M}, referred to as the “secondary cost functions”. The problem is subject to the nonlinear constraints c_{k}(x) = 0 (k = 1, …, K). The cost functions {f_{j}(x)} and the constraint functions {c_{k}(x)} are all smooth, say C^{2}(Ω_{a}). The algorithm was first introduced in the case of two disciplines (m = 1, M = 2) and successfully applied to optimum shape design in compressible aerodynamics concurrently with a secondary discipline. More recently, the theory has been enhanced in both framework and established results, and the new developments will be presented in detail elsewhere. In short, an initial admissible point x^{∗}_{A} that is Pareto-optimal with respect to the primary cost functions alone (subject to the constraints) is assumed to be known. Subsequently, a small parameter ε ∈ [0,1] is introduced, and it is established that a continuum of Nash equilibria {x_{ε}} exists for all small enough ε. The continuum originates from x^{∗}_{A} (x_{0} = x^{∗}_{A}). Along the continuum: (i) the Pareto-stationarity condition satisfied exactly by the primary cost functions at x^{∗}_{A} is degraded by a term O(ε^{2}) only, whereas (ii) the secondary cost functions initially decrease, at least linearly in ε, with a negative derivative provided by the theory. Thus, the secondary cost functions are reduced while the primary cost functions are maintained at quasi-Pareto-optimality.
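To make the notion of Pareto-optimality of the starting point concrete, the following sketch (illustrative only, not the platform's code) checks the stationarity of the TC4 starting point x^{∗}_{A} = (1,0,0,0) given in the data file below. With a single primary cost function, constrained Pareto-stationarity reduces to the vanishing of the constraint-projected gradient of f_1; the function names are mine.

```python
# Hypothetical illustration: verify that x_A* = (1,0,0,0) of test case TC4
# is stationary for the primary cost f1(x) = 3 - (sum x_i^2 + x_1)
# subject to the constraint c(x) = sum x_i^2 - 1 = 0.

def grad_f1(x):
    # df1/dx_i = -2 x_i, with an extra -1 on the first component
    g = [-2.0 * xi for xi in x]
    g[0] -= 1.0
    return g

def grad_c(x):
    # dc/dx_i = 2 x_i
    return [2.0 * xi for xi in x]

def projected_gradient(x):
    # project grad f1 onto the tangent space of the constraint surface
    g, n = grad_f1(x), grad_c(x)
    scale = sum(gi * ni for gi, ni in zip(g, n)) / sum(ni * ni for ni in n)
    return [gi - scale * ni for gi, ni in zip(g, n)]

x_a_star = [1.0, 0.0, 0.0, 0.0]
print(projected_gradient(x_a_star))  # -> [0.0, 0.0, 0.0, 0.0]
```

The projected gradient vanishes identically, confirming that the point supplied as xa_star in the data file is admissible and stationary for the primary problem.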
In this report, we first recall the definition of the different steps of the computational Nash-game algorithm, assuming all functions have known first and second derivatives (here without proofs). We then show how, in the absence of explicitly known derivatives, the continuum of Nash equilibria can be calculated approximately via the construction of quadratic surrogate functions. Numerical examples are provided and commented on.
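The surrogate construction can be sketched as follows. This is a minimal, illustrative finite-difference version (not the platform's implementation): a local quadratic model q(x) = f(x0) + g·(x−x0) + ½(x−x0)ᵀH(x−x0), with gradient g and Hessian H estimated by central differences of step h (compare the hfdiff parameter in the data file below); all names are mine.

```python
# Illustrative sketch: build a local quadratic surrogate of a
# derivative-free function f around x0 by central finite differences.

def quadratic_surrogate(f, x0, h=1.0e-4):
    n = len(x0)

    def shifted(i, di, j=None, dj=0.0):
        # evaluate f at x0 perturbed along one or two coordinates
        x = list(x0)
        x[i] += di
        if j is not None:
            x[j] += dj
        return f(x)

    f0 = f(x0)
    # central-difference gradient
    g = [(shifted(i, h) - shifted(i, -h)) / (2.0 * h) for i in range(n)]
    # central-difference Hessian (diagonal and symmetric off-diagonal terms)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        H[i][i] = (shifted(i, h) - 2.0 * f0 + shifted(i, -h)) / h**2
        for j in range(i + 1, n):
            H[i][j] = H[j][i] = (
                shifted(i, h, j, h) - shifted(i, h, j, -h)
                - shifted(i, -h, j, h) + shifted(i, -h, j, -h)
            ) / (4.0 * h**2)

    def q(x):
        d = [xi - x0i for xi, x0i in zip(x, x0)]
        lin = sum(gi * di for gi, di in zip(g, d))
        quad = 0.5 * sum(d[i] * H[i][j] * d[j]
                         for i in range(n) for j in range(n))
        return f0 + lin + quad

    return q
```

For a cost function that is itself quadratic, as in test case TC4 below, such a surrogate reproduces the function exactly up to round-off.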
my_functions.f
contains the subroutines defining, in Fortran (F77 or F90),
the primary cost functions, the secondary cost functions and the constraint functions. The nomenclature of the Fortran procedures is the following:
xv
: vector \(\xv \in \RR^n\) of optimization variables;
ndim
: dimension \(n\) of vector \(\xv\);
fun
: vector of cost functions \(\{ f_j \}\) (\(j=1,\dots,m\) for primary cost functions and
\(j=m+1,\dots,M\) for secondary cost functions);
mfun
: number \(m\) of primary cost functions;
mtot
: total number \(M\) of cost functions;
cfun
: vector of constraint functions \(\{ c_k \}\) (\(k=1,\dots,K\)).
kc
: number \(K\) of constraints.
c ------------------------------------------------------------------
      SUBROUTINE PRIME_FUNCTIONS (xv, ndim, fun, mfun)
      implicit double precision (a-h,o-z)
      dimension xv(ndim), fun(mfun)
c     primary cost function generic values
c     TEST CASE TC4
      sum = 0.d0
      do i = 1,4
         sum = sum + xv(i)**2
      end do
      fun(1) = 3.d0 - (sum+xv(1))
      RETURN
      END
c ------------------------------------------------------------------
      SUBROUTINE SECOND_FUNCTIONS ( xv, ndim, fun, mfun, mtot)
      implicit double precision (a-h,o-z)
      dimension xv(ndim), fun(mtot)
c     secondary cost function generic values
c     TEST CASE TC4
      fun(2) = (xv(3)-1.d0)**2 + (xv(4)-1.d0)**2 - 1.d0
     ~         + 0.2d0 * (1.d0-xv(1))
      fun(3) = -4.d0 * (xv(3)-1.d0)**2 + (xv(4)-1.d0)**2 + 5.d0 - xv(1)
      RETURN
      END
c ------------------------------------------------------------------
      SUBROUTINE CONSTRAINTS ( xv, ndim, cfun, kc)
      implicit double precision (a-h,o-z)
      dimension xv(ndim), cfun(kc)
c     TC4 : unit sphere
      sum = 0.d0
      do i = 1,4
         sum = sum + xv(i)**2
      end do
      cfun(1) = sum - 1.d0
      RETURN
      END
my_testcase.dat
provides the parameters necessary to the compilation of the program and to its
execution. The file contains a series of elements, each specified by a keyword line followed by its value line(s):
testcase (character*80) :
Test-case TC4 : one prime and two secondary cost functions, and one constraint
ndim
4
np
2
mfun
1
mtot
3
kc
1
xa_star
1.d0
0.d0
0.d0
0.d0
hfdiff
1.d-4
hbox
1.d-3
Bkappa
10.d0
lstepmax
1000
TOL
1.d-4
Lambdamax
10
mumax
5
Before running the code online, please register for the Platform.
The computational core of this platform was originally developed by Jean-Antoine Desideri, implementing the algorithms described in the references cited in the synopsis. The web interface and remote-execution system were designed and implemented by the SED team of Inria Sophia Antipolis.