MGDA Platform

Multiple Gradient Descent Algorithm for Multi Objective Differentiable Optimization.

Synopsis

Generalities and principal notations

The following are given:

- $m$ and $n$, two arbitrary strictly-positive integers (the case $m>n$, or even $m \gg n$, is possible as well as the more usual case $m<n$);
- $m$ smooth cost-functions $\{ f_j({\mathbf x}) \}$ ($j=1,\dots,m$) of the variable ${\mathbf x}\in \Omega_a$ (at least $C^1(\Omega_a)$ and usually $C^2(\Omega_a)$);
- $\Omega_a \subseteq \mathbb{R}^n$: the admissible domain (an open subset of $\mathbb{R}^n$);
- the values and gradients-vectors of the cost-functions at some admissible point ${\mathbf x}_0 \in \Omega_a$:
\begin{displaymath} \{ f_j({\mathbf x}_0) \},\quad \{ g_j({\mathbf x}_0) \}, \end{displaymath}

where ${\mathbf g}_j({\mathbf x}_0) = \nabla f_j({\mathbf x}_0)$ denotes the gradient-vector of $f_j$ at ${\mathbf x}_0$.

The procedure computes a vector ${\mathbf d}\in \mathbb{R}^n$ such that the directional derivatives

\begin{displaymath} f_j' = {\mathbf g}_j. {\mathbf d} \end{displaymath}

(where $.$ stands for the usual Euclidean scalar product) are all strictly-positive. The procedure fails whenever no solution exists, that is, when the following three equivalent statements hold:
  1. There exists a convex combination of the gradient-vectors that is equal to 0.
  2. The null vector belongs to the convex hull of the gradient-vectors.
  3. The minimum Euclidean norm element in the convex hull of the gradient-vectors is the null vector.

In such a case, the point ${\mathbf x}_0$ is said to be Pareto-stationary (or weakly Pareto-optimal). The following development applies to the converse case, in which infinitely many solutions exist, and the procedure provides one.
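The equivalence of the three statements above can be explored numerically. The following is a minimal sketch (not the MGDA procedure itself): a Frank-Wolfe iteration for the minimum Euclidean-norm element of the convex hull of the gradient-vectors (statement 3), applied to two hypothetical two-vector configurations.

```python
import numpy as np

def min_norm_hull_element(G, iters=5000):
    """Frank-Wolfe search for the minimum-norm element of the convex
    hull of the columns of G (an n x m matrix of gradient-vectors)."""
    m = G.shape[1]
    alpha = np.full(m, 1.0 / m)              # barycentric starting point
    for k in range(iters):
        grad = 2.0 * G.T @ (G @ alpha)       # gradient of ||G alpha||^2
        s = np.zeros(m)
        s[np.argmin(grad)] = 1.0             # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)              # standard step-size rule
        alpha = (1.0 - gamma) * alpha + gamma * s
    return G @ alpha

# Pareto-stationary data: 0 lies in the hull of (1,0) and (-1,0)
d_stat = min_norm_hull_element(np.array([[1.0, -1.0], [0.0, 0.0]]))
# Regular data: the hull of (1,0) and (0,1) stays away from 0
d_reg = min_norm_hull_element(np.array([[1.0, 0.0], [0.0, 1.0]]))
```

In the first configuration the minimum-norm element is (numerically) the null vector, so the point is Pareto-stationary; in the second it is not, and a common descent direction exists.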

When the solution exists, the direction of $-{\mathbf d}$ is said to be a descent direction common to all cost-functions.

The MGDA procedure operates in ``basic mode'' if the parameter `logmode' is set to 0, or in ``logarithmic mode'' if the parameter `logmode' is set to 1. One defines the functions

\begin{displaymath} \phi_j({\mathbf x}) = \left\{ \begin{aligned} & f_j({\mathbf x}) & \hbox{if } {\sf logmode} = 0 \\ & \log \bigl( f_j({\mathbf x}) \bigr) & \hbox{if } {\sf logmode} = 1 \end{aligned}\right. \end{displaymath}

and their gradient-vectors

\begin{displaymath} \nabla \phi_j({\mathbf x}_0) = \left\{ \begin{aligned} & {\mathbf g}_j & \hbox{if } {\sf logmode} = 0 \\ & {\mathbf g}_j / f_j({\mathbf x}_0) & \hbox{if } {\sf logmode} = 1 \end{aligned}\right. \end{displaymath}

and the corresponding directional derivatives:

\begin{displaymath} \phi_j' = \nabla \phi_j({\mathbf x}_0) . {\mathbf d}. \end{displaymath}

The user's choice of the option logmode=1 presupposes that the problem has been formulated in such a way that all functions admit 0 as absolute minimum (or infimum). This can be achieved by subtracting the original infimum when it is finite, or by applying an exponential transform otherwise. In this way, the operational domain in function space is the set of strictly-positive values, over which the logarithms are well defined.

The procedure can be direct (or finite) or iterative depending on the input data. For example, if the gradient-vectors are linearly independent, the procedure is finite and the computed vector ${\mathbf d}$ is such that the directional derivatives $\{ \phi_j'\}$ associated with nonzero coefficients $\alpha_j$ are all equal to the same strictly-positive constant $\sigma$. This result ($\phi_j' = \sigma,\ \forall j$) is not true in general, but in many cases it is satisfied approximately. Then, if a sufficiently small step-size is used in the subsequent descent step, all cost-functions diminish by nearly equal amounts. This observation should guide the user's choice of the parameter logmode: the basic mode (logmode=0) should be chosen whenever, ideally, the descent step should reduce the cost-functions by the same amount; conversely, the logarithmic mode (logmode=1) should be chosen whenever, ideally, the descent step should reduce the cost-functions in the same proportion, or percentage.
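The effect of the logarithmic mode can be illustrated with a hypothetical set of cost values of very different scales: a step that lowers every $\phi_j = \log f_j$ by the same amount $\sigma$ lowers every $f_j$ by the same proportion $e^{-\sigma}$.

```python
import numpy as np

# Hypothetical illustration of the logarithmic mode: equal decrements in
# log space translate into equal relative reductions of the f_j's.
f = np.array([2.0, 50.0, 1.0e4])      # cost values of disparate magnitudes
sigma = 0.1
f_new = np.exp(np.log(f) - sigma)     # same decrement sigma for all phi_j
ratios = f_new / f                    # identical ratios exp(-sigma)
```

This is precisely why logmode=1 is the natural choice when the cost-functions should ideally decrease by the same percentage.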

The determination of vector ${\mathbf d}$ may be iterative, particularly when the number $m$ of cost-functions greatly exceeds the problem dimension $n$. Then a Quadratic-Programming (QP) problem is solved using the QuadProg++ C++ library. This QP-problem is formulated in a special basis of the spanned vector subspace; this basis is identified by the procedure according to a hierarchical principle, devised so that the vector directions associated with the convex hull of the gradient-vectors are usually almost all in the convex hull of the basis.

Component-wise scaling

When the optimization variables (the components of vector ${\mathbf x}_0$) do not share the same physical dimension, neither do the components of the gradient-vectors, and certain computed scalar products no longer make physical sense, since they are not invariant under changes of units. This makes the numerical result questionable. To avoid this, a prior component-wise scaling can be performed. This scaling is also highly recommended to reduce numerical ill-conditioning when the components are very disparate in magnitude. The procedure performs it when the parameter `iscale' is set to 1, and skips it when `iscale' is set to 0.

For iscale=1, a scale $s_i$ is associated with each component $i$ ($1 \leq i \leq n$):

\begin{displaymath} s_i = \max_{j=1}^m \left\vert \nabla \phi_{i,j}(\mathbf{x}_0) \right\vert \end{displaymath}

where the double subscript $i,j$ refers to the $i$th component of vector $\nabla \phi_j(\mathbf{x}_0)$. If logarithmic scaling has been applied prior to component-wise scaling, the scales also reflect this choice. The $n \times n$ diagonal matrix ${\mathbf S}$ is then defined:

\begin{displaymath} {\mathbf S}= \mathbf{Diag} (s_i) \end{displaymath}

and the vectors $\{ \nabla \phi_j(\mathbf{x}_0) \}$ are replaced by $\{ {\mathbf S}^{-1} \nabla \phi_j(\mathbf{x}_0) \}$. In this way, each scaled gradient-vector component falls in the interval $[-1,1]$, and each component attains either 1 or $-1$ for at least one $j$.
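The scaling can be sketched in a few lines; the gradient matrix below is the one from Example 1 in the example-runs section.

```python
import numpy as np

# Component-wise scaling sketch: s_i is the largest magnitude of the
# i-th component over all gradient-vectors (the columns of G).
G = np.array([[1.0, 1.0, 0.3, -0.6, -1.0],
              [0.0, 1.4, 1.5,  2.4,  1.0]])   # Example 1 gradients
s = np.abs(G).max(axis=1)                     # diagonal of matrix S
U = G / s[:, None]                            # columns are S^{-1} g_j
```

After division, every row of U lies in $[-1,1]$ and attains magnitude 1 at least once, as stated above.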

Conclusions

After the user's choice of the parameters logmode and iscale, the vectors processed by MGDA are the following:

${\mathbf u}_j=\mathbf{S}^{-1}\nabla \phi_j(\mathbf{x}_0)\quad(j=1,\dots,m)$

where ${\mathbf S} = {\mathbf I}$ (the identity matrix) if iscale=0. To these vectors is applied a Gram-Schmidt orthogonalization process (with special prioritization and normalization of vectors) to get the orthogonal vectors

$\{ {\mathbf v}_j \} \quad (j=1,\dots,{\sf r})$

where ${\sf r}\leq \min(m,n)$ is the rank, or dimension of the subspace spanned by the gradient-vectors. The procedure provides the value of ${\sf r}$ and the value of an integer $\mu$ (${\sf r}\leq \mu \leq m$) such that a direction based on $\{ {\mathbf v}_j \}$ ($j=1,\dots,{\sf r}$) is a descent direction common to the subfamily $\{ {\mathbf u}_j \}$ ($j=1,\dots,\mu$), but perhaps not to the entire family $\{ {\mathbf u}_j \}$ ($j=1,\dots,m$).
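The rank-detection stage can be sketched with a plain Gram-Schmidt process (without MGDA's hierarchical prioritization, which is specific to the method), applied here to the Example 1 gradients:

```python
import numpy as np

def gram_schmidt_rank(U, tol=1e-10):
    """Plain Gram-Schmidt on the columns of U with rank detection:
    near-zero residuals are dropped as linearly dependent."""
    basis = []
    for u in U.T:                                 # columns u_j
        w = u - sum((v @ u) * v for v in basis)   # remove projections
        if np.linalg.norm(w) > tol:               # independent vector
            basis.append(w / np.linalg.norm(w))
    return np.array(basis).T, len(basis)

U = np.array([[1.0, 1.0, 0.3, -0.6, -1.0],
              [0.0, 1.4, 1.5,  2.4,  1.0]])       # Example 1 gradients
V, r = gram_schmidt_rank(U)
```

For these data the detected rank is 2 (full rank), in agreement with the run_report.txt of Example 1.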

When $\mu=m$, the common descent direction has been found after a finite number of steps, and the procedure stops. Otherwise, one proceeds with the formulation of the QP-problem and its solution. The descent direction ${\mathbf d}$ is then calculated by what we refer to as the dual method.

Guidance to step-size selection

The step-size in the descent iteration is the user's responsibility. However, the procedure provides some statistical information to guide this choice.

We define the average function value $\bar \phi$ and the standard deviation $\bar \sigma$:

\begin{displaymath} \bar \phi = \frac{1}{m} \sum_{j=1}^m \phi_j({\mathbf x}_0), \qquad \bar \sigma = \sqrt{ \frac{1}{m} \sum_{j=1}^m \bigl( \phi_j({\mathbf x}_0) - \bar \phi \bigr)^2 }. \end{displaymath}

If $\bar \sigma$ is nonzero, it is considered a pertinent scale for variations in the $\phi_j$'s. The procedure then provides a step vector ${\mathbf d}'$ (in "physical" or "unscaled" units) proportional to vector ${\mathbf d}$, for which the first variation of the average function value, $D \bar \phi$ (the value of the differential for $d{\mathbf x}={\mathbf d}'$), is equal to $\bar \sigma$. In practice, a fraction $\rho$ ($0 < \rho < 1$) of this step vector can be used as a descent step in a basic update of the optimization variable:

\begin{displaymath} {\mathbf x}_1 = {\mathbf x}_0 - \rho \, {\mathbf d}'. \end{displaymath}

The vector ${\mathbf d}'$ is stored in solution.txt upon exit.
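The rescaling can be reproduced from the quantities printed in Example 1 of the example-runs section. This sketch assumes $D \bar \phi = \bar{\mathbf g} \cdot {\mathbf d}'$, where $\bar{\mathbf g}$ is the average gradient reported as u_mean:

```python
import numpy as np

# Reproducing the reference step vector d' of Example 1 (assumed
# relation: scale d so that the first variation of the average
# function value along d' equals the standard deviation sigma_bar).
g_mean = np.array([0.14, 1.26])                  # average gradient (u_mean)
d = np.array([0.064766839371266427,              # ascent direction d
              0.12953367875916288])
sigma_bar = np.sqrt(2.0)                         # std dev of f_j = j, j=1..5
d_prime = d * sigma_bar / (g_mean @ d)           # D phi_bar = sigma_bar
```

The resulting vector is close to the one written in solution.txt for Example 1, (0.5317, 1.0633).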

Such a reference step-sizing was used successfully in basic mode in a problem where one had

\begin{displaymath} f_j({\mathbf x}) = f(t_j,{\mathbf x}) \quad (j=1,\dots,m), \end{displaymath}

where $f(t,{\mathbf x})$ was smooth in $t$, and $\{ t_j \}$ was a regular discretization of a $t$-interval. More generally, one expects this step-sizing to be adequate in multi-point optimization.

Conversely, if $\bar \sigma$ is equal to zero, the ascent vector ${\mathbf d}$ is stored in the solution file and no guidance to step-size selection is provided.

More sophisticated step-sizing can be elaborated when second derivatives can be estimated, for instance via meta-modeling. Then a Newton-method-like iteration can be implemented.

Accuracy

The authors of the numerical software have tested cases corresponding to $n \leq 6$ and $m \leq 25$, for which a regularization constant of the order of $10^{-10}$ was sufficient for the formulation of the QP-problem. The accuracy of the results was of the same order. Cases of larger dimensions can certainly be handled by the software. However, increasing these dimensions will inevitably result in worse-conditioned systems, for which the solution accuracy cannot be guaranteed, since this effect has not yet been thoroughly examined.
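The role of such a regularization constant can be sketched on a toy matrix (this is an illustration of the general technique, not of MGDA's internal matrix $\mathbf{H}$): a singular Gram-type matrix defeats a Cholesky factorization (as performed by DPOTRF), while adding a small constant to the diagonal restores positive-definiteness.

```python
import numpy as np

# Toy illustration of diagonal regularization: H below is rank-1 and
# therefore only positive-semidefinite; Cholesky fails on it, but
# succeeds once a small eps is added to the diagonal.
H = np.array([[1.0, 1.0],
              [1.0, 1.0]])                 # singular Gram-type matrix
eps = 1.0e-10                              # regularization constant
L = np.linalg.cholesky(H + eps * np.eye(2))
```

The perturbation of order $10^{-10}$ changes the solution by a comparable order, which is consistent with the accuracy reported above.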

Bibliography

1
J.-A. Désidéri.
Multiple-gradient descent algorithm (MGDA).
Research Report 6953, INRIA, 2009 (revised version November 5, 2012).
/hal.inria.fr/inria-00389811/fr/.

2
Jean-Antoine Désidéri.
Multiple-gradient descent algorithm (MGDA) for multiobjective optimization.
Comptes Rendus de l'Académie des Sciences Paris, 350:313-318, 2012.

3
J.-A. Désidéri.
Numerical Methods for Differential Equations, Optimization, and Technological Problems, volume 34 of Modeling, Simulation and Optimization for Science and Technology, Fitzgibbon, W.; Kuznetsov, Y.A.; Neittaanmäki, P.; Pironneau, O. Eds., chapter Multiple-Gradient Descent Algorithm (MGDA) for Pareto-Front Identification.
Springer-Verlag, 2014.
J. Périaux and R. Glowinski Jubilees.

4
Jean-Antoine Désidéri.
Révision de l'algorithme de descente à gradients multiples (MGDA) par orthogonalisation hiérarchique.
Research Report 8710, INRIA, April 2015.
/hal.inria.fr/hal-01139994.

5
Jean-Antoine Desideri and Régis Duvigneau.
Parametric optimization of pulsating jets in unsteady flow by Multiple-Gradient Descent Algorithm (MGDA).
In J. Périaux, W. Fitzgibbon, B. Chetverushkin, and O. Pironneau, editors, Numerical Methods for Differential Equations, Optimization, and Technological Problems, Modeling, Simulation and Optimization for Science and Technology. 2017.
/hal.inria.fr/hal-01414741.

Running the code

The MGDA code can be run in a one-time-pass mode using the current web interface or in an iterative mode after downloading the binary.

One-time-pass mode

In the one-time-pass mode, the user uploads an input file providing function and gradient values in the following format:

  • title: a user's datafile identification string (char)
  • nvec = number $m$ of specified gradients (int)
  • ndim = number $n$ of variables (int)
  • nvec datasets formatted as follows:
    • one blank line
    • index of the dataset (int)
    • function value (double)
    • first component of the gradient (double)
    • ...
    • last (ndim) component of the gradient (double)
One can refer to the example section for typical input files.

The user also provides the logical parameters logmode and iscale, as well as the regularisation constant eps_Hdiag, unless the default value of 1E-10 is adequate.

After execution, three formatted files are returned:

  • run_report.txt: this file contains the description of the execution flow.
  • solution.txt: this file usually contains the components of vector ${\mathbf d}'$ defined in the synopsis (see section guidance to step size selection); exceptionally, when $\bar \sigma = 0$, the ascent vector ${\mathbf d}$ is returned instead.
  • data.json: this file contains the elements of the array Phi(i, j) (i=1..ndim) (j=1..nvec) (standing for the $i$th component of $\nabla \phi_j(\mathbf{x}_0)$) and the array u{i,j} (standing for the $i$th component of ${\mathbf u}_j$) for plotting purposes (see section statistics).


Iterative mode

This mode is devised to include the MGDA process within an iterative optimization loop by a descent method. This mode is made freely available for academic usage according to the terms of the MGDA Platform License.

In the procedure, the user

  • sends an email from a valid academic address to mgda-contact@inria.fr indicating:
    • affiliation,
    • position,
    • usage context,
    • kind of operating system (Ubuntu, Fedora, macOS).
    This email should also contain the following phrase: I accept the terms of the MGDA Platform License.
  • receives the corresponding binary.

Use of MGDA in Fortran code

The MGDA library features the following three optional configuration procedures and one execution procedure.

Optional configuration procedures

Subroutine mgdaSetVerboseLevel sets the level of verbosity of the console output. verbose_level = 1 prints the whole content of the file run_report.txt to the screen.


subroutine mgdaSetVerboseLevel(verbose_level)
    use iso_c_binding, only: C_INT
    implicit none
    integer(kind=C_INT), intent(in) :: verbose_level
end subroutine mgdaSetVerboseLevel
          

Subroutine mgdaSetInputFile opens the input file into unit 99 and reads the test-case title.


subroutine mgdaSetInputFile(in_size, in_file)
    use iso_c_binding, only: C_INT, C_CHAR
    implicit none
    integer(kind=C_INT), intent(in) :: in_size
    character(kind=C_CHAR), dimension(in_size), intent(in) :: in_file
end subroutine mgdaSetInputFile
          

Subroutine mgdaSetOutputDir sets the path to the output directory in which the output files are to be written.


subroutine mgdaSetOutputDir(out_size, out_dir)
    use iso_c_binding, only: C_INT, C_CHAR
    implicit none
    integer(kind=C_INT), intent(in) :: out_size
    character(kind=C_CHAR), dimension(out_size), intent(in) :: out_dir
end subroutine mgdaSetOutputDir
          

Execution procedure

Subroutine mgdaProcess is the core of the library. It takes as input:

  • ndim the number $n$ of variables
  • nvec the number $m$ of specified gradients
  • fun the array of size nvec containing function values $f_j$
  • gvect the array of size (ndim, nvec) containing the ndim components of the nvec gradients $g_{ij}$
  • logmode the parameter activating the logarithmic mode
  • iscale the parameter activating the component-wise scaling of the gradients
  • eps_Hdiag the parameter ensuring the positive-definite nature of matrix $\mathbf{H}$

The subroutine returns the following output:

  • desc the vector of size ndim containing the scaled ascent direction

subroutine mgdaProcess(ndim, nvec, fun, gvect, logmode, iscale, eps_Hdiag, desc)
    use iso_c_binding, only: C_INT, C_DOUBLE
    implicit none
    integer(kind=C_INT), intent(in) :: ndim, nvec, logmode, iscale
    real(kind=C_DOUBLE), intent(in) :: eps_Hdiag
    real(kind=C_DOUBLE), dimension(*), intent(in) :: fun
    real(kind=C_DOUBLE), dimension(ndim,*), intent(in) :: gvect
    real(kind=C_DOUBLE), dimension(*), intent(inout) :: desc
end subroutine mgdaProcess
          

Example of program

In order to use the above subroutines in a Fortran program, one just needs to use the module mgdaPilot.
As far as the compilation is concerned, one has to link to libmgdaCore.so.

The following snippet shows how to use MGDA in practice.


program UseOfMGDA
    use mgdaPilot
    implicit none

    !===========================================
    ! nvec : number of specified vectors defining the convex hull
    ! ndim : dimension of these vectors
    integer nvec, ndim

    ! Loop variables
    integer i, j, jvect

    ! Options
    integer logmode, iscale
    double precision eps_Hdiag

    ! Function and gradient arrays
    double precision, allocatable :: fun(:)
    double precision, allocatable :: gvect(:,:)

    ! descent direction array
    double precision, allocatable :: desc(:)

    ! Input file and output dir names
    character(len=32), parameter :: input_file_name = "./input_file.txt"
    character(len=32), parameter :: output_dir_name = "./"

    !===========================================
    ! Open input file in unit 99
    call mgdaSetInputFile(LEN(TRIM(input_file_name)), input_file_name)

    ! Read dimensions
    read (99,*) nvec
    read (99,*) ndim

    ! Allocation of all arrays
    allocate(fun(nvec))
    allocate(gvect(ndim, nvec))
    allocate(desc(ndim))

    ! Read specified function values and gradients (in physical dimensions)
    do jvect = 1,nvec
        read (99,*) j
        read (99,*) fun(j)
        do i = 1,ndim
            read (99,*) gvect(i,j)
        end do
    end do

    ! Options
    logmode = 0
    iscale = 1
    eps_Hdiag = 1.d-10

    !===========================================
    ! Set Verbose Level
    call mgdaSetVerboseLevel(1)

    !===========================================
    ! Set path to output directory
    call mgdaSetOutputDir(LEN(TRIM(output_dir_name)), output_dir_name)

    !===========================================
    ! Apply MGDA
    call mgdaProcess(ndim, nvec, fun, gvect, logmode, iscale, eps_Hdiag, desc)

    !===========================================
    ! Free memory
    deallocate(fun)
    deallocate(gvect)
    deallocate(desc)

    stop
end program

Example runs

Examples 1 to 6 were drawn from Appendix A of the report

- Révision de l'algorithme de descente à gradients multiples (MGDA) par orthogonalisation hiérarchique, J.-A. Désidéri, Inria Report No. 8710, Sept. 2016, /hal.inria.fr/hal-01139994

in which they are more extensively commented (in French), and Example 7 corresponds to a particular dataset used in the chapter

- Parametric optimization of pulsating jets in unsteady flow by Multiple-Gradient Descent Algorithm (MGDA), J.-A. Désidéri and R. Duvigneau, Numerical Methods for Differential Equations, Optimization, and Technological Problems, Modeling, Simulation and Optimization for Science and Technology, J. Périaux; W. Fitzgibbon; B. Chetverushkin; O. Pironneau Eds., 2017.

In each case, the following are provided:

  1. the input file input_file.txt,
  2. the formatted output file run_report.txt,
  3. when applicable, that is, when the solution exists, the output file solution.txt providing an ascent step-vector according to the step-size selection guidance described in the synopsis.

Example 1

This example involves $m=5$ gradient-vectors of dimension $n=2$. Function values are arbitrarily set to $f_j=j$.

The code is run in basic mode (logmode=0), without component scaling (iscale=0), and in the QP problem, the regularization parameter is not modified (eps_Hdiag $= 10^{-10}$).

Reading through the formatted output run_report.txt, the main observed facts are:

  1. After Gram-Schmidt orthogonalization, the rank is found equal to 2 (full rank).
  2. The permutation (2,5,3,4,1) indicates that the originally-indexed vectors 2 and 5 are chosen to form the new basis, anticipating the formulation of a QP-problem.
  3. $\mu=4$: the direction of search based on $\{ {\mathbf{u}}_2,{\mathbf{u}}_5 \}$ is a common descent direction for the first $\mu=4$ reordered vectors, but not for the last one (originally indexed ${\mathbf{u}}_1$). Therefore further investigation is necessary to conclude.
  4. The examination of the Etav matrix (here, more precisely, its last column, since $m-\mu=1$) excludes an immediate conclusion: the components of the last vector are not all negative (the situation is not one of Pareto-stationarity), and they do not sum up to a positive number (the last vector is not in the convex hull of the new basis): the QP-problem must be solved for a definite conclusion.
  5. Upon return from the QuadProg procedure, the common descent direction for the whole family is found and written in solution.txt.

EXAMPLE 1 with f_j=j (j=1,...,5) logmode=0 iscale=0
5
2

1
1.d0
1.d0
0.d0

2
2.d0
1.d0
1.4d0

3
3.d0
0.3d0
1.5d0

4
4.d0
-0.6d0
2.4d0

5
5.d0
-1.d0
1.d0

                  
======================================================================
          Listing Input Parameters and Data
======================================================================

Test-case specification :
EXAMPLE 1 with f_j=j (j=1,...,5) logmode=0 iscale=

 Number of vectors (m) : nvec =            5
 Space dimension (n)   : ndim =            2

 logmode = 0 ; log-gradients not considered

 First function value =    1.0000000000000000
 Last function value =     5.0000000000000000
 Mean function value, PHIbar =     3.0000000000000000
 Standard deviation, SIGMAbar  =     1.4142135623730951

 iscale = 0 ; gradients are not scaled component-wise

Specified gradient vectors (specified Gvect matrix) :
   1.0000   1.0000   0.3000  -0.6000  -1.0000
   0.0000   1.4000   1.5000   2.4000   1.0000

======================================================================
          Scaling the Gradients
======================================================================

Component-wise gradient scales :
 scale(           1 ) =    1.0000000000000000
 scale(           2 ) =    1.0000000000000000

Matrix U to be processed by MGDA and duplicate to be saved :
   1.0000   1.0000   0.3000  -0.6000  -1.0000
   0.0000   1.4000   1.5000   2.4000   1.0000

   1.0000   1.0000   0.3000  -0.6000  -1.0000
   0.0000   1.4000   1.5000   2.4000   1.0000

======================================================================
          Directional derivatives associated with average gradient
======================================================================

 Average gradient, u_mean (dimensionless and in phys. dims.):
  0.13999999999999996       0.13999999999999996
   1.2600000000000000        1.2600000000000000

 Associated directional derivatives, u_mean^t u_j, j=1..nvec :
           1  0.13999999999999996
           2   1.9039999999999997
           3   1.9320000000000002
           4   2.9399999999999999
           5   1.1200000000000001

 Normalized gradient of variance, gradvar
 -0.37333333333333341
  0.20000000000000001

 Associated directional derivatives, gradvar^t u_j, j=1..nvec :
           1 -0.37333333333333341
           2  -9.3333333333333435E-002
           3  0.18800000000000003
           4  0.70399999999999996
           5  0.57333333333333347

======================================================================
          Gram-Schmidt Orthogonalization
======================================================================

 Parameter r_max (upper bound on rank) =            2
 TOL =    1.0000000000000000E-002
 TOL only serves whenever 1-TOL < C_{L,L} <= 1 does occur

Returning from Gram-Schmidt :

    Parameter r (lower bound on rank) =    2
    Number of vectors admitting a known common descent direction, mu =    4
    Defect, m-mu =    1

Permutation of u-vectors =
   2   5   3   4   1

Reordered u-vectors (U matrix) :
   1.0000  -1.0000   0.3000  -0.6000   1.0000
   1.4000   1.0000   1.5000   2.4000   0.0000


Orthogonal basis (V matrix) :
   1.0000  -1.3125
   1.4000   0.9375


Coefficient-vector beta =
   0.4678   0.5322

Coefficient-vector alfa (w.r.t. reordered U matrix) =
   0.3846   0.6154

Coefficient-vector alpha (w.r.t. original U0 matrix) =
   0.0000   0.3846   0.0000   0.0000   0.6154

Provisional direction of search
 omega_1 = V * beta = U * alfa = U0 * alpha =
  -0.2308
   1.1538

Directional derivatives = dot products of omega_1
 with column-vectors of reordered U matrix :
   1.3846   1.3846   1.6615   2.9077  -0.2308
Constant a =   1.2000

======================================================================
          Preparation of QP formulation
======================================================================

UtU matrix :
   2.9600   0.4000
   0.4000   2.0000

Factorization : upon exit from DPOTRF, info =
   0

Inversion : upon exit from DPOTRS, info =
   0

Matrix W :
   0.4167   0.4167
  -0.5833   0.4167

Eta-vectors (U-tilde matrix) :
   1.0000   0.0000   0.7500   0.7500   0.4167
   0.0000   1.0000   0.4500   1.3500  -0.5833


 Regularisation constant:
 EPS_HDIAG =    1.0000000000000000E-010

Matrix H :

    1.00    0.00    0.75    0.75    0.42

    0.00    1.00    0.45    1.35   -0.58

    0.75    0.45    0.77    1.17    0.05

    0.75    1.35    1.17    2.39   -0.47

    0.42   -0.58    0.05   -0.47    0.51

======================================================================
          Solution of QP problem
======================================================================

 Alpha_qp =
   0.0000000000000000
  0.40932642487562865
   0.0000000000000000
   0.0000000000000000
  0.59067357512182672
 MODULE OF LARGEST NEGLECTED ALPHA_QP COMPONENT (XDUMP) :    4.0526984452610356E-012

 Omega-tilde :
  0.24611398963409445
   6.4766839387896458E-002

======================================================================
          Ascent direction d
======================================================================

 Ascent direction d = A_n * omega (scaled and raw) :
   6.4766839371266427E-002   6.4766839371266427E-002
  0.12953367875916288       0.12953367875916288

 Directional derivatives = dot products of d
 with column-vectors of reordered U matrix :
  0.24611398963409445
   6.4766839387896458E-002
  0.21373056995012427
  0.27202072539923106
   6.4766839371266427E-002

======================================================================
          Guidelines to Step-Size selection
======================================================================

 STATISTICS :
 Concerning (transformed) input function values :
    - average, PHIbar :    3.0000000000000000
    - standard deviation, SIGMAbar :    1.4142135623730951
 Changes in input physical variables for which
 the expected INCREASE in average function value
 is equal to the standard deviation
  0.53165923391018377
   1.0633184679568803

 FOR AN APPROPRIATE DESCENT STEP, IT IS SUGGESTED TO SUBTRACT
 TO VECTOR X A FRACTION OF THE ABOVE VECTOR STORED IN FILE
 ./solution.txt

                  
  0.53165923391018377
   1.0633184679568803
                  

Example 2

This example involves $m=5$ gradient-vectors of dimension $n=2$. Function values are arbitrarily set to $f_j=j$.

The code is run in basic mode (logmode=0), without component scaling (iscale=0), and in the QP problem, the regularization parameter is not modified (eps_Hdiag $= 10^{-10}$).

Reading through the formatted output run_report.txt, the main observed facts are:

  1. After Gram-Schmidt orthogonalization, the rank is found equal to 2 (full rank).
  2. The permutation (4,1,3,2,5) indicates that the originally-indexed vectors 4 and 1 are chosen to form the new basis, anticipating the formulation of a QP-problem.
  3. $\mu=4$: the direction of search based on $\{ {\mathbf{u}}_4,{\mathbf{u}}_1 \}$ is a common descent direction for the first $\mu=4$ reordered vectors, but not for the last one (originally indexed ${\mathbf{u}}_5$). Therefore further investigation is necessary to conclude.
  4. The examination of the Etav matrix (here, more precisely, its last column, since $m-\mu=1$) indicates that the last vector has only negative components: Pareto-stationarity is detected, and the program quits: no solution exists.

EXAMPLE 2 with f_j=j (j=1,...,5) logmode=0 iscale=0
5
2

1
1.d0
1.d0
0.d0

2
2.d0
1.d0
1.4d0

3
3.d0
0.24d0
1.2d0

4
4.d0
-0.45d0
1.8d0

5
5.d0
-1.d0
-0.4d0

                  
======================================================================
          Listing Input Parameters and Data
======================================================================

Test-case specification :
EXAMPLE 2 with f_j=j (j=1,...,5) logmode=0 iscale=

 Number of vectors (m) : nvec =            5
 Space dimension (n)   : ndim =            2

 logmode = 0 ; log-gradients not considered

 First function value =    1.0000000000000000
 Last function value =     1.0000000000000000
 Mean function value, PHIbar =     1.0000000000000000
 Standard deviation, SIGMAbar  =     0.0000000000000000

 iscale = 0 ; gradients are not scaled component-wise

Specified gradient vectors (specified Gvect matrix) :
   1.0000   1.0000   0.2400  -0.4500  -1.0000
   0.0000   1.4000   1.2000   1.8000  -0.4000

======================================================================
          Scaling the Gradients
======================================================================

Component-wise gradient scales :
 scale(           1 ) =    1.0000000000000000
 scale(           2 ) =    1.0000000000000000

Matrix U to be processed by MGDA and duplicate to be saved :
   1.0000   1.0000   0.2400  -0.4500  -1.0000
   0.0000   1.4000   1.2000   1.8000  -0.4000

   1.0000   1.0000   0.2400  -0.4500  -1.0000
   0.0000   1.4000   1.2000   1.8000  -0.4000

======================================================================
          Directional derivatives associated with average gradient
======================================================================

 Average gradient, u_mean (dimensionless and in phys. dims.):
  0.15800000000000006       0.15800000000000006
  0.79999999999999993       0.79999999999999993

 Associated directional derivatives, u_mean^t u_j, j=1..nvec :
           1  0.15800000000000006
           2   1.2780000000000000
           3  0.99791999999999992
           4   1.3689000000000000
           5 -0.47800000000000009

 Normalized gradient of variance, gradvar
   0.0000000000000000
   0.0000000000000000

 Associated directional derivatives, gradvar^t u_j, j=1..nvec :
           1   0.0000000000000000
           2   0.0000000000000000
           3   0.0000000000000000
           4   0.0000000000000000
           5   0.0000000000000000

======================================================================
          Gram-Schmidt Orthogonalization
======================================================================

 Parameter r_max (upper bound on rank) =            2
 TOL =    1.0000000000000000E-002
 TOL only serves whenever 1-TOL < C_{L,L} <= 1 does occur

Returning from Gram-Schmidt :

    Parameter r (lower bound on rank) =    2
    Number of vectors admitting a known common descent direction, mu =    4
    Defect, m-mu =    1

Permutation of u-vectors =
   4   1   3   2   5

Reordered u-vectors (U matrix) :
  -0.4500   1.0000   0.2400   1.0000  -1.0000
   1.8000   0.0000   1.2000   1.4000  -0.4000


Orthogonal basis (V matrix) :
  -0.4500   0.8324
   1.8000   0.2081


Coefficient-vector beta =
   0.1762   0.8238

Coefficient-vector alfa (w.r.t. reordered U matrix) =
   0.2714   0.7286

Coefficient-vector alpha (w.r.t. original U0 matrix) =
   0.7286   0.0000   0.0000   0.2714   0.0000

Provisional direction of search
 omega_1 = V * beta = U * alfa = U0 * alpha =
   0.6065
   0.4885

Directional derivatives = dot products of omega_1
 with column-vectors of reordered U matrix :
   0.6065   0.6065   0.7318   1.2904  -0.8019
Constant a =   1.2067

======================================================================
          Preparation of QP formulation
======================================================================

UtU matrix :
   3.4425  -0.4500
  -0.4500   1.0000

Factorization : upon exit from DPOTRF, info =
   0

Inversion : upon exit from DPOTRS, info =
   0

Matrix W :
  -0.0000   0.5556
   1.0000   0.2500

Eta-vectors (U-tilde matrix) :
   1.0000   0.0000   0.6667   0.7778  -0.2222
   0.0000   1.0000   0.5400   1.3500  -1.1000


 TEST OF PARETO STATIONARITY FULFILLED : NO SOLUTIONS EXIST
                  

                  

Example 3

This example involves $m=5$ gradient-vectors of dimension $n=8$, whose components were generated by random draw. Function values are arbitrarily set to $f_j=j$.

The code is run in basic mode (logmode=0), without component scaling (iscale=0), and in the QP problem, the regularization parameter is not modified (eps_Hdiag $= 10^{-10}$).

Reading through the formatted output run_report.txt, the main observed facts are:

  1. After Gram-Schmidt orthogonalization, the rank is found equal to 5 (full rank).
  2. The gradient-vectors are linearly-independent and $\mu=m=5$: the so-called ``provisional'' direction of search is indeed a common ascent direction associated with the same directional derivative 0.1048, and the program quits successfully.

EXAMPLE 3 with f_j=j (j=1,...,5), gradients by random row, logmode=0 iscale=0
           5
           8

           1
       1.d0
 -0.99998474121093750
 -0.73692464828491211
  0.51121044158935547
 -8.26997756958007813E-002
  6.55341148376464844E-002
 -0.56208181381225586
 -0.90591096878051758
  0.35772895812988281

           2
       2.d0
  0.35859251022338867
  0.86938571929931641
 -0.23299598693847656
  3.88326644897460938E-002
  0.66193056106567383
 -0.93085622787475586
 -0.89307689666748047
  5.94000816345214844E-002

           3
       3.d0
  0.34229850769042969
 -0.98460388183593750
 -0.23316907882690430
 -0.86631584167480469
 -0.16502809524536133
  0.37354516983032227
  0.17795324325561523
  0.86087274551391602

           4
       4.d0
  0.69233369827270508
  5.38573265075683594E-002
 -0.81607055664062500
  0.30783748626708984
 -0.16800165176391602
  0.40238094329833984
  0.82064151763916016
  0.52439594268798828

           5
       5.d0
 -0.47509431838989258
 -0.90507125854492188
  0.47216367721557617
 -0.34353160858154297
  0.26527690887451172
  0.51282072067260742
  0.98207473754882813
 -0.26932287216186523

                  
======================================================================
          Listing Input Parameters and Data
======================================================================

Test-case specification :
EXAMPLE 3 with f_j=j (j=1,...,5), gradients by ran

 Number of vectors (m) : nvec =            5
 Space dimension (n)   : ndim =            8

 logmode = 0 ; log-gradients not considered

 First function value =    1.0000000000000000
 Last function value =     5.0000000000000000
 Mean function value, PHIbar =     3.0000000000000000
 Standard deviation, SIGMAbar  =     1.4142135623730951

 iscale = 0 ; gradients are not scaled component-wise

Specified gradient vectors (specified Gvect matrix) :
  -1.0000   0.3586   0.3423   0.6923  -0.4751
  -0.7369   0.8694  -0.9846   0.0539  -0.9051
   0.5112  -0.2330  -0.2332  -0.8161   0.4722
  -0.0827   0.0388  -0.8663   0.3078  -0.3435
   0.0655   0.6619  -0.1650  -0.1680   0.2653
  -0.5621  -0.9309   0.3735   0.4024   0.5128
  -0.9059  -0.8931   0.1780   0.8206   0.9821
   0.3577   0.0594   0.8609   0.5244  -0.2693

======================================================================
          Scaling the Gradients
======================================================================

Component-wise gradient scales :
 scale(           1 ) =    1.0000000000000000
 scale(           2 ) =    1.0000000000000000
 scale(           3 ) =    1.0000000000000000
 scale(           4 ) =    1.0000000000000000
 scale(           5 ) =    1.0000000000000000
 scale(           6 ) =    1.0000000000000000
 scale(           7 ) =    1.0000000000000000
 scale(           8 ) =    1.0000000000000000

Matrix U to be processed by MGDA and duplicate to be saved :
  -1.0000   0.3586   0.3423   0.6923  -0.4751
  -0.7369   0.8694  -0.9846   0.0539  -0.9051
   0.5112  -0.2330  -0.2332  -0.8161   0.4722
  -0.0827   0.0388  -0.8663   0.3078  -0.3435
   0.0655   0.6619  -0.1650  -0.1680   0.2653
  -0.5621  -0.9309   0.3735   0.4024   0.5128
  -0.9059  -0.8931   0.1780   0.8206   0.9821
   0.3577   0.0594   0.8609   0.5244  -0.2693

  -1.0000   0.3586   0.3423   0.6923  -0.4751
  -0.7369   0.8694  -0.9846   0.0539  -0.9051
   0.5112  -0.2330  -0.2332  -0.8161   0.4722
  -0.0827   0.0388  -0.8663   0.3078  -0.3435
   0.0655   0.6619  -0.1650  -0.1680   0.2653
  -0.5621  -0.9309   0.3735   0.4024   0.5128
  -0.9059  -0.8931   0.1780   0.8206   0.9821
   0.3577   0.0594   0.8609   0.5244  -0.2693

======================================================================
          Directional derivatives associated with average gradient
======================================================================

 Average gradient, u_mean (dimensionless and in phys. dims.):
  -1.6370868682861327E-002  -1.6370868682861327E-002
 -0.34067134857177733      -0.34067134857177733
  -5.9772300720214847E-002  -5.9772300720214847E-002
 -0.18917541503906249      -0.18917541503906249
  0.13194236755371094       0.13194236755371094
  -4.0838241577148438E-002  -4.0838241577148438E-002
   3.6336326599121095E-002   3.6336326599121095E-002
  0.30661497116088865       0.30661497116088865

 Associated directional derivatives, u_mean^t u_j, j=1..nvec :
           1  0.36087700927000699
           2 -0.18435171812971021
           3  0.74103883323655284
           4  0.11286905757510796
           5  0.32004010889422718

 Normalized gradient of variance, gradvar
   9.2234802246093733E-002
  -7.6788107554117843E-002
  -4.4077873229980448E-002
  -1.6843922932942724E-002
  -2.8696441650390619E-002
  0.23220281600952147
  0.36597932179768888
  -5.2607186635335292E-002

 Associated directional derivatives, gradvar^t u_j, j=1..nvec :
           1 -0.53954779801882369
           2 -0.58918293895576423
           3  0.24336053184612566
           4  0.46151299179208161
           5  0.49570620498746704

======================================================================
          Gram-Schmidt Orthogonalization
======================================================================

 Parameter r_max (upper bound on rank) =            5
 TOL =    1.0000000000000000E-002
 TOL only serves whenever 1-TOL < C_{L,L} <= 1 does occur

Returning from Gram-Schmidt :

    Parameter r (lower bound on rank) =    5
    Number of vectors admitting a known common descent direction, mu =    5
    Defect, m-mu =    0

Permutation of u-vectors =
   3   2   5   4   1

Reordered u-vectors (U matrix) :
   0.3423   0.3586  -0.4751   0.6923  -1.0000
  -0.9846   0.8694  -0.9051   0.0539  -0.7369
  -0.2332  -0.2330   0.4722  -0.8161   0.5112
  -0.8663   0.0388  -0.3435   0.3078  -0.0827
  -0.1650   0.6619   0.2653  -0.1680   0.0655
   0.3735  -0.9309   0.5128   0.4024  -0.5621
   0.1780  -0.8931   0.9821   0.8206  -0.9059
   0.8609   0.0594  -0.2693   0.5244   0.3577


Orthogonal basis (V matrix) :
   0.3423   0.3535  -0.1274   0.2952  -0.2560
  -0.9846   0.2930  -0.1408   0.1568  -0.1773
  -0.2332  -0.2330   0.1783  -0.3454  -0.0775
  -0.8663  -0.2426  -0.1721   0.2106   0.2774
  -0.1650   0.4048   0.4416   0.2030   0.0556
   0.3735  -0.5253  -0.1062   0.0543  -0.1618
   0.1780  -0.5601   0.1863   0.4709  -0.0172
   0.8609   0.3086  -0.1430   0.0984   0.2416


Coefficient-vector beta =
   0.0370   0.0894   0.2920   0.1903   0.3913

Coefficient-vector alfa (w.r.t. reordered U matrix) =
  -0.0556   0.2679   0.2729   0.3134   0.2013

Coefficient-vector alpha (w.r.t. original U0 matrix) =
   0.2013   0.2679  -0.0556   0.3134   0.2729

Provisional direction of search
 omega_1 = V * beta = U * alfa = U0 * alpha =
  -0.0370
  -0.0909
  -0.0735
   0.0446
   0.2194
  -0.1172
   0.0938
   0.1310

Directional derivatives = dot products of omega_1
 with column-vectors of reordered U matrix :
   0.1048   0.1048   0.1048   0.1048   0.1048
 PROVISIONAL DIRECTION OMEGA_1 IS A COMMON DESCENT DIRECTION
 QP-PROBLEM ELUDED

======================================================================
          Guidelines to Step-Size selection
======================================================================

 STATISTICS :
 Concerning (transformed) input function values :
    - average, PHIbar :    3.0000000000000000
    - standard deviation, SIGMAbar :    1.4142135623730951
 Changes in input physical variables for which
 the expected INCREASE in average function value
 is equal to the standard deviation
 -0.49848547185295566
  -1.2267900849909423
 -0.99085457253373088
  0.60168839470418201
   2.9599257872675295
  -1.5803909252403672
   1.2653192674215041
   1.7665889733529032

 FOR AN APPROPRIATE DESCENT STEP, IT IS SUGGESTED TO SUBTRACT
 TO VECTOR X A FRACTION OF THE ABOVE VECTOR STORED IN FILE
 ./solution.txt

                  
 -0.49848547185295566
  -1.2267900849909423
 -0.99085457253373088
  0.60168839470418201
   2.9599257872675295
  -1.5803909252403672
   1.2653192674215041
   1.7665889733529032
                  

Example 4

This example involves $m=8$ gradient-vectors of dimension $n=5$, whose components were generated by random draw. Function values are arbitrarily set to $f_j=j$.

The code is run in basic mode (logmode=0), without component scaling (iscale=0), and in the QP problem, the regularization parameter is not modified (eps_Hdiag $= 10^{-10}$).

Reading through the formatted output run_report.txt, the main observed facts are:

  1. After Gram-Schmidt orthogonalization, the rank is found equal to 5 (full rank).
  2. $\mu=7$ and the permutation is (3,5,4,6,7,1,2,8): the new basis is made of the originally indexed vectors $\{{\mathbf{u}}_3,{\mathbf{u}}_5,{\mathbf{u}}_4, {\mathbf{u}}_6,{\mathbf{u}}_7\}$; the direction of search based on these vectors is a common descent direction for the first $\mu=7$ vectors of the reordered set, but not for the last one, ${\mathbf{u}}_8$. Therefore further investigation is necessary to conclude.
  3. The directional derivatives corresponding to the direction of search associated with the new basis vectors are equal for the first 5, greater for the subsequent 2 (the vectors left unused by the hierarchical orthogonalization process are redundant), and negative for the last, at this stage still uncertain, case.
  4. All 5 components of the last column vector of the matrix of Eta-vectors (U-tilde) are negative: Pareto stationarity is detected and the program quits without formulating the QP-problem: no solution exists.
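
The Pareto-stationarity criterion invoked here (statement 3 of the synopsis: the minimum-norm element of the convex hull of the gradient-vectors is the null vector) can also be approximated numerically. The sketch below is a hypothetical illustration on toy data, not the test implemented by the program; it uses plain Frank-Wolfe iterations over the simplex of convex-combination coefficients.

```python
import numpy as np

def min_norm_in_hull(G, iters=4000):
    """Approximate the minimum-norm element of the convex hull of the
    columns of G with plain Frank-Wolfe iterations over the simplex."""
    m = G.shape[1]
    alpha = np.full(m, 1.0 / m)              # start at the barycentre
    for k in range(iters):
        grad = G.T @ (G @ alpha)             # gradient of 0.5*||G alpha||^2
        j = int(np.argmin(grad))             # best vertex of the simplex
        gamma = 2.0 / (k + 2.0)              # standard step size
        alpha *= (1.0 - gamma)
        alpha[j] += gamma
    return G @ alpha

# Toy stationary case: u2 = -u1, so 0 = 0.5*u1 + 0.5*u2 lies in the hull.
G_stat = np.array([[1.0, -1.0],
                   [2.0, -2.0]])
print(np.linalg.norm(min_norm_in_hull(G_stat)))   # near 0: Pareto-stationary

# Toy non-stationary case: the hull of (1,0) and (1,1) stays away from 0.
G_ok = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
print(np.linalg.norm(min_norm_in_hull(G_ok)))     # near 1: a solution exists
```

When the returned norm is (numerically) zero, the null vector belongs to the convex hull and no common descent direction exists, which is the situation the program reports for this example.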

EXAMPLE 4 with f_j=j (j=1,...,8), gradients by random row, logmode=0 iscale=0
           8
           5

           1
       1.d0
 -0.99998474121093750
 -0.73692464828491211
  0.51121044158935547
 -8.26997756958007813E-002
  6.55341148376464844E-002

           2
       2.d0
 -0.56208181381225586
 -0.90591096878051758
  0.35772895812988281
  0.35859251022338867
  0.86938571929931641

           3
       3.d0
 -0.23299598693847656
  3.88326644897460938E-002
  0.66193056106567383
 -0.93085622787475586
 -0.89307689666748047

           4
       4.d0
  5.94000816345214844E-002
  0.34229850769042969
 -0.98460388183593750
 -0.23316907882690430
 -0.86631584167480469

           5
       5.d0
 -0.16502809524536133
  0.37354516983032227
  0.17795324325561523
  0.86087274551391602
  0.69233369827270508

           6
       6.d0
  5.38573265075683594E-002
 -0.81607055664062500
  0.30783748626708984
 -0.16800165176391602
  0.40238094329833984

           7
       7.d0
  0.82064151763916016
  0.52439594268798828
 -0.47509431838989258
 -0.90507125854492188
  0.47216367721557617

           8
       8.d0
 -0.34353160858154297
  0.26527690887451172
  0.51282072067260742
  0.98207473754882813
 -0.26932287216186523
                  
======================================================================
          Listing Input Parameters and Data
======================================================================

Test-case specification :
EXAMPLE 4 with f_j=j (j=1,...,8), gradients by ran

 Number of vectors (m) : nvec =            8
 Space dimension (n)   : ndim =            5

 logmode = 0 ; log-gradients not considered

 First function value =    1.0000000000000000
 Last function value =     8.0000000000000000
 Mean function value, PHIbar =     4.5000000000000000
 Standard deviation, SIGMAbar  =     2.2912878474779199

 iscale = 0 ; gradients are not scaled component-wise

Specified gradient vectors (specified Gvect matrix) :
  -1.0000  -0.5621  -0.2330   0.0594  -0.1650   0.0539   0.8206  -0.3435
  -0.7369  -0.9059   0.0388   0.3423   0.3735  -0.8161   0.5244   0.2653
   0.5112   0.3577   0.6619  -0.9846   0.1780   0.3078  -0.4751   0.5128
  -0.0827   0.3586  -0.9309  -0.2332   0.8609  -0.1680  -0.9051   0.9821
   0.0655   0.8694  -0.8931  -0.8663   0.6923   0.4024   0.4722  -0.2693

======================================================================
          Scaling the Gradients
======================================================================

Component-wise gradient scales :
 scale(           1 ) =    1.0000000000000000
 scale(           2 ) =    1.0000000000000000
 scale(           3 ) =    1.0000000000000000
 scale(           4 ) =    1.0000000000000000
 scale(           5 ) =    1.0000000000000000

Matrix U to be processed by MGDA and duplicate to be saved :
  -1.0000  -0.5621  -0.2330   0.0594  -0.1650   0.0539   0.8206  -0.3435
  -0.7369  -0.9059   0.0388   0.3423   0.3735  -0.8161   0.5244   0.2653
   0.5112   0.3577   0.6619  -0.9846   0.1780   0.3078  -0.4751   0.5128
  -0.0827   0.3586  -0.9309  -0.2332   0.8609  -0.1680  -0.9051   0.9821
   0.0655   0.8694  -0.8931  -0.8663   0.6923   0.4024   0.4722  -0.2693

  -1.0000  -0.5621  -0.2330   0.0594  -0.1650   0.0539   0.8206  -0.3435
  -0.7369  -0.9059   0.0388   0.3423   0.3735  -0.8161   0.5244   0.2653
   0.5112   0.3577   0.6619  -0.9846   0.1780   0.3078  -0.4751   0.5128
  -0.0827   0.3586  -0.9309  -0.2332   0.8609  -0.1680  -0.9051   0.9821
   0.0655   0.8694  -0.8931  -0.8663   0.6923   0.4024   0.4722  -0.2693

======================================================================
          Directional derivatives associated with average gradient
======================================================================

 Average gradient, u_mean (dimensionless and in phys. dims.):
 -0.17121541500091553      -0.17121541500091553
 -0.11431962251663208      -0.11431962251663208
  0.13372290134429932       0.13372290134429932
  -1.4782249927520752E-002  -1.4782249927520752E-002
   5.9135317802429199E-002   5.9135317802429199E-002

 Associated directional derivatives, u_mean^t u_j, j=1..nvec :
           1  0.32891616298408621
           2  0.29374762187239867
           3   8.4916207466193327E-002
           4 -0.22874883258540990
           5   3.7563972190810091E-002
           6  0.15151496264363118
           7 -0.22268567624126945
           8   6.6623657627417288E-002

 Normalized gradient of variance, gradvar
  0.16867944929334852
  0.16157614522510105
  -5.6285646226671010E-002
   6.2745948632558196E-002
   1.5484915839301221E-002

 Associated directional derivatives, gradvar^t u_j, j=1..nvec :
           1 -0.32069441512799951
           2 -0.22535766672494773
           3 -0.14252106955548055
           4   9.2700669272078234E-002
           5   8.7239933209597609E-002
           6 -0.14441033041477330
           7  0.20041808508962811
           8   1.3502021167842795E-002

======================================================================
          Gram-Schmidt Orthogonalization
======================================================================

 Parameter r_max (upper bound on rank) =            5
 TOL =    1.0000000000000000E-002
 TOL only serves whenever 1-TOL < C_{L,L} <= 1 does occur

Returning from Gram-Schmidt :

    Parameter r (lower bound on rank) =    5
    Number of vectors admitting a known common descent direction, mu =    7
    Defect, m-mu =    1

Permutation of u-vectors =
   3   5   4   6   7   1   2   8

Reordered u-vectors (U matrix) :
  -0.2330  -0.1650   0.0594   0.0539   0.8206  -1.0000  -0.5621  -0.3435
   0.0388   0.3735   0.3423  -0.8161   0.5244  -0.7369  -0.9059   0.2653
   0.6619   0.1780  -0.9846   0.3078  -0.4751   0.5112   0.3577   0.5128
  -0.9309   0.8609  -0.2332  -0.1680  -0.9051  -0.0827   0.3586   0.9821
  -0.8931   0.6923  -0.8663   0.4024   0.4722   0.0655   0.8694  -0.2693


Orthogonal basis (V matrix) :
  -0.2330  -0.1899  -0.0807  -0.0525  -0.0159
   0.0388   0.2508   0.3040  -0.0382   0.0076
   0.6619   0.3554  -0.2368  -0.0080  -0.0088
  -0.9309   0.2041   0.0935   0.0203  -0.0180
  -0.8931   0.1112  -0.2386  -0.0151   0.0167


Coefficient-vector beta =
   0.0004   0.0029   0.0037   0.1665   0.8265

Coefficient-vector alfa (w.r.t. reordered U matrix) =
   0.1500   0.3089   0.2287   0.2697   0.0426

Coefficient-vector alpha (w.r.t. original U0 matrix) =
   0.0000   0.0000   0.1500   0.2287   0.3089   0.2697   0.0426   0.0000

Provisional direction of search
 omega_1 = V * beta = U * alfa = U0 * alpha =
  -0.0229
   0.0018
  -0.0082
  -0.0109
   0.0104

Directional derivatives = dot products of omega_1
 with column-vectors of reordered U matrix :
   0.0008   0.0008   0.0008   0.0008   0.0008   0.0189   0.0135  -0.0094
Constant a =  16.4348

======================================================================
          Preparation of QP formulation
======================================================================

UtU matrix :
   2.1580  -1.2489   0.3384  -0.0434  -0.0645
  -1.2489   1.4189  -0.8577  -0.1250  -0.4763
   0.3384  -0.8577   1.8950  -0.8887   0.4980
  -0.0434  -0.1250  -0.8887   0.9538  -0.1880
  -0.0645  -0.4763   0.4980  -0.1880   2.2163

Factorization : upon exit from DPOTRF, info =
   0

Inversion : upon exit from DPOTRS, info =
   0

Matrix W :
  -4.3588   0.6167  -0.9699  -2.2268   1.6465
  -8.7919   1.1151  -2.8564  -3.6980   4.0795
  -6.4555   0.3346  -2.8526  -2.8364   2.5408
  -7.4724  -0.2919  -2.8436  -3.5892   3.5702
  -0.8293   0.3943  -0.4567  -0.9350   0.8696

Eta-vectors (U-tilde matrix) :
   1.0000   0.0000   0.0000   0.0000   0.0000   3.7005   2.1773  -1.4667
   0.0000   1.0000   0.0000   0.0000   0.0000   7.0829   5.1304  -2.8792
   0.0000   0.0000   1.0000   0.0000   0.0000   5.1515   3.4967  -2.6264
   0.0000   0.0000   0.0000   1.0000   0.0000   6.7645   5.2642  -3.4552
   0.0000   0.0000   0.0000   0.0000   1.0000   0.4396   0.3664  -0.9972


 TEST OF PARETO STATIONARITY FULFILLED : NO SOLUTIONS EXIST
                  

                  

Example 5

This example involves $m=15$ gradient-vectors of dimension $n=5$, whose components were generated by random draw. Function values are arbitrarily set to $f_j=j$.

The code is run in basic mode (logmode=0), without component scaling (iscale=0), and in the QP problem, the regularization parameter is not modified (eps_Hdiag $= 10^{-10}$).

Reading through the formatted output run_report.txt, the main observed facts are:

  1. After Gram-Schmidt orthogonalization, the rank is found equal to 5 (full rank).
  2. $\mu=10$ and the permutation is (14,13,10,4,9,2,15,1,5,6,7,11,8,12,3): the new basis is made of the originally indexed vectors $\{{\mathbf{u}}_{14},{\mathbf{u}}_{13},{\mathbf{u}}_{10}, {\mathbf{u}}_4,{\mathbf{u}}_9\}$; the direction of search based on these vectors is a common descent direction for the first $\mu=10$ vectors of the reordered set, but not for the last five. Therefore further investigation is necessary to conclude.
  3. The directional derivatives corresponding to the direction of search associated with the new basis vectors are equal for the first 5, greater for the subsequent 5 (the vectors left unused by the hierarchical orthogonalization process are redundant), and negative for the last five, at this stage still uncertain, cases.
  4. All 5 components of the last two column vectors of the matrix of Eta-vectors (U-tilde) are negative: Pareto stationarity is detected and the program quits without formulating the QP-problem: no solution exists.
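
The bookkeeping between the coefficient-vector alfa (w.r.t. the reordered U matrix) and alpha (w.r.t. the original U0 matrix) amounts to scattering alfa through the permutation printed by the Gram-Schmidt step. A minimal sketch, using the permutation and the alfa values taken from this run's report:

```python
import numpy as np

# Permutation and alfa as printed in the report (converted to 0-based indices).
perm = np.array([14, 13, 10, 4, 9, 2, 15, 1,
                 5, 6, 7, 11, 8, 12, 3]) - 1
alfa = np.array([0.2839, 0.3311, 0.3107, 0.0441, 0.0303])

# Scatter the 5 basis coefficients back onto the original vector indices.
alpha = np.zeros(perm.size)
alpha[perm[:alfa.size]] = alfa
print(alpha)          # reproduces the reported alpha w.r.t. the U0 matrix
```

The nonzero entries land at original positions 4, 9, 10, 13 and 14 (1-based), exactly as in the ``Coefficient-vector alpha'' line above.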

EXAMPLE 5 with f_j=j (j=1,...,15), gradients by random row, logmode=0 iscale=0
          15
           5

           1
       1.d0
 -0.99998474121093750
 -0.73692464828491211
  0.51121044158935547
 -8.26997756958007813E-002
  6.55341148376464844E-002

           2
       2.d0
 -0.56208181381225586
 -0.90591096878051758
  0.35772895812988281
  0.35859251022338867
  0.86938571929931641

           3
       3.d0
 -0.23299598693847656
  3.88326644897460938E-002
  0.66193056106567383
 -0.93085622787475586
 -0.89307689666748047

           4
       4.d0
  5.94000816345214844E-002
  0.34229850769042969
 -0.98460388183593750
 -0.23316907882690430
 -0.86631584167480469

           5
       5.d0
 -0.16502809524536133
  0.37354516983032227
  0.17795324325561523
  0.86087274551391602
  0.69233369827270508

           6
       6.d0
  5.38573265075683594E-002
 -0.81607055664062500
  0.30783748626708984
 -0.16800165176391602
  0.40238094329833984

           7
       7.d0
  0.82064151763916016
  0.52439594268798828
 -0.47509431838989258
 -0.90507125854492188
  0.47216367721557617

           8
       8.d0
 -0.34353160858154297
  0.26527690887451172
  0.51282072067260742
  0.98207473754882813
 -0.26932287216186523

           9
       9.d0
 -0.50592231750488281
  0.96510028839111328
  0.44532060623168945
  0.50671148300170898
  0.30303668975830078

          10
      10.d0
 -0.85462856292724609
  0.26326942443847656
  0.76941394805908203
 -0.45458030700683594
 -0.12717723846435547

          11
      11.d0
  0.53298950195312500
 -4.45365905761718750E-002
 -0.52445125579833984
 -0.45018672943115234
 -0.28147029876708984

          12
      12.d0
 -0.66698598861694336
 -2.69656181335449219E-002
  0.79531240463256836
  0.81841611862182617
 -0.87887144088745117

          13
      13.d0
  0.80930614471435547
  9.04560089111328125E-003
  3.25837135314941406E-002
 -0.36193418502807617
  0.97328376770019531

          14
      14.d0
 -1.20468139648437500E-002
 -0.46771144866943359
 -0.81853437423706055
  0.89552831649780273
 -0.85250186920166016

          15
      15.d0
  1.41382217407226563E-003
 -0.23171615600585938
 -0.44583654403686523
  0.82763481140136719
  5.94944953918457031E-002
                  
======================================================================
          Listing Input Parameters and Data
======================================================================

Test-case specification :
EXAMPLE 5 with f_j=j (j=1,...,15), gradients by ra

 Number of vectors (m) : nvec =           15
 Space dimension (n)   : ndim =            5

 logmode = 0 ; log-gradients not considered

 First function value =    1.0000000000000000
 Last function value =     15.000000000000000
 Mean function value, PHIbar =     8.0000000000000000
 Standard deviation, SIGMAbar  =     4.3204937989385739

 iscale = 0 ; gradients are not scaled component-wise

Specified gradient vectors (specified Gvect matrix) :
  -1.0000  -0.5621  -0.2330   0.0594  -0.1650   0.0539   0.8206  -0.3435
  -0.7369  -0.9059   0.0388   0.3423   0.3735  -0.8161   0.5244   0.2653
   0.5112   0.3577   0.6619  -0.9846   0.1780   0.3078  -0.4751   0.5128
  -0.0827   0.3586  -0.9309  -0.2332   0.8609  -0.1680  -0.9051   0.9821
   0.0655   0.8694  -0.8931  -0.8663   0.6923   0.4024   0.4722  -0.2693

  -0.5059  -0.8546   0.5330  -0.6670   0.8093  -0.0120   0.0014
   0.9651   0.2633  -0.0445  -0.0270   0.0090  -0.4677  -0.2317
   0.4453   0.7694  -0.5245   0.7953   0.0326  -0.8185  -0.4458
   0.5067  -0.4546  -0.4502   0.8184  -0.3619   0.8955   0.8276
   0.3030  -0.1272  -0.2815  -0.8789   0.9733  -0.8525   0.0595

======================================================================
          Scaling the Gradients
======================================================================

Component-wise gradient scales :
 scale(           1 ) =    1.0000000000000000
 scale(           2 ) =    1.0000000000000000
 scale(           3 ) =    1.0000000000000000
 scale(           4 ) =    1.0000000000000000
 scale(           5 ) =    1.0000000000000000

Matrix U to be processed by MGDA and duplicate to be saved :
  -1.0000  -0.5621  -0.2330   0.0594  -0.1650   0.0539   0.8206  -0.3435
  -0.7369  -0.9059   0.0388   0.3423   0.3735  -0.8161   0.5244   0.2653
   0.5112   0.3577   0.6619  -0.9846   0.1780   0.3078  -0.4751   0.5128
  -0.0827   0.3586  -0.9309  -0.2332   0.8609  -0.1680  -0.9051   0.9821
   0.0655   0.8694  -0.8931  -0.8663   0.6923   0.4024   0.4722  -0.2693

  -1.0000  -0.5621  -0.2330   0.0594  -0.1650   0.0539   0.8206  -0.3435
  -0.7369  -0.9059   0.0388   0.3423   0.3735  -0.8161   0.5244   0.2653
   0.5112   0.3577   0.6619  -0.9846   0.1780   0.3078  -0.4751   0.5128
  -0.0827   0.3586  -0.9309  -0.2332   0.8609  -0.1680  -0.9051   0.9821
   0.0655   0.8694  -0.8931  -0.8663   0.6923   0.4024   0.4722  -0.2693

  -0.5059  -0.8546   0.5330  -0.6670   0.8093  -0.0120   0.0014
   0.9651   0.2633  -0.0445  -0.0270   0.0090  -0.4677  -0.2317
   0.4453   0.7694  -0.5245   0.7953   0.0326  -0.8185  -0.4458
   0.5067  -0.4546  -0.4502   0.8184  -0.3619   0.8955   0.8276
   0.3030  -0.1272  -0.2815  -0.8789   0.9733  -0.8525   0.0595

  -0.5059  -0.8546   0.5330  -0.6670   0.8093  -0.0120   0.0014
   0.9651   0.2633  -0.0445  -0.0270   0.0090  -0.4677  -0.2317
   0.4453   0.7694  -0.5245   0.7953   0.0326  -0.8185  -0.4458
   0.5067  -0.4546  -0.4502   0.8184  -0.3619   0.8955   0.8276
   0.3030  -0.1272  -0.2815  -0.8789   0.9733  -0.8525   0.0595

======================================================================
          Directional derivatives associated with average gradient
======================================================================

 Average gradient, u_mean (dimensionless and in phys. dims.):
 -0.13770650227864584      -0.13770650227864584
  -2.9871431986490886E-002  -2.9871431986490886E-002
   8.8239447275797522E-002   8.8239447275797522E-002
  0.11088876724243164       0.11088876724243164
  -2.2074890136718751E-002  -2.2074890136718751E-002

 Associated directional derivatives, u_mean^t u_j, j=1..nvec :
           1  0.19420918779866647
           2  0.15660127116526992
           3   5.8265367420972207E-003
           4 -0.11201763106881418
           5  0.10744753564061588
           6   1.6612090537197817E-002
           7 -0.28137938824100578
           8  0.19947968184361950
           9  0.12963381936172785
          10  0.13011591065524045
          11 -0.16205026279712911
          12  0.27298588117291728
          13 -0.17046132248530435
          14   6.1526030735118795E-002
          15   5.7848700180405409E-002

 Normalized gradient of variance, gradvar
   9.6387362480163594E-002
   4.9040031433105466E-002
  -8.3730196952819830E-002
  0.11291962464650469
  -4.3680644035339354E-002

 Associated directional derivatives, gradvar^t u_j, j=1..nvec :
           1 -0.18752945057238249
           2 -0.12603939851461143
           3 -0.14207885164133435
           4  0.11646469276520294
           5   5.4479829392069676E-002
           6  -9.7151195659588047E-002
           7   2.1770885485617977E-002
           8   5.9618209574841749E-002
           9   5.2580812973701315E-003
          10 -0.17966368759631074
          11   5.4561670880010169E-002
          12  -1.3981678121430183E-003
          13  -7.6608935938035083E-003
          14  0.18279885164609669
          15  0.11696034302981008

======================================================================
          Gram-Schmidt Orthogonalization
======================================================================

 Parameter r_max (upper bound on rank) =            5
 TOL =    1.0000000000000000E-002
 TOL only serves whenever 1-TOL < C_{L,L} <= 1 does occur

Returning from Gram-Schmidt :

    Parameter r (lower bound on rank) =    5
    Number of vectors admitting a known common descent direction, mu =   10
    Defect, m-mu =    5

Permutation of u-vectors =
  14  13  10   4   9   2  15   1
   5   6   7  11   8  12   3

Reordered u-vectors (U matrix) :
  -0.0120   0.8093  -0.8546   0.0594  -0.5059  -0.5621   0.0014  -1.0000
  -0.4677   0.0090   0.2633   0.3423   0.9651  -0.9059  -0.2317  -0.7369
  -0.8185   0.0326   0.7694  -0.9846   0.4453   0.3577  -0.4458   0.5112
   0.8955  -0.3619  -0.4546  -0.2332   0.5067   0.3586   0.8276  -0.0827
  -0.8525   0.9733  -0.1272  -0.8663   0.3030   0.8694   0.0595   0.0655

  -0.1650   0.0539   0.8206   0.5330  -0.3435  -0.6670  -0.2330
   0.3735  -0.8161   0.5244  -0.0445   0.2653  -0.0270   0.0388
   0.1780   0.3078  -0.4751  -0.5245   0.5128   0.7953   0.6619
   0.8609  -0.1680  -0.9051  -0.4502   0.9821   0.8184  -0.9309
   0.6923   0.4024   0.4722  -0.2815  -0.2693  -0.8789  -0.8931


Orthogonal basis (V matrix) :
  -0.0120   0.5377  -0.0209  -0.0070  -0.0147
  -0.4677  -0.1486  -0.0545  -0.0128   0.0074
  -0.8185  -0.2489   0.0161  -0.0308  -0.0128
   0.8955   0.0539   0.0038  -0.0438   0.0061
  -0.8525   0.3695   0.0188  -0.0093   0.0148


Coefficient-vector beta =
   0.0002   0.0010   0.1230   0.1565   0.7194

Coefficient-vector alfa (w.r.t. reordered U matrix) =
   0.2839   0.3311   0.3107   0.0441   0.0303

Coefficient-vector alpha (w.r.t. original U0 matrix) =
   0.0000   0.0000   0.0000   0.0441   0.0000   0.0000   0.0000   0.0000
   0.0303   0.3107   0.0000   0.0000   0.3311   0.2839   0.0000

Provisional direction of search
 omega_1 = V * beta = U * alfa = U0 * alpha =
  -0.0137
  -0.0036
  -0.0125
  -0.0018
   0.0117

Directional derivatives = dot products of omega_1
 with column-vectors of reordered U matrix :
   0.0005   0.0005   0.0005   0.0005   0.0005   0.0161   0.0056   0.0109
   0.0052   0.0034  -0.0001  -0.0031  -0.0076  -0.0124  -0.0140
Constant a =   6.8714

======================================================================
          Preparation of QP formulation
======================================================================

UtU matrix :
   2.4176  -1.1945  -1.0413   1.1748  -0.6144
  -1.1945   1.7344  -0.6235  -0.7397  -0.2747
  -1.0413  -0.6235   1.6145  -0.5020   0.7602
   1.1748  -0.7397  -0.5020   1.8950  -0.5188
  -0.6144  -0.2747   0.7602  -0.5188   1.7343

Factorization : upon exit from DPOTRF, info =
   0

Inversion : upon exit from DPOTRS, info =
   0

Matrix W :
  -7.8733  -2.6467  -7.0742  -0.5518   6.6030
  -8.7901  -2.6384  -8.2994  -1.3284   8.1449
  -8.9311  -2.7133  -7.4047  -1.5104   7.1378
  -1.1300   0.1957  -1.5490  -0.5985   0.7671
  -0.8961   0.4490  -0.7812   0.3694   0.9045

Eta-vectors (U-tilde matrix) :
   1.0000   0.0000   0.0000   0.0000   0.0000   9.8352   3.6923   6.6855
   0.0000   1.0000   0.0000   0.0000   0.0000  10.9667   3.6842   7.1351
   0.0000   0.0000   1.0000   0.0000   0.0000  10.4931   3.0919   7.7378
   0.0000   0.0000   0.0000   1.0000   0.0000   0.3560   0.1939   0.2937
   0.0000   0.0000   0.0000   0.0000   1.0000   0.7362   0.6025   0.1946

   3.1483   2.3078  -0.8710  -1.9786  -3.9454  -6.5583  -8.3344
   3.4836   2.6254   0.3940  -1.9094  -5.4345  -8.9122  -9.5855
   2.7841   2.5797  -0.4969  -2.0850  -4.8547  -7.3684  -7.8945
  -0.0002  -0.2882   0.8151   0.2549  -1.1486  -1.6475  -0.8824
   1.1207  -0.3533  -0.0360  -0.5087   0.1455  -0.5284  -1.4425


 TEST OF PARETO STATIONARITY FULFILLED : NO SOLUTIONS EXIST
                  

                  

Example 6

This example involves $m=15$ gradient-vectors of dimension $n=5$. Function values are arbitrarily set to $f_j=j$.

The code is run in basic mode (logmode=0), without component scaling (iscale=0), and in the QP problem, the regularization parameter is not modified (eps_Hdiag $= 10^{-10}$).

This example is meant to mimic a system in which the gradient-vectors are the instantiations at different times of a smooth periodic function. More precisely, let

\begin{displaymath} \varphi(t) = a_\varphi + b_\varphi \cos \left( 2 \pi t \right) \end{displaymath}

with $a_\varphi = 1$ and $b_\varphi = 0.1$. The gradient-vectors are specifically the following:

\begin{displaymath} \left( {\mathbf u}_j \right)_i = \varphi \left( t_j + \frac{i-1}{n} \right) \qquad (i=1,\dots,n), \end{displaymath}

where $t_j = \frac{j-1}{m}$ ($j=1,\dots,m$).

Reading through the formatted output run_report.txt, the main observed facts are:

  1. In this very special example, due to the many symmetries, the components of the average gradient, or gradient of the average, are all equal, and the associated directional derivatives are also all equal (and positive). The average gradient is therefore an evident common ascent direction.
  2. In the Gram-Schmidt orthogonalization process, ${\mathbf{u}}_4$ was chosen as the first element of the orthogonal basis. The test $c_{1,1} \geq 1$ was then relaxed to $c_{1,1} \geq 0.99$, since $c_{1,1}$ was found to be strictly less than 1 but very close to it. Consequently, the orthogonalization process was immediately interrupted, ${\mathbf{u}}_4$ being detected as an acceptable common ascent direction. However, the average gradient was a better solution.
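
The first observation can be checked numerically. The sketch below assumes the periodic model $\left( {\mathbf u}_j \right)_i = a_\varphi + b_\varphi \cos \bigl( 2\pi ( t_j + \frac{i-1}{n} ) \bigr)$ (which reproduces the listed input values) and verifies that the average gradient has equal components and equal, positive directional derivatives:

```python
import numpy as np

# Rebuild Example 6's gradient matrix from the assumed periodic model
# (u_j)_i = a_phi + b_phi*cos(2*pi*(t_j + (i-1)/n)), t_j = (j-1)/m.
m, n = 15, 5
a_phi, b_phi = 1.0, 0.1
t = np.arange(m) / m                     # t_j = (j-1)/m
i = np.arange(n)                         # component offsets (i-1)
U = a_phi + b_phi * np.cos(2*np.pi*(t[None, :] + i[:, None]/n))   # n x m

u_mean = U.mean(axis=1)                  # average gradient
print(u_mean)                            # all components equal a_phi = 1
print(u_mean @ U)                        # directional derivatives, all equal
```

Because the cosine is sampled over full periods both in $j$ and in $i$, its contributions cancel: every component of the average gradient equals $a_\varphi$, and every directional derivative equals $n\,a_\varphi$.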

EXAMPLE 6 with f_j=j (j=1,...,15), periodic data in gradients (a_phi=1; b_phi=0.1), logmode=0 iscale=0
          15
           5

           1
       1.d0
   1.1000000000000001
   1.0309016994374947
  0.91909830056250530
  0.91909830056250530
   1.0309016994374947

           2
       2d0
   1.0913545457642602
   1.0669130606358859
  0.94999999999999996
  0.90218523992661948
  0.98954715367323465

           3
       3.d0
   1.0669130606358859
   1.0913545457642602
  0.98954715367323465
  0.90218523992661948
  0.95000000000000007

           4
       4.d0
   1.0309016994374947
   1.1000000000000001
   1.0309016994374947
  0.91909830056250519
  0.91909830056250530

           5
       5.d0
  0.98954715367323465
   1.0913545457642602
   1.0669130606358859
  0.94999999999999996
  0.90218523992661948

           6
       6.d0
  0.95000000000000007
   1.0669130606358859
   1.0913545457642602
  0.98954715367323465
  0.90218523992661948

           7
       7.d0
  0.91909830056250530
   1.0309016994374947
   1.1000000000000001
   1.0309016994374947
  0.91909830056250530

           8
       8.d0
  0.90218523992661948
  0.98954715367323465
   1.0913545457642602
   1.0669130606358859
  0.94999999999999996

           9
       9.d0
  0.90218523992661948
  0.95000000000000007
   1.0669130606358859
   1.0913545457642602
  0.98954715367323465

          10
      10.0d0
  0.91909830056250530
  0.91909830056250530
   1.0309016994374947
   1.1000000000000001
   1.0309016994374947

          11
      11.d0
  0.94999999999999996
  0.90218523992661948
  0.98954715367323465
   1.0913545457642602
   1.0669130606358859

          12
      12.d0
  0.98954715367323454
  0.90218523992661948
  0.95000000000000007
   1.0669130606358859
   1.0913545457642602

          13
      13.d0
   1.0309016994374947
  0.91909830056250530
  0.91909830056250530
   1.0309016994374947
   1.1000000000000001

          14
      14.d0
   1.0669130606358859
  0.95000000000000007
  0.90218523992661948
  0.98954715367323465
   1.0913545457642602

          15
      15.d0
   1.0913545457642602
  0.98954715367323465
  0.90218523992661948
  0.95000000000000007
   1.0669130606358859
                  
======================================================================
          Listing Input Parameters and Data
======================================================================

Test-case specification :
EXAMPLE 6 with f_j=j (j=1,...,15), periodic data i

 Number of vectors (m) : nvec =           15
 Space dimension (n)   : ndim =            5

 logmode = 0 ; log-gradients not considered

 First function value =    1.0000000000000000
 Last function value =     15.000000000000000
 Mean function value, PHIbar =     8.0000000000000000
 Standard deviation, SIGMAbar  =     4.3204937989385739

 iscale = 0 ; gradients are not scaled component-wise

Specified gradient vectors (specified Gvect matrix) :
   1.1000   1.0914   1.0669   1.0309   0.9895   0.9500   0.9191   0.9022
   1.0309   1.0669   1.0914   1.1000   1.0914   1.0669   1.0309   0.9895
   0.9191   0.9500   0.9895   1.0309   1.0669   1.0914   1.1000   1.0914
   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669
   1.0309   0.9895   0.9500   0.9191   0.9022   0.9022   0.9191   0.9500

   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669   1.0914
   0.9500   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895
   1.0669   1.0309   0.9895   0.9500   0.9191   0.9022   0.9022
   1.0914   1.1000   1.0914   1.0669   1.0309   0.9895   0.9500
   0.9895   1.0309   1.0669   1.0914   1.1000   1.0914   1.0669

======================================================================
          Scaling the Gradients
======================================================================

Component-wise gradient scales :
 scale(           1 ) =    1.0000000000000000
 scale(           2 ) =    1.0000000000000000
 scale(           3 ) =    1.0000000000000000
 scale(           4 ) =    1.0000000000000000
 scale(           5 ) =    1.0000000000000000

Matrix U to be processed by MGDA and duplicate to be saved :
   1.1000   1.0914   1.0669   1.0309   0.9895   0.9500   0.9191   0.9022
   1.0309   1.0669   1.0914   1.1000   1.0914   1.0669   1.0309   0.9895
   0.9191   0.9500   0.9895   1.0309   1.0669   1.0914   1.1000   1.0914
   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669
   1.0309   0.9895   0.9500   0.9191   0.9022   0.9022   0.9191   0.9500

   1.1000   1.0914   1.0669   1.0309   0.9895   0.9500   0.9191   0.9022
   1.0309   1.0669   1.0914   1.1000   1.0914   1.0669   1.0309   0.9895
   0.9191   0.9500   0.9895   1.0309   1.0669   1.0914   1.1000   1.0914
   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669
   1.0309   0.9895   0.9500   0.9191   0.9022   0.9022   0.9191   0.9500

   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669   1.0914
   0.9500   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895
   1.0669   1.0309   0.9895   0.9500   0.9191   0.9022   0.9022
   1.0914   1.1000   1.0914   1.0669   1.0309   0.9895   0.9500
   0.9895   1.0309   1.0669   1.0914   1.1000   1.0914   1.0669

   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669   1.0914
   0.9500   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895
   1.0669   1.0309   0.9895   0.9500   0.9191   0.9022   0.9022
   1.0914   1.1000   1.0914   1.0669   1.0309   0.9895   0.9500
   0.9895   1.0309   1.0669   1.0914   1.1000   1.0914   1.0669

======================================================================
          Directional derivatives associated with average gradient
======================================================================

 Average gradient, u_mean (dimensionless and in phys. dims.):
  0.99999999999999989       0.99999999999999989
  0.99999999999999989       0.99999999999999989
  0.99999999999999989       0.99999999999999989
  0.99999999999999978       0.99999999999999978
  0.99999999999999978       0.99999999999999978

 Associated directional derivatives, u_mean^t u_j, j=1..nvec :
           1   4.9999999999999991
           2   4.9999999999999982
           3   4.9999999999999991
           4   4.9999999999999982
           5   4.9999999999999991
           6   4.9999999999999991
           7   5.0000000000000000
           8   4.9999999999999982
           9   4.9999999999999991
          10   4.9999999999999991
          11   4.9999999999999991
          12   4.9999999999999991
          13   4.9999999999999991
          14   4.9999999999999991
          15   4.9999999999999991

 Normalized gradient of variance, gradvar
  -6.2500000000000082E-003
  -2.9896163229705081E-002
  -1.2226845009172564E-002
   2.2339557438859417E-002
   2.6033450800018249E-002

 Associated directional derivatives, gradvar^t u_j, j=1..nvec :
           1  -1.5624999999999875E-003
           2  -4.4173295551331246E-003
           3  -6.5083627000045510E-003
           4  -7.4740408074262625E-003
           5  -7.1473893597148452E-003
           6  -5.5848893597148404E-003
           7  -3.0567112522931310E-003
           8   1.3877787807814457E-017
           9   3.0567112522931553E-003
          10   5.5848893597148647E-003
          11   7.1473893597148661E-003
          12   7.4740408074262799E-003
          13   6.5083627000045718E-003
          14   4.4173295551331385E-003
          15   1.5625000000000118E-003

======================================================================
          Gram-Schmidt Orthogonalization
======================================================================

 Parameter r_max (upper bound on rank) =            5
 TOL =    1.0000000000000000E-002
 TOL only serves whenever 1-TOL < C_{L,L} <= 1 does occur
 In GS, test on C_{L,L} relaxed

Returning from Gram-Schmidt :

    Parameter r (lower bound on rank) =    1
    Number of vectors admitting a known common descent direction, mu =   15
    Defect, m-mu =    0

Permutation of u-vectors =
   4   2   3   1   5   6   7   8
   9  10  11  12  13  14  15

Reordered u-vectors (U matrix) :
   1.0309   1.0914   1.0669   1.1000   0.9895   0.9500   0.9191   0.9022
   1.1000   1.0669   1.0914   1.0309   1.0914   1.0669   1.0309   0.9895
   1.0309   0.9500   0.9895   0.9191   1.0669   1.0914   1.1000   1.0914
   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669
   0.9191   0.9895   0.9500   1.0309   0.9022   0.9022   0.9191   0.9500

   0.9022   0.9191   0.9500   0.9895   1.0309   1.0669   1.0914
   0.9500   0.9191   0.9022   0.9022   0.9191   0.9500   0.9895
   1.0669   1.0309   0.9895   0.9500   0.9191   0.9022   0.9022
   1.0914   1.1000   1.0914   1.0669   1.0309   0.9895   0.9500
   0.9895   1.0309   1.0669   1.0914   1.1000   1.0914   1.0669


Orthogonal basis (V matrix) :
   1.0309
   1.1000
   1.0309
   0.9191
   0.9191


Coefficient-vector beta =
   1.0000

Coefficient-vector alfa (w.r.t. reordered U matrix) =
   1.0000

Coefficient-vector alpha (w.r.t. original U0 matrix) =
   0.0000   0.0000   0.0000   1.0000   0.0000   0.0000   0.0000   0.0000
   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000

Provisional direction of search
 omega_1 = V * beta = U * alfa = U0 * alpha =
   1.0309
   1.1000
   1.0309
   0.9191
   0.9191

Directional derivatives = dot products of omega_1
 with column-vectors of reordered U matrix :
   5.0250   5.0167   5.0228   5.0077   5.0228   5.0167   5.0077   4.9974
   4.9875   4.9798   4.9755   4.9755   4.9798   4.9875   4.9974
Constant a =   0.9803
 PROVISIONAL DIRECTION OMEGA_1 IS A COMMON DESCENT DIRECTION
 QP-PROBLEM ELUDED

======================================================================
          Guidelines to Step-Size selection
======================================================================

 STATISTICS :
 Concerning (transformed) input function values :
    - average, PHIbar :    8.0000000000000000
    - standard deviation, SIGMAbar :    4.3204937989385739
 Changes in input physical variables for which
 the expected INCREASE in average function value
 is equal to the standard deviation
  0.89080087994698698
  0.95050863576648670
  0.89080087994698698
  0.79419170163905739
  0.79419170163905750

 FOR AN APPROPRIATE DESCENT STEP, IT IS SUGGESTED TO SUBTRACT
 TO VECTOR X A FRACTION OF THE ABOVE VECTOR STORED IN FILE
 ./solution.txt
                  
  0.89080087994698698
  0.95050863576648670
  0.89080087994698698
  0.79419170163905739
  0.79419170163905750
                  

Example 7

This example involves $m=20$ gradient-vectors of dimension $n=6$.

The function values are those of the drag of a flow above a flat plate subject to a system of three pulsating jets (see the above-cited reference for details). The jets pulsate at frequencies $f$, $2f$ and $f/2$. The three amplitudes and three phases of these pulsations are the optimization variables, defining a vector ${\mathbf{x}} \in \mathbb{R}^6$. The compressible flow is computed by the two-dimensional time-dependent Navier-Stokes equations in the laminar regime. The flow is periodic in time (frequency $f/2$). By the sensitivity-equation approach, the gradient of the flow with respect to ${\mathbf{x}}$ is calculated as the time-integration of the flow proceeds. Once the time integration reaches a permanently periodic regime, the values of the drag at all 800 time-steps are available, as well as 800 gradient-vectors of dimension 6. To simplify, these gradients are averaged over time intervals of 40 time-steps, yielding 20 average gradients of dimension 6 that are then processed by MGDA; the design vector ${\mathbf{x}}$ is then updated according to the rule proposed in the synopsis with $\rho=0.1$ (for details, see Ref. 5).
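The averaging and update steps can be sketched as follows. The per-time-step gradients here are random synthetic stand-ins (the real ones come from the sensitivity equations), and the `update` helper is a hypothetical illustration of the synopsis rule ${\mathbf{x}} \leftarrow {\mathbf{x}} - \rho\,{\mathbf{d}}$, applied to a direction ${\mathbf{d}}$ that MGDA would supply:

```python
import random

random.seed(0)
n_steps, window, n, rho = 800, 40, 6, 0.1
step_grads = [[random.uniform(-1.0, 1.0) for _ in range(n)]
              for _ in range(n_steps)]          # synthetic stand-ins

# average the 800 gradients over 20 consecutive windows of 40 time-steps
avg_grads = []
for k in range(n_steps // window):
    block = step_grads[k * window:(k + 1) * window]
    avg_grads.append([sum(g[i] for g in block) / window for i in range(n)])

# the 20 averages are fed to MGDA, which returns a common ascent
# direction d; the design vector is then updated as x <- x - rho * d
def update(x, d, rho=rho):
    return [xi - rho * di for xi, di in zip(x, d)]
```

This yields the 20 average gradients of dimension 6 that the run below processes.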

This process was repeated several times to perform an actual optimization of the jet system. Here, only the results concerning the gradients obtained with the initial dataset are commented on.

The code was run in basic mode (logmode=0), but with component scaling (iscale=1), since the optimization variables (amplitudes and phases) are not of the same physical dimension. In the QP problem, the regularization parameter was left at its default value (eps_Hdiag $= 10^{-10}$).
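With iscale=1, each gradient component is divided by a per-component scale before processing. Judging from the scales printed in run_report.txt, the rule appears to be division of component $k$ by $\max_j |({\mathbf{g}}_j)_k|$, which places every scaled component in $[-1,1]$ with $+1$ or $-1$ attained at least once. A minimal sketch assuming that rule:

```python
def scale_components(grads):
    """Divide each component by its maximum absolute value over all
    vectors (assumed rule, consistent with the scales in run_report.txt)."""
    m, n = len(grads), len(grads[0])
    scales = [max(abs(grads[j][k]) for j in range(m)) for k in range(n)]
    scaled = [[grads[j][k] / scales[k] for k in range(n)] for j in range(m)]
    return scales, scaled

# toy data: after scaling, each component attains +1 or -1 at least once
scales, scaled = scale_components([[2.0, -1.0], [-4.0, 0.5]])
print(scales)    # [4.0, 1.0]
print(scaled)    # [[0.5, -1.0], [-1.0, 0.5]]
```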

Reading through the formatted output run_report.txt, the main observed facts are:

  1. For each component, an adequate scale was identified, all six scales being close to $10^{-2}$. After scaling, each component of the gradient-vectors varies in the range $[-1,1]$, attaining $1$ or $-1$ at least once.
  2. The average gradient, or gradient of the average, was calculated, along with the associated directional derivatives. All are positive except the 16th. Therefore, optimizing the average function (average drag) would improve the drag at all times except in the zone of the 16th time-interval.
  3. After Gram-Schmidt orthogonalization, the rank is found equal to 6 (full rank).
  4. $\mu=12$, and the permutation is (3, 16, 9, 5, 19, 7, 20, 2, 1, 10, 11, 6, 12, 17, 4, 8, 18, 15, 13, 14); the new basis is made of the vectors originally indexed 3, 16, 9, 5, 19 and 7. The direction of search based on these basis vectors is a common descent direction for the first $\mu=12$ vectors, but not for the last 8. Therefore, further investigation is necessary to conclude.
  5. The directional derivatives corresponding to the direction of search associated with the new basis vectors are equal for the first 6, greater for the subsequent 6 (vectors left unused by the hierarchical orthogonalization process are redundant), and negative for the last 8, which remain uncertain cases at this stage.
  6. The examination of the Etav matrix does not permit an immediate conclusion. No column-vector has only negative components: Pareto-stationarity is not detected. The components of some column-vectors sum up to a negative number: the provisional direction of search is not satisfactory for all. Therefore, the QP problem must be formulated and solved.
  7. Upon return from the QuadProg procedure, only 6 coefficients $\alpha_j$ are nonzero. The corresponding ascent direction is calculated, as well as the corresponding directional derivatives. The ratio of the largest derivative (0.1525) to the smallest is moderate ($< 10$).
  8. Some statistics are provided, and a reference change in the variables is calculated according to the rule described in the synopsis.
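The QP problem of item 6 amounts, per the synopsis, to finding the minimum Euclidean-norm element of the convex hull of the gradient-vectors; the platform delegates this to QuadProg (QPGEN2). Purely as an illustration of that underlying problem, and not of the platform's solver, here is a Frank-Wolfe sketch on a toy two-gradient case:

```python
# Toy case: two gradients g1 = (1,0), g2 = (0,1).  Their convex hull does
# not contain the null vector, so the point is NOT Pareto-stationary, and
# the minimum-norm element d = (0.5, 0.5) is a common ascent direction.
G = [[1.0, 0.0],
     [0.0, 1.0]]
m, n = len(G), len(G[0])

def combo(alpha):
    # convex combination sum_j alpha_j g_j
    return [sum(alpha[j] * G[j][i] for j in range(m)) for i in range(n)]

alpha = [1.0 / m] * m
for k in range(2000):                      # Frank-Wolfe on the unit simplex
    d = combo(alpha)
    scores = [sum(G[j][i] * d[i] for i in range(n)) for j in range(m)]
    s = scores.index(min(scores))          # vertex minimizing the linearization
    gamma = 2.0 / (k + 2)
    alpha = [(1.0 - gamma) * a for a in alpha]
    alpha[s] += gamma

d = combo(alpha)                           # ~ (0.5, 0.5)
norm2 = sum(x * x for x in d)              # ~ 0.5 > 0: not Pareto-stationary
derivs = [sum(G[j][i] * d[i] for i in range(n)) for j in range(m)]
```

All directional derivatives come out strictly positive, so ${\mathbf d}$ is a common ascent direction and $-{\mathbf d}$ a common descent direction, exactly the situation the platform certifies when the minimum-norm element is nonzero.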


 unsteady case 3 jets
          20
           6

           1
  4.45965209030997969E-002
 -9.40109487864667845E-003
  2.30415057933417225E-003
 -1.08210168430560721E-003
 -2.82086849955839999E-004
 -3.27678485597300909E-003
 -1.22736254367574286E-002

           2
  3.48804926673434920E-002
 -7.66949963157812376E-003
 -1.46370150054201840E-003
 -3.84047965693594655E-003
  2.61901425239480842E-003
 -4.66654090195841448E-003
 -1.30584048974527918E-002

           3
  2.98818771037282875E-002
 -5.64969809571461319E-003
 -4.75358325356000332E-003
 -6.12109026744784082E-003
  4.67532234247717768E-003
  6.14044016897305174E-004
 -1.14022483453925620E-002

           4
  3.52483684641713216E-002
 -3.03368774652562579E-003
 -2.36357847181656528E-003
 -8.06571525441299522E-003
  6.01775960154472215E-003
  4.58286784634366704E-003
 -8.50235025910345971E-003

           5
  4.13266743300223505E-002
 -6.20031833371678849E-005
 -6.30558188078180881E-004
 -9.39888615743446690E-003
  4.85019220979640651E-003
  1.54399326176705575E-003
 -4.49275646014585337E-003

           6
  4.09460681134463864E-002
  1.59335801304335872E-003
 -1.63897496848350701E-003
 -9.19312131467119389E-003
  1.91719595177377677E-003
 -3.60627545701760945E-003
  5.33371900208469548E-004

           7
  3.51227803393296262E-002
  5.68970423175630194E-004
 -4.60266082341753615E-003
 -7.47894583258841041E-003
 -1.65436498331628513E-003
 -4.35274968088353569E-003
  4.96821646441576985E-003

           8
  3.20740870764128910E-002
 -1.31911132677307906E-003
 -6.15231353243615720E-003
 -5.53022087013964499E-003
 -4.08401013117820795E-003
  7.36288249577082506E-004
  7.44845576917735341E-003

           9
  3.59720826393693516E-002
 -4.64254639933409617E-003
 -2.52829548427366702E-003
 -4.00203792484792563E-003
 -6.22306431968448672E-003
  4.16852825891672260E-003
  8.89282431632044609E-003

          10
  3.91248628111657004E-002
 -8.76819440981470646E-003
 -1.50630034532936826E-004
 -2.70136232453762172E-003
 -4.89417154257407779E-003
  1.36103494651562034E-003
  9.15869329280236469E-003

          11
  3.88416924411812206E-002
 -9.77350722952891743E-003
 -7.42228139753924629E-004
 -9.57879811474913545E-004
 -8.50467958656843078E-004
 -3.47685672071341165E-003
  9.76051850978393309E-003

          12
  3.65496905081896944E-002
 -8.15687514528845928E-003
 -4.21252513115212470E-003
  1.69776689439694395E-003
  2.33400811257094502E-003
 -4.62821428244877559E-003
  1.02994827145925262E-002

          13
  3.90007839668358031E-002
 -5.99116508998220193E-003
 -7.18145352442053474E-003
  3.84454883722719701E-003
  4.74521886079781741E-003
  7.90895009051057222E-004
  9.28658954984359387E-003

          14
  5.13811993087112673E-002
 -3.48668870650844457E-003
 -4.31149083215403954E-003
  4.72864984364041591E-003
  6.15275558796849874E-003
  5.75679050080676619E-003
  6.74708031758876510E-003

          15
  6.34498481453451280E-002
 -1.20514664377607069E-003
  1.04291246286969315E-004
  5.56589266830381357E-003
  4.89246625324777328E-003
  2.61942880448019517E-003
  5.44153887726428302E-003

          16
  6.61415623452396989E-002
 -7.14757755928332870E-005
  1.80143301242892645E-003
  5.91840026602042518E-003
  2.29123987029222547E-003
 -3.79193569558945471E-003
  3.23800764053148657E-003

          17
  5.87784686475967613E-002
  8.72180646097729640E-004
 -2.15035587108213747E-003
  4.86278006840443720E-003
 -1.13487988890391498E-003
 -5.87398817971604252E-003
 -3.77920954481840855E-004

          18
  5.14184461321082406E-002
  3.55382241451047368E-004
 -5.91639639096637124E-003
  3.57756117122808286E-003
 -4.03234326311404344E-003
  2.65839228391083729E-005
 -3.25978197389318334E-003

          19
  5.11056215583588908E-002
 -3.86277379196767754E-003
 -2.00341622860333250E-003
  2.59630768336268540E-003
 -6.00995059831289317E-003
  4.90107322647327301E-003
 -6.56928349579170671E-003

          20
  5.02196322942607964E-002
 -8.59732284698235330E-003
  2.04023760022390170E-003
  1.17773203933812143E-003
 -4.26731680730030923E-003
  2.45821317765516304E-003
 -1.00358943621070319E-002
                  
Test-case specification :
 unsteady case 3 jets

 Number of vectors (m) : nvec =           20
 Space dimension (n)   : ndim =            6

 logmode = 0 ; log-gradients not considered

Specified gradient vectors (specified Gvect matrix) :
  -0.0094  -0.0077  -0.0056  -0.0030  -0.0001   0.0016   0.0006  -0.0013
   0.0023  -0.0015  -0.0048  -0.0024  -0.0006  -0.0016  -0.0046  -0.0062
  -0.0011  -0.0038  -0.0061  -0.0081  -0.0094  -0.0092  -0.0075  -0.0055
  -0.0003   0.0026   0.0047   0.0060   0.0049   0.0019  -0.0017  -0.0041
  -0.0033  -0.0047   0.0006   0.0046   0.0015  -0.0036  -0.0044   0.0007
  -0.0123  -0.0131  -0.0114  -0.0085  -0.0045   0.0005   0.0050   0.0074

  -0.0046  -0.0088  -0.0098  -0.0082  -0.0060  -0.0035  -0.0012  -0.0001
  -0.0025  -0.0002  -0.0007  -0.0042  -0.0072  -0.0043   0.0001   0.0018
  -0.0040  -0.0027  -0.0010   0.0017   0.0038   0.0047   0.0056   0.0059
  -0.0062  -0.0049  -0.0009   0.0023   0.0047   0.0062   0.0049   0.0023
   0.0042   0.0014  -0.0035  -0.0046   0.0008   0.0058   0.0026  -0.0038
   0.0089   0.0092   0.0098   0.0103   0.0093   0.0067   0.0054   0.0032

   0.0009   0.0004  -0.0039  -0.0086
  -0.0022  -0.0059  -0.0020   0.0020
   0.0049   0.0036   0.0026   0.0012
  -0.0011  -0.0040  -0.0060  -0.0043
  -0.0059   0.0000   0.0049   0.0025
  -0.0004  -0.0033  -0.0066  -0.0100


 First function value =    4.4596520903099797E-002
 Last function value =     5.0219632294260796E-002
 Mean function value, PHIbar =     4.3803037989795832E-002
 Standard deviation, SIGMAbar  =     1.0190983484733520E-002

 iscale = 1 ; gradients will be scaled component-wise

Component-wise gradient scales :
 scale(           1 ) =    9.7735072295289174E-003
 scale(           2 ) =    7.1814535244205347E-003
 scale(           3 ) =    9.3988861574344669E-003
 scale(           4 ) =    6.2230643196844867E-003
 scale(           5 ) =    5.8739881797160425E-003
 scale(           6 ) =    1.3058404897452792E-002

Matrix U to be processed by MGDA and duplicate to be saved :
  -0.9619  -0.7847  -0.5781  -0.3104  -0.0063   0.1630   0.0582  -0.1350
   0.3208  -0.2038  -0.6619  -0.3291  -0.0878  -0.2282  -0.6409  -0.8567
  -0.1151  -0.4086  -0.6513  -0.8582  -1.0000  -0.9781  -0.7957  -0.5884
  -0.0453   0.4209   0.7513   0.9670   0.7794   0.3081  -0.2658  -0.6563
  -0.5578  -0.7944   0.1045   0.7802   0.2629  -0.6139  -0.7410   0.1253
  -0.9399  -1.0000  -0.8732  -0.6511  -0.3441   0.0408   0.3805   0.5704

  -0.9619  -0.7847  -0.5781  -0.3104  -0.0063   0.1630   0.0582  -0.1350
   0.3208  -0.2038  -0.6619  -0.3291  -0.0878  -0.2282  -0.6409  -0.8567
  -0.1151  -0.4086  -0.6513  -0.8582  -1.0000  -0.9781  -0.7957  -0.5884
  -0.0453   0.4209   0.7513   0.9670   0.7794   0.3081  -0.2658  -0.6563
  -0.5578  -0.7944   0.1045   0.7802   0.2629  -0.6139  -0.7410   0.1253
  -0.9399  -1.0000  -0.8732  -0.6511  -0.3441   0.0408   0.3805   0.5704

  -0.4750  -0.8971  -1.0000  -0.8346  -0.6130  -0.3567  -0.1233  -0.0073
  -0.3521  -0.0210  -0.1034  -0.5866  -1.0000  -0.6004   0.0145   0.2508
  -0.4258  -0.2874  -0.1019   0.1806   0.4090   0.5031   0.5922   0.6297
  -1.0000  -0.7865  -0.1367   0.3751   0.7625   0.9887   0.7862   0.3682
   0.7097   0.2317  -0.5919  -0.7879   0.1346   0.9800   0.4459  -0.6455
   0.6810   0.7014   0.7475   0.7887   0.7112   0.5167   0.4167   0.2480

  -0.4750  -0.8971  -1.0000  -0.8346  -0.6130  -0.3567  -0.1233  -0.0073
  -0.3521  -0.0210  -0.1034  -0.5866  -1.0000  -0.6004   0.0145   0.2508
  -0.4258  -0.2874  -0.1019   0.1806   0.4090   0.5031   0.5922   0.6297
  -1.0000  -0.7865  -0.1367   0.3751   0.7625   0.9887   0.7862   0.3682
   0.7097   0.2317  -0.5919  -0.7879   0.1346   0.9800   0.4459  -0.6455
   0.6810   0.7014   0.7475   0.7887   0.7112   0.5167   0.4167   0.2480

   0.0892   0.0364  -0.3952  -0.8797
  -0.2994  -0.8238  -0.2790   0.2841
   0.5174   0.3806   0.2762   0.1253
  -0.1824  -0.6480  -0.9658  -0.6857
  -1.0000   0.0045   0.8344   0.4185
  -0.0289  -0.2496  -0.5031  -0.7685

   0.0892   0.0364  -0.3952  -0.8797
  -0.2994  -0.8238  -0.2790   0.2841
   0.5174   0.3806   0.2762   0.1253
  -0.1824  -0.6480  -0.9658  -0.6857
  -1.0000   0.0045   0.8344   0.4185
  -0.0289  -0.2496  -0.5031  -0.7685


 Average gradient, u_mean (dimensionless and in phys. dims.):
 -0.40057728376672719       -3.9150449788791647E-003
 -0.31018824939477657       -2.2276024968499539E-003
 -0.12981432702838105       -1.2201100813437216E-003
   5.6744686677329147E-002   3.5312583499336268E-004
  -3.5015430973986864E-002  -2.0568022764886183E-004
   2.2217541931691030E-002   2.9012565837015691E-004

 Associated directional derivatives, u_mean^t u_j, j=1..nvec :
           1  0.29681485166593979
           2  0.46008892617803876
           3  0.54099408120330084
           4  0.35091760039946684
           5  0.18696962710106346
           6  0.17234582861426026
           7  0.29809430447966256
           8  0.36722724352706737
           9  0.28829549220649864
          10  0.36603205430769808
          11  0.47544370495178956
          12  0.55921568524040066
          13  0.55699729989027891
          14  0.29708728246483040
          15   6.2704942183252586E-003
          16 -0.10761692078009510
          17   1.3993989084056762E-002
          18  0.14909551064890042
          19  0.11379973149123723
          20  0.17733979063218608

 Normalized gradient of variance, gradvar
   3.1090917889615745E-002
   3.8097792706834603E-002
   9.5806936709493903E-002
   2.8804813609556298E-003
   1.7695108193264537E-003
   2.1557954696161596E-003

 Associated directional derivatives, gradvar^t u_j, j=1..nvec :
           1  -3.1856900450714605E-002
           2  -7.3659724670921245E-002
           3 -0.10511863994895805
           4 -0.10164440537836104
           5  -9.7380870192461694E-002
           6  -9.7446494609308598E-002
           7 -0.10010021318188390
           8  -9.3645321798797532E-002
           9  -6.9132413805815307E-002
          10  -5.6571504549399881E-002
          11  -4.4622235247156417E-002
          12  -2.9603208953442973E-002
          13  -1.3999589568819864E-002
          14   1.9932974401899461E-002
          15   5.7407102815622023E-002
          16   7.0110909077367722E-002
          17   3.8578066575912603E-002
          18   3.8149235577768859E-003
          19   1.1591600675952031E-003
          20  -7.4121695915441588E-003

 Parameter r_max (upper bound on rank) =            6
 TOL =    1.0000000000000000E-002
 TOL only serves whenever 1-TOL < C_{L,L} <= 1 does occur

Returning from Gram-Schmidt :

    Parameter r (lower bound on rank) =    6
    Number of vectors admitting a known common descent direction, mu =   12
    Defect, m-mu =    8

Permutation of u-vectors =
   3  16   9   5  19   7  20   2
   1  10  11   6  12  17   4   8
  18  15  13  14

Reordered u-vectors (U matrix) :
  -0.5781  -0.0073  -0.4750  -0.0063  -0.3952   0.0582  -0.8797  -0.7847
  -0.6619   0.2508  -0.3521  -0.0878  -0.2790  -0.6409   0.2841  -0.2038
  -0.6513   0.6297  -0.4258  -1.0000   0.2762  -0.7957   0.1253  -0.4086
   0.7513   0.3682  -1.0000   0.7794  -0.9658  -0.2658  -0.6857   0.4209
   0.1045  -0.6455   0.7097   0.2629   0.8344  -0.7410   0.4185  -0.7944
  -0.8732   0.2480   0.6810  -0.3441  -0.5031   0.3805  -0.7685  -1.0000

  -0.9619  -0.8971  -1.0000   0.1630  -0.8346   0.0892  -0.3104  -0.1350
   0.3208  -0.0210  -0.1034  -0.2282  -0.5866  -0.2994  -0.3291  -0.8567
  -0.1151  -0.2874  -0.1019  -0.9781   0.1806   0.5174  -0.8582  -0.5884
  -0.0453  -0.7865  -0.1367   0.3081   0.3751  -0.1824   0.9670  -0.6563
  -0.5578   0.2317  -0.5919  -0.6139  -0.7879  -1.0000   0.7802   0.1253
  -0.9399   0.7014   0.7475   0.0408   0.7887  -0.0289  -0.6511   0.5704

   0.0364  -0.1233  -0.6130  -0.3567
  -0.8238   0.0145  -1.0000  -0.6004
   0.3806   0.5922   0.4090   0.5031
  -0.6480   0.7862   0.7625   0.9887
   0.0045   0.4459   0.1346   0.9800
  -0.2496   0.4167   0.7112   0.5167


Orthogonal basis (V matrix) :
  -0.5781  -0.1135  -0.2832   0.0520  -0.0563  -0.0127
  -0.6619   0.0810  -0.1357   0.1575   0.0741   0.0188
  -0.6513   0.3914   0.0084  -0.2882   0.0424  -0.0131
   0.7513   0.4395  -0.0798   0.2379   0.0277  -0.0135
   0.1045  -0.5060  -0.0043   0.0182   0.0800  -0.0167
  -0.8732   0.0394   0.2149   0.2680  -0.0171  -0.0096


Coefficient-vector beta =
   0.0005   0.0018   0.0076   0.0048   0.0638   0.9216

Coefficient-vector alfa (w.r.t. reordered U matrix) =
  -0.1658   0.3913  -0.0159   0.3661   0.3099   0.1143

Coefficient-vector alpha (w.r.t. original U0 matrix) =
   0.0000   0.0000  -0.1658   0.0000   0.3661   0.0000   0.1143   0.0000
  -0.0159   0.0000   0.0000   0.0000   0.0000   0.0000   0.0000   0.3913
   0.0000   0.0000   0.3099   0.0000

Provisional direction of search
 omega_1 = V * beta = U * alfa = U0 * alpha =
  -0.0176
   0.0216
  -0.0103
  -0.0090
  -0.0112
  -0.0074

Directional derivatives = dot products of omega_1
 with column-vectors of reordered U matrix :
   0.0011   0.0011   0.0011   0.0011   0.0011   0.0011   0.0275   0.0261
   0.0387   0.0176   0.0187   0.0061  -0.0002  -0.0004  -0.0054  -0.0098
  -0.0148  -0.0187  -0.0286  -0.0355
Constant a =   5.2937

UtU matrix :
   2.5342  -0.5793  -0.4868   1.6265   0.0342   0.2994
  -0.5793   1.0733  -1.0104  -0.6197  -0.9121  -0.1874
  -0.4868  -1.0104   2.4983  -0.3674   1.3836   0.5359
   1.6265  -0.6197  -0.3674   1.8027  -0.6095   0.3188
   0.0342  -0.9121   1.3836  -0.6095   2.1923  -0.6170
   0.2994  -0.1874   0.5359   0.3188  -0.6170   1.8119

Factorization : upon exit from DPOTRF, info =
   0

Inversion : upon exit from DPOTRS, info =
   0

Matrix W :
   1.5476  -3.7595   1.6692   1.6894   1.5248   1.0713
  -6.1998   7.6237  -3.0607  -2.7816  -3.9561  -2.2589
  -0.5409  -0.3953   0.1223   0.3394   0.5190   0.9207
  -4.8167   7.5760  -3.7567  -2.7491  -3.1943  -2.5002
  -4.1381   5.9879  -2.6650  -2.9813  -2.9762  -2.7335
  -1.2667   1.8792  -1.3143  -1.3455  -1.6752  -0.9640

Eta-vectors (U-tilde matrix) :
   1.0000  -0.0000   0.0000   0.0000   0.0000  -0.0000  -3.5640  -2.7019
   0.0000   1.0000  -0.0000  -0.0000  -0.0000   0.0000   9.2240   8.7931
   0.0000  -0.0000   1.0000   0.0000  -0.0000   0.0000  -0.3442  -0.7351
   0.0000   0.0000  -0.0000   1.0000  -0.0000   0.0000   8.3885   7.6516
   0.0000   0.0000  -0.0000  -0.0000   1.0000   0.0000   7.9070   6.9590
   0.0000   0.0000  -0.0000  -0.0000  -0.0000   1.0000   2.4459   2.8765

  -4.8212  -2.0132  -1.6618  -0.8942   1.4923   0.2635   1.4503   1.7233
  13.2182   5.9685   6.7572   1.7226   0.4417   0.1092  -2.2637  -3.8524
  -0.7909   0.9575   0.9039  -0.2941   1.1499  -0.4742   0.3267   0.7072
  11.7530   4.9105   4.8142   2.1722  -1.5888  -0.8740  -1.2972  -3.6522
  10.5731   4.0907   3.9168   1.3625  -1.4693   0.0580  -1.8245  -2.9789
   3.8742   1.4687   1.6613   1.2247  -0.2276   0.5927  -1.0779  -0.5424

   2.4337   3.1976   5.7489   6.2629
  -5.3228  -5.8296  -9.3353 -11.6995
  -0.0948   1.0153   1.7603   1.8117
  -5.4556  -6.1483 -10.4644 -11.8605
  -3.4973  -5.7911  -9.1593 -10.7363
  -0.9896  -2.8014  -3.5774  -4.8077

   1.0000   0.0000   0.0000   0.0000   0.0000   0.0000  -3.5640  -2.7019
   0.0000   1.0000   0.0000   0.0000   0.0000   0.0000   9.2240   8.7931
   0.0000   0.0000   1.0000   0.0000   0.0000   0.0000  -0.3442  -0.7351
   0.0000   0.0000   0.0000   1.0000   0.0000   0.0000   8.3885   7.6516
   0.0000   0.0000   0.0000   0.0000   1.0000   0.0000   7.9070   6.9590
   0.0000   0.0000   0.0000   0.0000   0.0000   1.0000   2.4459   2.8765

  -4.8212  -2.0132  -1.6618  -0.8942   1.4923   0.2635   1.4503   1.7233
  13.2182   5.9685   6.7572   1.7226   0.4417   0.1092  -2.2637  -3.8524
  -0.7909   0.9575   0.9039  -0.2941   1.1499  -0.4742   0.3267   0.7072
  11.7530   4.9105   4.8142   2.1722  -1.5888  -0.8740  -1.2972  -3.6522
  10.5731   4.0907   3.9168   1.3625  -1.4693   0.0580  -1.8245  -2.9789
   3.8742   1.4687   1.6613   1.2247  -0.2276   0.5927  -1.0779  -0.5424

   2.4337   3.1976   5.7489   6.2629
  -5.3228  -5.8296  -9.3353 -11.6995
  -0.0948   1.0153   1.7603   1.8117
  -5.4556  -6.1483 -10.4644 -11.8605
  -3.4973  -5.7911  -9.1593 -10.7363
  -0.9896  -2.8014  -3.5774  -4.8077


 EPS_HDIAG =    1.0000000000000000E-010

Matrix H :

    1.00    0.00    0.00    0.00    0.00    0.00   -3.56   -2.70   -4.82   -2.01   -1.66   -0.89    1.49    0.26    1.45    1.72    2.43    3.20    5.75    6.26

    0.00    1.00    0.00    0.00    0.00    0.00    9.22    8.79   13.22    5.97    6.76    1.72    0.44    0.11   -2.26   -3.85   -5.32   -5.83   -9.34  -11.70

    0.00    0.00    1.00    0.00    0.00    0.00   -0.34   -0.74   -0.79    0.96    0.90   -0.29    1.15   -0.47    0.33    0.71   -0.09    1.02    1.76    1.81

    0.00    0.00    0.00    1.00    0.00    0.00    8.39    7.65   11.75    4.91    4.81    2.17   -1.59   -0.87   -1.30   -3.65   -5.46   -6.15  -10.46  -11.86

    0.00    0.00    0.00    0.00    1.00    0.00    7.91    6.96   10.57    4.09    3.92    1.36   -1.47    0.06   -1.82   -2.98   -3.50   -5.79   -9.16  -10.74

    0.00    0.00    0.00    0.00    0.00    1.00    2.45    2.88    3.87    1.47    1.66    1.22   -0.23    0.59   -1.08   -0.54   -0.99   -2.80   -3.58   -4.81

   -3.56    9.22   -0.34    8.39    7.91    2.45  236.77  217.24  331.05  139.03  143.36   51.17  -27.14   -5.19  -54.11  -97.44 -133.58 -169.74 -276.16 -327.00

   -2.70    8.79   -0.74    7.65    6.96    2.88  217.24  200.41  304.49  127.48  132.11   47.40  -24.03   -3.98  -49.79  -89.29 -122.24 -156.05 -253.01 -300.42

   -4.82   13.22   -0.79   11.75   10.57    3.87  331.05  304.49  463.52  194.50  201.04   71.99  -37.36   -6.82  -75.89 -136.31 -186.95 -237.62 -386.19 -457.81

   -2.01    5.97    0.96    4.91    4.09    1.47  139.03  127.48  194.50   83.60   86.64   29.84  -13.41   -3.52  -31.53  -56.70  -79.31  -98.25 -159.71 -189.92

   -1.66    6.76    0.90    4.81    3.92    1.66  143.36  132.11  201.04   86.64   90.52   30.69  -12.24   -3.12  -32.59  -58.41  -81.70 -100.72 -163.24 -194.96

   -0.89    1.72   -0.29    2.17    1.36    1.22   51.17   47.40   71.99   29.84   30.69   11.93   -6.64   -1.00  -11.92  -21.04  -29.15  -37.88  -61.33  -72.57

    1.49    0.44    1.15   -1.59   -1.47   -0.23  -27.14  -24.03  -37.36  -13.41  -12.24   -6.64    8.48    1.06    6.53   11.99   15.20   22.28   37.38   41.98

    0.26    0.11   -0.47   -0.87    0.06    0.59   -5.19   -3.98   -6.82   -3.52   -3.12   -1.00    1.06    1.42    0.37    2.40    4.08    3.10    6.16    6.41

    1.45   -2.26    0.33   -1.30   -1.82   -1.08  -54.11  -49.79  -75.89  -31.53  -32.59  -11.92    6.53    0.37   13.51   22.21   30.07   39.73   64.19   76.32

    1.72   -3.85    0.71   -3.65   -2.98   -0.54  -97.44  -89.29 -136.31  -56.70  -58.41  -21.04   11.99    2.40   22.21   40.82   55.51   69.91  114.56  135.05

    2.43   -5.32   -0.09   -5.46   -3.50   -0.99 -133.58 -122.24 -186.95  -79.31  -81.70  -29.15   15.20    4.08   30.07   55.51   77.24   95.28  156.18  184.35

    3.20   -5.83    1.02   -6.15   -5.79   -2.80 -169.74 -156.05 -237.62  -98.25 -100.72  -37.88   22.28    3.10   39.73   69.91   95.28  124.43  201.99  238.63

    5.75   -9.34    1.76  -10.46   -9.16   -3.58 -276.16 -253.01 -386.19 -159.71 -163.24  -61.33   37.38    6.16   64.19  114.56  156.18  201.99  329.49  388.06

    6.26  -11.70    1.81  -11.86  -10.74   -4.81 -327.00 -300.42 -457.81 -189.92 -194.96  -72.57   41.98    6.41   76.32  135.05  184.35  238.63  388.06  458.44

 UPON RETURN FROM QPGEN2, XQP =
   2.3077164481355633E-012
  0.19729770349760042
   0.0000000000000000
  0.10276591901078914
  0.29009730383147558
   8.9857723137276848E-013
  -5.4415020746571990E-013
   2.6258197281156792E-012
   3.6150714846608628E-012
   1.0610925646801433E-012
   1.6096588578209032E-012
  0.29198494419959176
  -2.4828764543820907E-017
   5.8015767829290221E-013
   1.9923422042733431E-012
   6.4286469679030722E-013
   4.3662172339356611E-003
  0.11348791222839261
   2.2815863601882177E-012
   3.4701726087801101E-012
 Cleaned-up Xqp =
   0.0000000000000000
  0.19729770349760042
   0.0000000000000000
  0.10276591901078914
  0.29009730383147558
   0.0000000000000000
   0.0000000000000000
   0.0000000000000000
   0.0000000000000000
   0.0000000000000000
   0.0000000000000000
  0.29198494419959176
   0.0000000000000000
   0.0000000000000000
   0.0000000000000000
   0.0000000000000000
   4.3662172339356611E-003
  0.11348791222839261
   0.0000000000000000
   0.0000000000000000
 MODULE OF LARGEST NEGLECTED COMPONENT IN XQP (XDUMP) :    3.6150714846608628E-012
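The "cleaned-up" vector above is obtained by zeroing the components of XQP whose modulus falls below a tolerance, while recording the modulus of the largest neglected component (the XDUMP diagnostic). A minimal sketch of that cleanup; `clean_qp_solution` is a hypothetical helper and the tolerance value is an assumption:

```python
import numpy as np

def clean_qp_solution(xqp, tol=1.0e-10):
    # Zero components of the QP solution smaller in modulus than tol,
    # and report the modulus of the largest neglected one (XDUMP).
    # Hypothetical helper; tolerance value is an assumption.
    xqp = np.asarray(xqp, dtype=float)
    neglected = np.abs(xqp) < tol
    xdump = float(np.abs(xqp[neglected]).max()) if neglected.any() else 0.0
    return np.where(neglected, 0.0, xqp), xdump

# Toy vector mimicking the first few components of XQP printed above.
cleaned, xdump = clean_qp_solution([2.31e-12, 0.19729770, 0.0,
                                    0.10276592, 3.62e-12])
```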

 Omega-tilde :
  0.11241233495389169
   1.5439281714044850E-002
   2.8940420344363282E-002
   1.5439281729248022E-002
   1.5439281714931363E-002
   3.5356562312799933E-002

 Ascent direction d = A_n * omega (scaled and raw) :
 -0.12044639216948430       -12.323763551898432
  -4.0493093885503206E-002  -5.6385651940524895
  -1.6970538931145376E-003 -0.18055904334708611
   2.0733462519398491E-002   3.3317127148783334
  -2.9150080637117007E-002  -4.9625705304919707
  -2.6897186280095478E-003 -0.20597604754423043

 Directional derivatives = dot products of d
 with column-vectors of reordered U matrix :
  0.11241233495389169
   1.5439281714044868E-002
   2.8940420344363199E-002
   1.5439281729248048E-002
   1.5439281714931315E-002
   3.5356562312799961E-002
   6.9885334970866270E-002
  0.13803726976178501
  0.12090966687265571
   8.4447570431723601E-002
  0.13721464655162263
   1.5439281596289985E-002
  0.15259208860106149
   2.5945184341185788E-002
   5.1227901374667102E-002
   3.3150334095805319E-002
   1.5439282121842901E-002
   1.5439282197067077E-002
  0.12360464653280956
   5.6966947044826965E-002
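These checks can be reproduced in a few lines: d is a combination of the gradient columns with the convex weights omega, and every directional derivative g_j · d must come out strictly positive; for the minimum-norm point it is in fact at least |d|². A toy illustration with hand-checkable gradients, not the library's code:

```python
import numpy as np

# Toy data: columns of U are gradients g1=(1,0), g2=(0,1), g3=(1,1).
# The minimum-norm element of their convex hull is d = (0.5, 0.5),
# attained with convex weights omega = (0.5, 0.5, 0).
U = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
omega = np.array([0.5, 0.5, 0.0])

d = U @ omega          # combination of gradient columns with weights omega
derivs = U.T @ d       # directional derivatives f_j' = g_j . d

# Minimum-norm-point property: every f_j' >= |d|^2 > 0, so -d is a
# descent direction common to all cost functions.
assert np.all(derivs >= d @ d)
```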

 GUIDELINES TO STEP-SIZE SELECTION
 STATISTICS :
 Concerning (transformed) input function values :
    - average, PHIbar :    4.3803037989795832E-002
    - standard deviation, SIGMAbar :    1.0190983484733520E-002
 Changes in input physical variables for which
 the expected INCREASE in average function value
 is equal to the standard deviation
  -1.9882629068425652
 -0.90970181113391357
  -2.9130618002378643E-002
  0.53752417265648345
 -0.80063974505963786
  -3.3231287934532519E-002

 FOR AN APPROPRIATE DESCENT STEP,
 IT IS SUGGESTED TO SUBTRACT FROM THE VECTOR X
 A FRACTION OF THE ABOVE
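The suggested change in x appears to be the raw ascent direction scaled so that the step times the mean directional derivative equals SIGMAbar; this scaling rule is inferred from the numbers in this log, not taken from the library's documentation, and `step_size_guideline` is a hypothetical helper:

```python
import numpy as np

def step_size_guideline(d_raw, derivs, sigma_bar):
    # Choose t so that t * mean(derivs) == sigma_bar: moving x by
    # t * d_raw is then expected to raise the average (transformed)
    # function value by one standard deviation.  Inferred rule, not
    # the library's documented formula.
    t = sigma_bar / np.mean(derivs)
    return t * np.asarray(d_raw, dtype=float)

# Toy numbers: mean(derivs) = 0.2 and sigma_bar = 0.02 give t = 0.1.
delta = step_size_guideline([2.0, -4.0], [0.1, 0.3], 0.02)
```

For a descent step, one subtracts a fraction of this vector from x, as the log advises.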

License

Software MGDA ©Inria – 2017, all rights reserved, hereinafter "the Software".

This software has been developed by researchers of the ACUMES project-team of Inria (Institut National de Recherche en Informatique et en Automatique).

Inria, Domaine de Voluceau, Rocquencourt - BP 105
78153 Le Chesnay Cedex, FRANCE


Inria holds all the ownership rights on the Software.

The Software has been registered with the Agence pour la Protection des Programmes (APP) under IDDN.FR.XX.XX.

The Software is still under active development. It is Inria's aim for the Software to be used by the scientific community, so as to test and evaluate it and thereby allow Inria to improve it.

For these reasons Inria has decided to distribute the Software.

Inria grants the academic user a free-of-charge, non-exclusive right, without the right to sublicense, to use the Software for research purposes for a period of one (1) year from the date of download of the compiled code. Any other use without the prior consent of Inria is prohibited.

The academic user explicitly acknowledges having received from Inria all information allowing him or her to assess the adequacy of the Software to his or her needs and to take all necessary precautions for its execution and use.

The Software is provided only as a compiled library file, and decompilation procedures are strictly prohibited.

When using the Software for a publication, or reporting results obtained through the use of the Software, the user should cite the Software as follows:

Every user of the Software may communicate to the developers of MGDA (amdt-mgda@inria.fr) his or her remarks on the use of the Software.

THE USER CANNOT USE, EXPLOIT OR COMMERCIALLY DISTRIBUTE THE SOFTWARE WITHOUT PRIOR AND EXPLICIT CONSENT OF INRIA (stip-sam@inria.fr). ANY SUCH ACTION WILL CONSTITUTE AN INFRINGEMENT.

THIS SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTIES OF ANY NATURE, AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES WITH REGARD TO COMMERCIAL USE, PROFESSIONAL USE (LEGAL OR NOT), OR COMMERCIALISATION OR ADAPTATION.

UNLESS EXPLICITLY PROVIDED BY LAW, IN NO EVENT SHALL INRIA OR THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE), ARISING FROM, OUT OF, OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Coming next

  • A documented example of multi-objective optimization using the iterative mode.
  • Coupling with a Nash game for rapid exploration of the Pareto front.
  • Plots to visualize statistics.

Credits

The computational core of this platform was originally developed by Jean-Antoine Desideri, implementing the algorithms described in the references cited in the synopsis. The web interface and remote-execution system were designed and implemented by the SED team of Inria Sophia Antipolis.