9.3.2. Solvers

Uranie provides several NLopt solvers. The direct (derivative-free) solvers are:

  • TNloptCobyla: Constrained Optimization BY Linear Approximations, from the work of M. J. D. Powell. It is the only direct algorithm that natively supports constraints (a usage sketch is given after this list).

  • TNloptBobyqa: Bound Optimization BY Quadratic Approximation, also from the work of M. J. D. Powell. It is usually faster than TNloptCobyla, but may return non-physical results if the problem cannot reasonably be approximated by a quadratic model.

  • TNloptPraxis: the PRincipal AXIS method of Richard Brent. This algorithm includes a stochastic component.

  • TNloptNelderMead: the well-known Nelder-Mead simplex algorithm.

  • TNloptSubplexe: Tom Rowan's subplex algorithm, a variant of the Nelder-Mead simplex method.
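
Since these classes wrap the corresponding NLopt algorithms, their behaviour can be illustrated with the underlying NLopt C++ API directly. The following is a minimal sketch, not Uranie code: the objective function, the constraint and the tolerances are hypothetical, and the example only shows how COBYLA handles an inequality constraint natively.

    #include <nlopt.hpp>
    #include <vector>
    #include <cstdio>

    // Hypothetical objective: f(x) = (x0 - 1)^2 + (x1 - 2)^2
    double objective(const std::vector<double> &x, std::vector<double> &grad, void *)
    {
        (void)grad; // COBYLA is derivative-free: grad is empty and unused
        return (x[0] - 1.) * (x[0] - 1.) + (x[1] - 2.) * (x[1] - 2.);
    }

    // Hypothetical inequality constraint c(x) <= 0: here x0 + x1 <= 2
    double constraint(const std::vector<double> &x, std::vector<double> &grad, void *)
    {
        (void)grad;
        return x[0] + x[1] - 2.;
    }

    int main()
    {
        nlopt::opt opt(nlopt::LN_COBYLA, 2);       // COBYLA in 2 dimensions
        opt.set_min_objective(objective, nullptr);
        opt.add_inequality_constraint(constraint, nullptr, 1e-8);
        opt.set_xtol_rel(1e-6);

        std::vector<double> x = {0., 0.};          // starting point
        double fmin;
        opt.optimize(x, fmin);
        std::printf("minimum at (%g, %g), f = %g\n", x[0], x[1], fmin);
        return 0;
    }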

The gradient-based solvers are:

  • TNloptMMA: the Method of Moving Asymptotes, from the work of Krister Svanberg. It handles nonlinear inequality constraints natively.

  • TNloptSLSQP: Sequential Least-Squares Quadratic Programming, from Dieter Kraft. It handles both inequality and equality constraints natively (illustrated in the sketch after this list).

  • TNloptLBFGS: an implementation of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm by Ladislav Luksan.

  • TNloptNewtown: a preconditioned inexact truncated Newton algorithm by Ladislav Luksan.

  • TNloptVariableMetric: an implementation of a shifted limited-memory variable-metric method by Ladislav Luksan.
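
As above, a minimal sketch with the raw NLopt C++ API (not the Uranie wrapper) shows what distinguishes the gradient-based solvers: when the solver requests it, the user function must fill the gradient vector. The objective, constraint and tolerances below are hypothetical; SLSQP is chosen because it accepts an equality constraint.

    #include <nlopt.hpp>
    #include <vector>
    #include <cstdio>

    // Hypothetical objective f(x) = x0^2 + x1^2 with its analytic gradient
    double objective(const std::vector<double> &x, std::vector<double> &grad, void *)
    {
        if (!grad.empty()) {       // gradient-based solvers ask for the gradient
            grad[0] = 2. * x[0];
            grad[1] = 2. * x[1];
        }
        return x[0] * x[0] + x[1] * x[1];
    }

    // Hypothetical equality constraint h(x) = 0: here x0 + x1 - 1 = 0
    double constraint(const std::vector<double> &x, std::vector<double> &grad, void *)
    {
        if (!grad.empty()) { grad[0] = 1.; grad[1] = 1.; }
        return x[0] + x[1] - 1.;
    }

    int main()
    {
        nlopt::opt opt(nlopt::LD_SLSQP, 2);
        opt.set_min_objective(objective, nullptr);
        opt.add_equality_constraint(constraint, nullptr, 1e-8);
        opt.set_xtol_rel(1e-8);

        std::vector<double> x = {0.3, 0.7};        // starting point
        double fmin;
        opt.optimize(x, fmin);                     // expected optimum: x = (0.5, 0.5)
        std::printf("minimum at (%g, %g), f = %g\n", x[0], x[1], fmin);
        return 0;
    }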

Note that none of the NLopt global solvers is provided.

Tip

Remark for the gradient-based case: one often does not have access to the exact gradient. In that case a finite-difference method is used, performing \(2n_X+1\) computations around every point (a sketch of this scheme follows the list), which implies:

  • the code must return results with sufficient accuracy so that the gradient is not wrongly estimated to be null (which can happen when a code returns values with very few digits). Even with a non-null estimate, the gradient accuracy needs to be sufficient, since gradient-based solvers usually also estimate the Hessian matrix;

  • the computation can be parallelised, since the \(2n_X+1\) evaluations are independent of one another.
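
The following sketch illustrates the counting argument above with a central finite-difference scheme: \(2n_X\) shifted evaluations, plus the evaluation at the point itself that the solver needs anyway. It is only an illustration; the step size and the exact scheme used internally may differ.

    #include <cstdio>
    #include <functional>
    #include <vector>

    // Central finite-difference gradient: 2*n shifted evaluations of f.
    // Together with f at the point itself (needed by the solver anyway),
    // this gives the 2*n_X + 1 computations mentioned above. The shifted
    // evaluations are all independent, so they can run in parallel.
    std::vector<double> fdGradient(const std::function<double(const std::vector<double> &)> &f,
                                   const std::vector<double> &x, double h = 1e-6)
    {
        std::vector<double> grad(x.size());
        for (std::size_t i = 0; i < x.size(); ++i) {
            std::vector<double> xp = x, xm = x;
            xp[i] += h;
            xm[i] -= h;
            // If the code returns values with very few digits, f(xp) and
            // f(xm) may coincide and the gradient is wrongly estimated as null.
            grad[i] = (f(xp) - f(xm)) / (2. * h);
        }
        return grad;
    }

    int main()
    {
        auto f = [](const std::vector<double> &x) { return x[0] * x[0] + 3. * x[1]; };
        std::vector<double> g = fdGradient(f, {1., 2.});
        std::printf("grad = (%g, %g)\n", g[0], g[1]);  // expects (2, 3)
        return 0;
    }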