9.3.1. TNlopt

The TMaster subclass for local optimization is called TNlopt.

Local optimization starts from an initial guess point. This point has to be defined using the setStartingPoint method, which takes an integer specifying the dimension as first argument and a double * as second argument. This signature was changed both to be compliant with Python and to allow checking that the number of doubles provided matches the dimension of the problem under consideration. The setStartingPoint method can be called several times to register several starting points. In Python, it could look like this:

import numpy
...
# Create the local optimizer (tds, runner and solv as defined earlier)
opt = Reoptimizer.TNlopt(tds, runner, solv)
...
# Define a starting point; the first argument gives the dimension so that
# the number of doubles provided can be checked against the problem size
p = numpy.array([0.2, 0.3])
opt.setStartingPoint(len(p), p)
# setStartingPoint may be called again to register further starting points

In this case, the optimizations, each starting from its own starting point, may be run in parallel using an appropriate TRun. If the results of the optimization are not consistent when the starting points change, this may be a sign that the problem has several local minima. In that case a safer (but slower) approach is to use a global solver.
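The following plain-Python sketch (not the TNlopt implementation; the functions f, grad_f and local_minimize are hypothetical names introduced here) illustrates why inconsistent multi-start results point to local minima: a toy gradient-descent "local optimizer" applied to a multimodal 1-D objective converges to a different minimum depending on where it starts.

```python
def f(x):
    # A multimodal toy objective with two local minima
    # (roughly near x = -1.30 and x = 1.13)
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def local_minimize(x0, step=0.01, iters=5000):
    """Plain gradient descent from x0: it can only reach the
    local minimum in whose basin of attraction x0 lies."""
    x = x0
    for _ in range(iters):
        x -= step * grad_f(x)
    return x

# Two starting points yield two different answers:
# exactly the inconsistency described above.
left = local_minimize(-2.0)
right = local_minimize(2.0)
print(round(left, 2), round(right, 2))
```

Running the same local search from several starting points and comparing the results is therefore a cheap diagnostic: agreement does not prove the optimum is global, but disagreement proves that local minima are present.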

To modify the default configuration, the setMaximumEval method, which takes a single int argument, may be used. The default value is 10 000. When several starting points are defined, this limit is interpreted as the accumulated number of evaluations over all of them, not as a per-start budget.
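This accumulated-budget semantics can be sketched in plain Python (this is an illustration of the behaviour described above, not the TNlopt code; run and max_eval are hypothetical names): a single evaluation counter is shared by every start, so later starting points only get whatever budget the earlier ones left over.

```python
def run(starts, max_eval):
    """Hill-climbing local search from each start, with ONE evaluation
    budget accumulated over ALL starts (mirroring setMaximumEval)."""
    evals = 0

    def objective(x):
        nonlocal evals
        evals += 1          # every model call draws from the shared budget
        return (x - 1.0) ** 2

    best = float("inf")
    for x0 in starts:
        if evals >= max_eval:
            break           # budget already spent by earlier starts
        x, fx = x0, objective(x0)
        step = 0.5
        while step > 1e-6 and evals < max_eval:
            moved = False
            for cand in (x - step, x + step):
                if evals >= max_eval:
                    break
                fc = objective(cand)
                if fc < fx:
                    x, fx, moved = cand, fc, True
            if not moved:
                step /= 2   # refine once neither neighbour improves
        best = min(best, fx)
    return best, evals

best, evals = run([-3.0, 0.0, 4.0], 200)
print(best, evals)
```

With a generous budget every start converges; with a small one (e.g. max_eval=10) the later starting points are barely explored, which is why the limit should be scaled with the number of starting points.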