Unconstrained minimization is used to minimize a function of many variables
without any constraints on the variables, such as bounds. The methods
available in TAO for solving these problems can be classified according
to the amount of derivative information required:
1. Function evaluation only -- Nelder-Mead method (tao_nm)
2. Function and gradient evaluations -- limited-memory, variable-metric
method (tao_lmvm) and nonlinear conjugate gradient method (tao_cg)
3. Function, gradient, and Hessian evaluations -- Newton line-search
method (tao_nls) and Newton trust-region method (tao_ntr)
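To make the third class concrete, the following self-contained sketch (illustrative Python, not TAO code) applies a Newton line-search iteration to the two-variable Rosenbrock function, using all three levels of information: function, gradient, and Hessian. It adds an Armijo backtracking step and falls back to steepest descent whenever the Hessian is not positive definite.

```python
import math

def f(x, y):
    # Rosenbrock function; its minimizer is (1, 1)
    return (1 - x)**2 + 100 * (y - x**2)**2

def grad(x, y):
    gx = -2 * (1 - x) - 400 * x * (y - x**2)
    gy = 200 * (y - x**2)
    return gx, gy

def hess(x, y):
    # Symmetric 2x2 Hessian, stored as (hxx, hxy, hyy)
    return 2 - 400 * (y - 3 * x**2), -400 * x, 200.0

def newton_linesearch(x, y, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        gx, gy = grad(x, y)
        if math.hypot(gx, gy) < tol:
            break
        hxx, hxy, hyy = hess(x, y)
        det = hxx * hyy - hxy * hxy
        if det > 0 and hxx > 0:
            # Hessian is positive definite: solve H p = -g (Cramer's rule)
            px = (-gx * hyy + gy * hxy) / det
            py = (-gy * hxx + gx * hxy) / det
        else:
            # Safeguard: fall back to the steepest-descent direction
            px, py = -gx, -gy
        # Backtracking (Armijo) line search along p
        t, f0, slope = 1.0, f(x, y), gx * px + gy * py
        while f(x + t * px, y + t * py) > f0 + 1e-4 * t * slope:
            t *= 0.5
        x, y = x + t * px, y + t * py
    return x, y
```

Run from the standard starting point (-1.2, 1.0), the iteration converges to (1, 1); TAO's tao_nls solver is far more sophisticated (inexact linear solves, safeguards, preconditioning), but the structure is the same.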
The best method to use depends on the particular problem being solved
and the accuracy required in the solution. If a Hessian evaluation
routine is available, the Newton line-search and Newton trust-region
methods will generally be the best performers. When a Hessian evaluation
routine is not available, the limited-memory, variable-metric method is
likely to perform best. The Nelder-Mead method should be used only
as a last resort, when no gradient information is available.
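To illustrate how far function and gradient evaluations alone can go, here is a small sketch (illustrative Python, not TAO's implementation) of a nonlinear conjugate gradient iteration with the Fletcher-Reeves update. On a convex quadratic with exact line search, conjugate gradient terminates in at most n iterations; the matrix and right-hand side below are made-up example data.

```python
def cg_quadratic():
    # Minimize f(x) = 0.5 x^T A x - b^T x for a 2x2 SPD matrix A;
    # the minimizer solves A x = b, here (0.2, 0.4).
    A = [[3.0, 1.0], [1.0, 2.0]]
    b = [1.0, 1.0]
    matvec = lambda v: [A[0][0]*v[0] + A[0][1]*v[1],
                        A[1][0]*v[0] + A[1][1]*v[1]]
    dot = lambda u, v: u[0]*v[0] + u[1]*v[1]

    x = [0.0, 0.0]
    g = [gi - bi for gi, bi in zip(matvec(x), b)]  # gradient = A x - b
    p = [-g[0], -g[1]]                             # initial search direction
    for _ in range(2):                             # at most n = 2 steps
        Ap = matvec(p)
        alpha = -dot(g, p) / dot(p, Ap)            # exact line search
        x = [x[0] + alpha * p[0], x[1] + alpha * p[1]]
        g_new = [gi + alpha * api for gi, api in zip(g, Ap)]
        beta = dot(g_new, g_new) / dot(g, g)       # Fletcher-Reeves update
        p = [-g_new[0] + beta * p[0], -g_new[1] + beta * p[1]]
        g = g_new
    return x
```

For general nonlinear functions, as in tao_cg, the exact line search is replaced by an inexact one and the iteration is restarted periodically, but the update structure is the same.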
Each solver has a set of associated options that can be set with
command-line arguments. These algorithms and their associated options
are described briefly in this chapter.
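As a sketch of such an invocation, the fragment below selects the limited-memory, variable-metric solver at runtime and prints convergence information. The program name `./myprog` is hypothetical, and the exact option names (here `-tao_method` and `-tao_monitor`) are assumptions that may differ between TAO versions, so check the options listing for the release in use.

```
# Hypothetical run: choose the LMVM solver and monitor each iteration
mpirun -n 1 ./myprog -tao_method tao_lmvm -tao_monitor
```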