The nonlinear conjugate gradient method can be viewed as an extension of the conjugate gradient method for solving symmetric, positive-definite linear systems of equations. This algorithm requires only function and gradient evaluations as well as a line search. The TAO implementation uses a Moré-Thuente line search to obtain the step length. The nonlinear conjugate gradient method can be selected by using the TaoMethod tao_cg. For best efficiency, function and gradient evaluations should be performed simultaneously when using this algorithm.
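The following minimal sketch illustrates this setup. It assumes the PETSc/TAO programming interface (routine names differ between TAO releases), and the quadratic objective and the name FormFunctionGradient are purely illustrative; the point is that a single user routine returns both the function value and the gradient, so both quantities are obtained in one pass.

#include <petsctao.h>

/* Evaluate f(x) = 0.5*||x||^2 and its gradient g = x in a single callback. */
static PetscErrorCode FormFunctionGradient(Tao tao, Vec X, PetscReal *f, Vec G, void *ctx)
{
  PetscScalar dot;

  PetscFunctionBeginUser;
  PetscCall(VecCopy(X, G));            /* gradient of 0.5*||x||^2 is x */
  PetscCall(VecDot(X, X, &dot));
  *f = 0.5 * PetscRealPart(dot);       /* function value */
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **argv)
{
  Tao tao;
  Vec x;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCall(VecCreateSeq(PETSC_COMM_SELF, 10, &x));
  PetscCall(VecSet(x, 1.0));                          /* starting point */

  PetscCall(TaoCreate(PETSC_COMM_SELF, &tao));
  PetscCall(TaoSetType(tao, TAOCG));                  /* nonlinear conjugate gradient */
  PetscCall(TaoSetSolution(tao, x));
  PetscCall(TaoSetObjectiveAndGradient(tao, NULL, FormFunctionGradient, NULL));
  PetscCall(TaoSetFromOptions(tao));                  /* picks up -tao_cg_* options */
  PetscCall(TaoSolve(tao));

  PetscCall(TaoDestroy(&tao));
  PetscCall(VecDestroy(&x));
  PetscCall(PetscFinalize());
  return 0;
}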
Five variations are currently supported by the TAO implementation: the Fletcher-Reeves method, the Polak-Ribière method, the Polak-Ribière-Plus method [NW99], the Hestenes-Stiefel method, and the Dai-Yuan method. These conjugate gradient methods can be specified by using the command line argument -tao_cg_type <fr,pr,prp,hs,dy>, respectively. The default value is prp.
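If a particular variant should be fixed in the application code rather than on the command line, one possibility is to preload the option into the PETSc options database before TaoSetFromOptions() is called; this is only a sketch, and the option name is the one quoted above:

PetscCall(PetscOptionsSetValue(NULL, "-tao_cg_type", "prp"));  /* select the Polak-Ribière-Plus variant */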
The conjugate gradient method incorporates automatic restarts when successive gradients are not sufficiently orthogonal. TAO measures the orthogonality by dividing the inner product of the gradient at the current point and the gradient at the previous point by the square of the Euclidean norm of the gradient at the current point. When the absolute value of this ratio is greater than η, the algorithm restarts using the gradient direction. The parameter η can be set using the command line argument -tao_cg_eta <double>; 0.1 is the default value.
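The restart test described above can be sketched as follows, assuming the current and previous gradients are held in PETSc vectors G and Gprev and that eta holds the -tao_cg_eta tolerance (the variable names here are illustrative, not part of the TAO interface):

PetscScalar inner;
PetscReal   gnorm;

PetscCall(VecDot(G, Gprev, &inner));      /* inner product of current and previous gradients */
PetscCall(VecNorm(G, NORM_2, &gnorm));    /* Euclidean norm of the current gradient */
if (PetscAbsScalar(inner) / (gnorm * gnorm) > eta) {
  /* successive gradients are not sufficiently orthogonal:
     restart with the negative gradient as the next search direction */
}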