TAOBNCG

Bound-constrained Nonlinear Conjugate Gradient method.

Options Database Keys

  • -tao_bncg_recycle - enable recycling the latest calculated gradient vector in subsequent TaoSolve() calls (currently disabled)

  • -tao_bncg_eta - restart tolerance

  • -tao_bncg_type <taocg_type> - CG formula

  • -tao_bncg_as_type <none,bertsekas> - active set estimation method

  • -tao_bncg_as_tol - tolerance used in Bertsekas active-set estimation

  • -tao_bncg_as_step - trial step length used in Bertsekas active-set estimation

  • -tao_bncg_eps - cutoff used to determine whether to restart based on the step length at each iteration, and whether to continue using the previous step direction. Defaults to machine precision.

  • -tao_bncg_theta - convex combination parameter for the Broyden method

  • -tao_bncg_hz_eta - cutoff tolerance for the beta term in the HZ, DK methods

  • -tao_bncg_dk_eta - cutoff tolerance for the beta term in the HZ, DK methods

  • -tao_bncg_xi - Multiplicative constant of the gamma term in the KD method

  • -tao_bncg_hz_theta - Multiplicative constant of the theta term for the HZ method

  • -tao_bncg_bfgs_scale - Scaling parameter of the BFGS contribution to the scalar Broyden method

  • -tao_bncg_dfp_scale - Scaling parameter of the DFP contribution to the scalar Broyden method

  • -tao_bncg_diag_scaling - Whether or not to use diagonal initialization/preconditioning for the CG methods. Defaults to true.

  • -tao_bncg_dynamic_restart - use dynamic restart strategy in the HZ, DK, KD methods

  • -tao_bncg_unscaled_restart - whether or not to scale the gradient when doing gradient descent restarts

  • -tao_bncg_zeta - Scaling parameter in the KD method

  • -tao_bncg_delta_min - Minimum bound for rescaling during restarted gradient descent steps

  • -tao_bncg_delta_max - Maximum bound for rescaling during restarted gradient descent steps

  • -tao_bncg_min_quad - Number of quadratic-like steps in a row necessary to do a dynamic restart

  • -tao_bncg_min_restart_num - Multiplier x that guarantees a gradient descent step every x*n iterations, where n is the dimension of the problem

  • -tao_bncg_spaced_restart - whether or not to do gradient descent steps every x*n iterations

  • -tao_bncg_no_scaling - If true, eliminates all scaling, including defaults.

  • -tao_bncg_neg_xi - Whether or not to use negative xi in the KD method under certain conditions
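
As a rough illustration (not taken from this manual page), the sketch below shows one way these keys could reach TAOBNCG through TaoSetFromOptions(). It assumes a recent PETSc release (PetscCall(), TaoSetSolution(), TaoSetObjectiveAndGradient(); older releases use TaoSetInitialVector() and TaoSetObjectiveAndGradientRoutine()), and the quadratic objective is a hypothetical stand-in for user code.

  #include <petsctao.h>

  /* Hypothetical objective standing in for user code:
     f(x) = 0.5*||x||^2 with gradient g(x) = x. */
  static PetscErrorCode FormFunctionGradient(Tao tao, Vec X, PetscReal *f, Vec G, void *ctx)
  {
    PetscScalar xtx;

    PetscFunctionBeginUser;
    PetscCall(VecCopy(X, G));            /* gradient is x itself */
    PetscCall(VecDot(X, X, &xtx));
    *f = 0.5 * PetscRealPart(xtx);
    PetscFunctionReturn(PETSC_SUCCESS);
  }

  int main(int argc, char **argv)
  {
    Tao tao;
    Vec x, xl, xu;

    PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

    /* solution vector with initial guess, plus simple box bounds */
    PetscCall(VecCreateSeq(PETSC_COMM_SELF, 10, &x));
    PetscCall(VecSet(x, 1.0));
    PetscCall(VecDuplicate(x, &xl));
    PetscCall(VecDuplicate(x, &xu));
    PetscCall(VecSet(xl, 0.25));
    PetscCall(VecSet(xu, 2.0));

    PetscCall(TaoCreate(PETSC_COMM_SELF, &tao));
    PetscCall(TaoSetType(tao, TAOBNCG));
    PetscCall(TaoSetSolution(tao, x));   /* TaoSetInitialVector() on older releases */
    PetscCall(TaoSetObjectiveAndGradient(tao, NULL, FormFunctionGradient, NULL));
    PetscCall(TaoSetVariableBounds(tao, xl, xu));
    PetscCall(TaoSetFromOptions(tao));   /* picks up the -tao_bncg_* keys above */
    PetscCall(TaoSolve(tao));

    PetscCall(TaoDestroy(&tao));
    PetscCall(VecDestroy(&x));
    PetscCall(VecDestroy(&xl));
    PetscCall(VecDestroy(&xu));
    PetscCall(PetscFinalize());
    return 0;
  }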

Notes

CG formulas are:

  • “gd” - Gradient Descent

  • “fr” - Fletcher-Reeves

  • “pr” - Polak-Ribiere-Polyak

  • “prp” - Polak-Ribiere-Plus

  • “hs” - Hestenes-Stiefel

  • “dy” - Dai-Yuan

  • “ssml_bfgs” - Self-Scaling Memoryless BFGS

  • “ssml_dfp” - Self-Scaling Memoryless DFP

  • “ssml_brdn” - Self-Scaling Memoryless Broyden

  • “hz” - Hager-Zhang (CG_DESCENT 5.3)

  • “dk” - Dai-Kou (2013)

  • “kd” - Kou-Dai (2015)
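
For example, with an application executable (here called ./app, an illustrative name) that calls TaoSetFromOptions(), a formula can be selected at run time:

  ./app -tao_type bncg -tao_bncg_type hz -tao_monitor

Any of the formula names listed above can be substituted for hz.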

Level

beginner

Location

src/tao/bound/impls/bncg/bncg.c

