petsc4py.PETSc.TAO#

class petsc4py.PETSc.TAO#

Bases: Object

Optimization solver.

TAO is described in the PETSc manual.

See also

Tao
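
The following minimal sketch (not taken from the PETSc manual; the quadratic problem and the LMVM type choice are illustrative) shows the typical workflow: create the solver, register objective and gradient callbacks, and solve.

    # Minimal usage sketch: minimize f(x) = (x0 - 1)^2 + (x1 - 2)^2.
    from petsc4py import PETSc

    def objective(tao, x):
        return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

    def gradient(tao, x, g):
        g[0] = 2.0 * (x[0] - 1.0)
        g[1] = 2.0 * (x[1] - 2.0)
        g.assemble()

    x = PETSc.Vec().createSeq(2)      # initial guess (zeros)
    g = PETSc.Vec().createSeq(2)      # work vector for the gradient

    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setType(PETSc.TAO.Type.LMVM)  # unconstrained quasi-Newton
    tao.setObjective(objective)
    tao.setGradient(gradient, g)
    tao.setSolution(x)
    tao.setFromOptions()
    tao.solve()
    print(tao.getSolution().getArray(), tao.getConvergedReason())
    tao.destroy()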

Enumerations

ALMMType

TAO Augmented Lagrangian Multiplier method (ALMM) Type.

BNCGType

TAO Bound Constrained Conjugate Gradient (BNCG) Update Type.

ConvergedReason

TAO solver termination reason.

Type

TAO solver type.

Methods Summary

appendOptionsPrefix(prefix)

Append to the prefix used for searching for options in the database.

cancelMonitor()

Cancel all the monitors of the solver.

computeConstraints(x, c)

Compute the vector corresponding to the constraints.

computeDualVariables(xl, xu)

Compute the dual vectors corresponding to variables' bounds.

computeGradient(x, g)

Compute the gradient of the objective function.

computeHessian(x, H[, P])

Compute the Hessian of the objective function.

computeJacobian(x, J[, P])

Compute the Jacobian.

computeObjective(x)

Compute the value of the objective function.

computeObjectiveGradient(x, g)

Compute the gradient of the objective function and its value.

computeResidual(x, f)

Compute the residual.

computeVariableBounds(xl, xu)

Compute the vectors corresponding to variables' bounds.

create([comm])

Create a TAO solver.

createPython([context, comm])

Create an optimization solver of Python type.

destroy()

Destroy the solver.

getALMMSubsolver()

Return the subsolver inside the ALMM solver.

getALMMType()

Return the type of the ALMM solver.

getAppCtx()

Return the application context.

getBNCGType()

Return the type of the BNCG solver.

getBRGNDampingVector()

Return the damping vector.

getBRGNSubsolver()

Return the subsolver inside the BRGN solver.

getConstraintTolerances()

Return the constraints tolerance parameters used in the convergence tests.

getConvergedReason()

Return the reason for the solver convergence.

getConvergenceTest()

Return the callback used to test for solver convergence.

getEqualityConstraints()

Return tuple holding vector and callback of equality constraints.

getGradient()

Return the vector used to store the gradient and the evaluation callback.

getGradientNorm()

Return the matrix used to compute inner products.

getHessian()

Return the matrices used to store the Hessian and the evaluation callback.

getInequalityConstraints()

Return tuple holding vector and callback of inequality constraints.

getIterationNumber()

Return the current iteration number.

getJacobianEquality()

Return matrix, precon matrix and callback of equality constraints Jacobian.

getJacobianInequality()

Return matrix, precon matrix and callback of inequality constraints Jacobian.

getKSP()

Return the linear solver used by the nonlinear solver.

getLMVMH0()

Return the initial Hessian for the quasi-Newton approximation.

getLMVMH0KSP()

Return the KSP for the inverse of the initial Hessian approximation.

getLMVMMat()

Get the LMVM matrix.

getLineSearch()

Return the TAO Line Search object.

getMaximumFunctionEvaluations()

Return the maximum number of objective evaluations within the solver.

getMaximumIterations()

Return the maximum number of solver iterations.

getMonitor()

Return the callback used to monitor solver convergence.

getObjective()

Return the objective evaluation callback.

getObjectiveAndGradient()

Return the vector used to store the gradient and the evaluation callback.

getObjectiveValue()

Return the current value of the objective function.

getOptionsPrefix()

Return the prefix used for searching for options in the database.

getPythonContext()

Return the instance of the class implementing the required Python methods.

getPythonType()

Return the fully qualified Python name of the class used by the solver.

getSolution()

Return the vector holding the solution.

getSolutionNorm()

Return the objective function value and the norms of gradient and constraints.

getSolutionStatus()

Return the solution status.

getTolerances()

Return the tolerance parameters used in the solver convergence tests.

getType()

Return the type of the solver.

getUpdate()

Return the callback to compute the update.

getVariableBounds()

Return the lower and upper bounds vectors.

monitor([its, f, res, cnorm, step])

Monitor the solver.

setALMMSubsolver(subsolver)

Set the subsolver inside the ALMM solver.

setALMMType(tao_almm_type)

Set the ALMM type of the solver.

setAppCtx(appctx)

Set the application context.

setBNCGType(cg_type)

Set the type of the BNCG solver.

setBRGNDictionaryMatrix(D)

Set the dictionary matrix.

setBRGNRegularizerHessian(hessian[, H, ...])

Set the callback to compute the regularizer Hessian.

setBRGNRegularizerObjectiveGradient(objgrad)

Set the callback to compute the regularizer objective and gradient.

setBRGNRegularizerWeight(weight)

Set the regularizer weight.

setBRGNSmoothL1Epsilon(epsilon)

Set the smooth L1 epsilon.

setConstraintTolerances([catol, crtol])

Set the constraints tolerance parameters used in the solver convergence tests.

setConstraints(constraints[, C, args, kargs])

Set the callback to compute constraints.

setConvergedReason(reason)

Set the termination flag.

setConvergenceTest(converged[, args, kargs])

Set the callback used to test for solver convergence.

setEqualityConstraints(equality_constraints, c)

Set equality constraints callback.

setFromOptions()

Configure the solver from the options database.

setGradient(gradient[, g, args, kargs])

Set the gradient evaluation callback.

setGradientNorm(mat)

Set the matrix used to compute inner products.

setHessian(hessian[, H, P, args, kargs])

Set the callback to compute the Hessian matrix.

setInequalityConstraints(...[, args, kargs])

Set inequality constraints callback.

setInitialTrustRegionRadius(radius)

Set the initial trust region radius.

setIterationNumber(its)

Set the current iteration number.

setJacobian(jacobian[, J, P, args, kargs])

Set the callback to compute the Jacobian.

setJacobianDesign(jacobian_design[, J, ...])

Set Jacobian design callback.

setJacobianEquality(jacobian_equality[, J, ...])

Set Jacobian equality constraints callback.

setJacobianInequality(jacobian_inequality[, ...])

Set Jacobian inequality constraints callback.

setJacobianResidual(jacobian[, J, P, args, ...])

Set the callback to compute the least-squares residual Jacobian.

setJacobianState(jacobian_state[, J, P, I, ...])

Set Jacobian state callback.

setLMVMH0(mat)

Set the initial Hessian for the quasi-Newton approximation.

setLMVMMat(M)

Set the LMVM matrix.

setMaximumFunctionEvaluations(mit)

Set the maximum number of objective evaluations within the solver.

setMaximumIterations(mit)

Set the maximum number of solver iterations.

setMonitor(monitor[, args, kargs])

Set the callback used to monitor solver convergence.

setObjective(objective[, args, kargs])

Set the objective function evaluation callback.

setObjectiveGradient(objgrad[, g, args, kargs])

Set the objective function and gradient evaluation callback.

setOptionsPrefix(prefix)

Set the prefix used for searching for options in the database.

setPythonContext(context)

Set the instance of the class implementing the required Python methods.

setPythonType(py_type)

Set the fully qualified Python name of the class to be used.

setResidual(residual, R[, args, kargs])

Set the residual evaluation callback for least-squares applications.

setSolution(x)

Set the vector used to store the solution.

setStateDesignIS([state, design])

Set the index sets indicating state and design variables.

setTolerances([gatol, grtol, gttol])

Set the tolerance parameters used in the solver convergence tests.

setType(tao_type)

Set the type of the solver.

setUp()

Set up the internal data structures for using the solver.

setUpdate(update[, args, kargs])

Set the callback to compute update at each optimization step.

setVariableBounds(varbounds[, args, kargs])

Set the lower and upper bounds for the optimization problem.

solve([x])

Solve the optimization problem.

view([viewer])

View the solver.

Attributes Summary

appctx

Application context.

cnorm

Constraints norm.

converged

Boolean indicating if the solver has converged.

ctol

Broken.

diverged

Boolean indicating if the solver has failed.

ftol

Broken.

function

Objective value.

gnorm

Gradient norm.

gradient

Gradient vector.

gtol

Broken.

iterating

Boolean indicating if the solver has not converged yet.

its

Number of iterations.

ksp

Linear solver.

objective

Objective value.

reason

Converged reason.

solution

Solution vector.

Methods Documentation

appendOptionsPrefix(prefix)#

Append to the prefix used for searching for options in the database.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:220

Parameters:

prefix (str | None)

Return type:

None

cancelMonitor()#

Cancel all the monitors of the solver.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:1373

Return type:

None

computeConstraints(x, c)#

Compute the vector corresponding to the constraints.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • c (Vec) – The output constraints vector.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1040

computeDualVariables(xl, xu)#

Compute the dual vectors corresponding to variables’ bounds.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1004

Parameters:
  • xl (Vec) – The output vector for the dual variables of the lower bounds.

  • xu (Vec) – The output vector for the dual variables of the upper bounds.

Return type:

None

computeGradient(x, g)#

Compute the gradient of the objective function.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • g (Vec) – The output gradient vector.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:963

computeHessian(x, H, P=None)#

Compute the Hessian of the objective function.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • H (Mat) – The output Hessian matrix.

  • P (Mat | None) – The output Hessian matrix used to construct the preconditioner.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1059

computeJacobian(x, J, P=None)#

Compute the Jacobian.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • J (Mat) – The output Jacobian matrix.

  • P (Mat | None) – The output Jacobian matrix used to construct the preconditioner.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1082

computeObjective(x)#

Compute the value of the objective function.

Collective.

Parameters:

x (Vec) – The parameter vector.

Return type:

float

Source code at petsc4py/PETSc/TAO.pyx:925

computeObjectiveGradient(x, g)#

Compute the gradient of the objective function and its value.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • g (Vec) – The output gradient vector.

Return type:

float

Source code at petsc4py/PETSc/TAO.pyx:982

computeResidual(x, f)#

Compute the residual.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • f (Vec) – The output vector.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:944

computeVariableBounds(xl, xu)#

Compute the vectors corresponding to variables’ bounds.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1016

Parameters:
  • xl (Vec) – The output vector for the lower bounds.

  • xu (Vec) – The output vector for the upper bounds.

Return type:

None

create(comm=None)#

Create a TAO solver.

Collective.

Parameters:

comm (Comm | None) – MPI communicator, defaults to Sys.getDefaultComm.

Return type:

Self

Source code at petsc4py/PETSc/TAO.pyx:152

createPython(context=None, comm=None)#

Create an optimization solver of Python type.

Collective.

Parameters:
  • context (Any) – An instance of the Python class implementing the required methods.

  • comm (Comm | None) – MPI communicator, defaults to Sys.getDefaultComm.

Return type:

Self

Source code at petsc4py/PETSc/TAO.pyx:1877
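
A rough sketch of the wiring for a Python-type solver follows. The MyTao class and its solve hook are illustrative placeholders; the exact set of optional methods the solver looks up on the context is documented with the Python TAO type, not here.

    from petsc4py import PETSc

    class MyTao:
        def solve(self, tao):
            # Hypothetical hook: a real implementation would update
            # tao.getSolution() and set a converged reason here.
            pass

    tao = PETSc.TAO().createPython(MyTao(), comm=PETSc.COMM_SELF)
    ctx = tao.getPythonContext()      # returns the MyTao instance
    tao.destroy()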

destroy()#

Destroy the solver.

Collective.

See also

TaoDestroy

Source code at petsc4py/PETSc/TAO.pyx:139

Return type:

Self

getALMMSubsolver()#

Return the subsolver inside the ALMM solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1717

Return type:

TAO

getALMMType()#

Return the type of the ALMM solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1732

Return type:

ALMMType

getAppCtx()#

Return the application context.

Source code at petsc4py/PETSc/TAO.pyx:293

Return type:

Any

getBNCGType()#

Return the type of the BNCG solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1569

Return type:

BNCGType

getBRGNDampingVector()#

Return the damping vector.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1862

Return type:

Vec

getBRGNSubsolver()#

Return the subsolver inside the BRGN solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1779

Return type:

TAO

getConstraintTolerances()#

Return the constraints tolerance parameters used in the convergence tests.

Not collective.

Returns:

  • catol (float) – The absolute tolerance on the constraints norm.

  • crtol (float) – The relative tolerance on the constraints norm.

Return type:

tuple[float, float]

Source code at petsc4py/PETSc/TAO.pyx:1244

getConvergedReason()#

Return the reason for the solver convergence.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1626

Return type:

ConvergedReason

getConvergenceTest()#

Return the callback used to test for solver convergence.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1293

Return type:

tuple[TAOConvergedFunction, tuple[Any, …], dict[str, Any]]

getEqualityConstraints()#

Return tuple holding vector and callback of equality constraints.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:755

Return type:

tuple[Vec, tuple[TAOConstraintsFunction, tuple[Any, …] | None, dict[str, Any] | None]]

getGradient()#

Return the vector used to store the gradient and the evaluation callback.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:436

Return type:

tuple[Vec, TAOGradientFunction]

getGradientNorm()#

Return the matrix used to compute inner products.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1484

Return type:

Mat

getHessian()#

Return the matrices used to store the Hessian and the evaluation callback.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:598

Return type:

tuple[Mat, Mat, TAOHessianFunction]

getInequalityConstraints()#

Return tuple holding vector and callback of inequality constraints.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:828

Return type:

tuple[Vec, tuple[TAOConstraintsFunction, tuple[Any, …] | None, dict[str, Any] | None]]

getIterationNumber()#

Return the current iteration number.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1596

Return type:

int

getJacobianEquality()#

Return matrix, precon matrix and callback of equality constraints Jacobian.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:792

Return type:

tuple[Mat, Mat, tuple[TAOConstraintsJacobianFunction, tuple[Any, …] | None, dict[str, Any] | None]]

getJacobianInequality()#

Return matrix, precon matrix and callback of ineq. constraints Jacobian.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:865

Return type:

tuple[Mat, Mat, tuple[TAOConstraintsJacobianFunction, tuple[Any, …] | None, dict[str, Any] | None]]

getKSP()#

Return the linear solver used by the nonlinear solver.

Not collective.

See also

TaoGetKSP

Source code at petsc4py/PETSc/TAO.pyx:1700

Return type:

KSP

getLMVMH0()#

Return the initial Hessian for the quasi-Newton approximation.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1511

Return type:

Mat

getLMVMH0KSP()#

Return the KSP for the inverse of the initial Hessian approximation.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1526

Return type:

KSP

getLMVMMat()#

Get the LMVM matrix.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:709

Return type:

Mat

getLineSearch()#

Return the TAO Line Search object.

Not collective.

See also

TaoGetLineSearch

Source code at petsc4py/PETSc/TAO.pyx:1959

Return type:

TAOLineSearch

getMaximumFunctionEvaluations()#

Return the maximum number of objective evaluations within the solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1204

Return type:

int

getMaximumIterations()#

Return the maximum number of solver iterations.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1177

Return type:

int

getMonitor()#

Return the callback used to monitor solver convergence.

Not collective.

See also

setMonitor

Source code at petsc4py/PETSc/TAO.pyx:1361

Return type:

list[tuple[TAOMonitorFunction, tuple[Any, …], dict[str, Any]]]

getObjective()#

Return the objective evaluation callback.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:423

Return type:

TAOObjectiveFunction

getObjectiveAndGradient()#

Return the vector used to store the gradient and the evaluation callback.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:482

Return type:

tuple[Vec, TAOObjectiveGradientFunction]

getObjectiveValue()#

Return the current value of the objective function.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1610

Return type:

float

getOptionsPrefix()#

Return the prefix used for searching for options in the database.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:234

Return type:

str

getPythonContext()#

Return the instance of the class implementing the required Python methods.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1914

Return type:

Any

getPythonType()#

Return the fully qualified Python name of the class used by the solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1944

Return type:

str

getSolution()#

Return the vector holding the solution.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1457

Return type:

Vec

getSolutionNorm()#

Return the objective function value and the norms of gradient and constraints.

Not collective.

Returns:

  • f (float) – Current value of the objective function.

  • res (float) – Current value of the residual norm.

  • cnorm (float) – Current value of the constraints norm.

Return type:

tuple[float, float, float]

Source code at petsc4py/PETSc/TAO.pyx:1640

getSolutionStatus()#

Return the solution status.

Not collective.

Returns:

  • its (int) – Current number of iterations.

  • f (float) – Current value of the objective function.

  • res (float) – Current value of the residual norm.

  • cnorm (float) – Current value of the constraints norm.

  • step (float) – Current value of the step.

  • reason (ConvergedReason) – Current value of converged reason.

Return type:

tuple[int, float, float, float, float, ConvergedReason]

Source code at petsc4py/PETSc/TAO.pyx:1665

getTolerances()#

Return the tolerance parameters used in the solver convergence tests.

Not collective.

Returns:

  • gatol (float) – The absolute tolerance on the gradient norm.

  • grtol (float) – The relative tolerance on the gradient norm.

  • gttol (float) – The tolerance on the gradient norm relative to the initial gradient norm.

Return type:

tuple[float, float, float]

Source code at petsc4py/PETSc/TAO.pyx:1140

getType()#

Return the type of the solver.

Not collective.

See also

setType, TaoGetType

Source code at petsc4py/PETSc/TAO.pyx:192

Return type:

str

getUpdate()#

Return the callback to compute the update.

Not collective.

See also

setUpdate

Source code at petsc4py/PETSc/TAO.pyx:911

Return type:

tuple[TAOUpdateFunction, tuple[Any, …], dict[str, Any]]

getVariableBounds()#

Return the lower and upper bounds vectors.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1541

Return type:

tuple[Vec, Vec]

monitor(its=None, f=None, res=None, cnorm=None, step=None)#

Monitor the solver.

Collective.

This function should be called without arguments, unless users want to modify the values internally stored by the solver.

Parameters:
  • its (int) – Current number of iterations or None to use the value stored internally by the solver.

  • f (float) – Current value of the objective function or None to use the value stored internally by the solver.

  • res (float) – Current value of the residual norm or None to use the value stored internally by the solver.

  • cnorm (float) – Current value of the constraints norm or None to use the value stored internally by the solver.

  • step (float) – Current value of the step or None to use the value stored internally by the solver.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1387

setALMMSubsolver(subsolver)#

Set the subsolver inside the ALMM solver.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:1746

Parameters:

subsolver (TAO)

Return type:

None

setALMMType(tao_almm_type)#

Set the ALMM type of the solver.

Logically collective.

Parameters:

tao_almm_type (ALMMType) – The type of the solver.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1759

setAppCtx(appctx)#

Set the application context.

Source code at petsc4py/PETSc/TAO.pyx:289

Parameters:

appctx (Any)

Return type:

None

setBNCGType(cg_type)#

Set the type of the BNCG solver.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1556

Parameters:

cg_type (BNCGType)

Return type:

None

setBRGNDictionaryMatrix(D)#

Set the dictionary matrix.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1850

Parameters:

D (Mat)

Return type:

None

setBRGNRegularizerHessian(hessian, H=None, args=None, kargs=None)#

Set the callback to compute the regularizer Hessian.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:1810

Parameters:
Return type:

None

setBRGNRegularizerObjectiveGradient(objgrad, args=None, kargs=None)#

Set the callback to compute the regularizer objective and gradient.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:1794

Parameters:
Return type:

None

setBRGNRegularizerWeight(weight)#

Set the regularizer weight.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1828

Parameters:

weight (float)

Return type:

None

setBRGNSmoothL1Epsilon(epsilon)#

Set the smooth L1 epsilon.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1837

Parameters:

epsilon (float)

Return type:

None

setConstraintTolerances(catol=None, crtol=None)#

Set the constraints tolerance parameters used in the solver convergence tests.

Collective.

Parameters:
  • catol (float) – The absolute tolerance on the constraints norm, or DETERMINE to use the value set when the object’s type was set. Defaults to CURRENT.

  • crtol (float) – The relative tolerance on the constraints norm, or DETERMINE to use the value set when the object’s type was set. Defaults to CURRENT.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1218

setConstraints(constraints, C=None, args=None, kargs=None)#

Set the callback to compute constraints.

Logically collective.

Parameters:
Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:534

setConvergedReason(reason)#

Set the termination flag.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1305

Parameters:

reason (ConvergedReason)

Return type:

None

setConvergenceTest(converged, args=None, kargs=None)#

Set the callback used to test for solver convergence.

Logically collective.

Parameters:
  • converged (TAOConvergedFunction | None) – The callback. If None, reset to the default convergence test.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1265
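
A sketch of a custom test follows; the gradient threshold is illustrative, and the callback reports its decision through setConvergedReason.

    from petsc4py import PETSc

    def converged(tao):
        its, f, gnorm, cnorm, step, reason = tao.getSolutionStatus()
        if gnorm < 1e-6:
            tao.setConvergedReason(PETSc.TAO.ConvergedReason.CONVERGED_USER)
        elif its >= tao.getMaximumIterations():
            tao.setConvergedReason(PETSc.TAO.ConvergedReason.DIVERGED_MAXITS)
        else:
            tao.setConvergedReason(PETSc.TAO.ConvergedReason.CONTINUE_ITERATING)

    tao = PETSc.TAO().create()
    tao.setConvergenceTest(converged)
    # Passing None restores the default convergence test.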

setEqualityConstraints(equality_constraints, c, args=None, kargs=None)#

Set equality constraints callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:737

Parameters:
Return type:

None

setFromOptions()#

Configure the solver from the options database.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:248

Return type:

None
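
A short sketch of option-driven configuration (the "opt_" prefix and the option values are illustrative):

    from petsc4py import PETSc

    opts = PETSc.Options()
    opts.setValue("opt_tao_type", "lmvm")
    opts.setValue("opt_tao_gatol", 1.0e-8)

    tao = PETSc.TAO().create()
    tao.setOptionsPrefix("opt_")
    tao.setFromOptions()   # picks up -opt_tao_type and -opt_tao_gatol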

setGradient(gradient, g=None, args=None, kargs=None)#

Set the gradient evaluation callback.

Logically collective.

Parameters:
  • gradient (TAOGradientFunction) – The gradient callback.

  • g (Vec | None) – The vector to store the gradient.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:394
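
A sketch of a gradient-only callback (the objective f(x) = sum(x_i^2) is illustrative; its gradient is 2x):

    from petsc4py import PETSc

    def gradient(tao, x, g):
        xx = x.getArray(readonly=True)
        g.setArray(2.0 * xx)          # grad f = 2 x
        g.assemble()

    g = PETSc.Vec().createSeq(3)      # vector used to store the gradient
    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setGradient(gradient, g)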

setGradientNorm(mat)#

Set the matrix used to compute inner products.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1472

Parameters:

mat (Mat)

Return type:

None

setHessian(hessian, H=None, P=None, args=None, kargs=None)#

Set the callback to compute the Hessian matrix.

Logically collective.

Parameters:
  • hessian (TAOHessianFunction) – The Hessian callback.

  • H (Mat | None) – The matrix to store the Hessian.

  • P (Mat | None) – The matrix to construct the preconditioner.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:563
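
A sketch of a Hessian callback (the constant Hessian 2*I and the dense storage format are illustrative choices):

    from petsc4py import PETSc

    n = 3
    H = PETSc.Mat().createDense([n, n])
    H.setUp()

    def hessian(tao, x, H, P):
        # P is H here, since no separate preconditioning matrix was supplied.
        H.zeroEntries()
        for i in range(n):
            H.setValue(i, i, 2.0)
        H.assemble()

    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setType(PETSc.TAO.Type.NLS)   # a Newton-type method that uses the Hessian
    tao.setHessian(hessian, H)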

setInequalityConstraints(inequality_constraints, c, args=None, kargs=None)#

Set inequality constraints callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:810

Parameters:
Return type:

None

setInitialTrustRegionRadius(radius)#

Set the initial trust region radius.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:274

Parameters:

radius (float)

Return type:

None

setIterationNumber(its)#

Set the current iteration number.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1583

Parameters:

its (int)

Return type:

None

setJacobian(jacobian, J=None, P=None, args=None, kargs=None)#

Set the callback to compute the Jacobian.

Logically collective.

Parameters:
  • jacobian (TAOJacobianFunction) – The Jacobian callback.

  • J (Mat | None) – The matrix to store the Jacobian.

  • P (Mat | None) – The matrix to construct the preconditioner.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:616

setJacobianDesign(jacobian_design, J=None, args=None, kargs=None)#

Set Jacobian design callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:689

Parameters:
Return type:

None

setJacobianEquality(jacobian_equality, J=None, P=None, args=None, kargs=None)#

Set Jacobian equality constraints callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:770

Parameters:
Return type:

None

setJacobianInequality(jacobian_inequality, J=None, P=None, args=None, kargs=None)#

Set Jacobian inequality constraints callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:843

Parameters:
Return type:

None

setJacobianResidual(jacobian, J=None, P=None, args=None, kargs=None)#

Set the callback to compute the least-squares residual Jacobian.

Logically collective.

Parameters:
Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:361

setJacobianState(jacobian_state, J=None, P=None, I=None, args=None, kargs=None)#

Set Jacobian state callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:665

Parameters:
Return type:

None

setLMVMH0(mat)#

Set the initial Hessian for the quasi-Newton approximation.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1499

Parameters:

mat (Mat)

Return type:

None

setLMVMMat(M)#

Set the LMVM matrix.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:724

Parameters:

M (Mat)

Return type:

None

setMaximumFunctionEvaluations(mit)#

Set the maximum number of objective evaluations within the solver.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1191

Parameters:

mit (int)

Return type:

None

setMaximumIterations(mit)#

Set the maximum number of solver iterations.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1164

Parameters:

mit (int)

Return type:

float

setMonitor(monitor, args=None, kargs=None)#

Set the callback used to monitor solver convergence.

Logically collective.

Parameters:
  • monitor (TAOMonitorFunction) – The monitor callback.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1332
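
A sketch of a custom monitor that prints the solver state at every iteration:

    from petsc4py import PETSc

    def monitor(tao):
        its, f, gnorm, cnorm, step, reason = tao.getSolutionStatus()
        PETSc.Sys.Print(f"it {its:3d}: f = {f:.6e}  ||g|| = {gnorm:.3e}")

    tao = PETSc.TAO().create()
    tao.setMonitor(monitor)
    # cancelMonitor() removes all installed monitors.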

setObjective(objective, args=None, kargs=None)#

Set the objective function evaluation callback.

Logically collective.

Parameters:
  • objective (TAOObjectiveFunction) – The objective function callback.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:309
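
A sketch of an objective-only callback (the Rosenbrock-style function is illustrative):

    from petsc4py import PETSc

    def objective(tao, x):
        # x is the current iterate; return the scalar objective value.
        return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setObjective(objective)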

setObjectiveGradient(objgrad, g=None, args=None, kargs=None)#

Set the objective function and gradient evaluation callback.

Logically collective.

Parameters:
  • objgrad (TAOObjectiveGradientFunction) – The objective function and gradient callback.

  • g (Vec | None) – The vector to store the gradient.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:452
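
A sketch of a combined objective/gradient callback (the quadratic f(x) = 0.5*||x||^2 is illustrative):

    from petsc4py import PETSc

    def objgrad(tao, x, g):
        x.copy(g)                  # grad f = x
        return 0.5 * x.dot(x)      # return the objective value

    g = PETSc.Vec().createSeq(4)   # vector used to store the gradient
    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setObjectiveGradient(objgrad, g)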

setOptionsPrefix(prefix)#

Set the prefix used for searching for options in the database.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:206

Parameters:

prefix (str | None)

Return type:

None

setPythonContext(context)#

Set the instance of the class implementing the required Python methods.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1902

Parameters:

context (Any)

Return type:

None

setPythonType(py_type)#

Set the fully qualified Python name of the class to be used.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1929

Parameters:

py_type (str)

Return type:

None

setResidual(residual, R, args=None, kargs=None)#

Set the residual evaluation callback for least-squares applications.

Logically collective.

Parameters:
  • residual (TAOResidualFunction) – The residual callback.

  • R (Vec) – The vector to store the residual.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:334
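
A sketch of a least-squares setup (the data vector d and the BRGN type choice are illustrative; BRGN typically also needs setJacobianResidual):

    from petsc4py import PETSc

    d = [1.0, 2.0, 3.0]            # illustrative data

    def residual(tao, x, r):
        for i, di in enumerate(d):
            r[i] = x[i] - di       # r_i(x) = x_i - d_i
        r.assemble()

    R = PETSc.Vec().createSeq(len(d))   # storage for the residual
    x = PETSc.Vec().createSeq(len(d))

    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setType(PETSc.TAO.Type.BRGN)
    tao.setResidual(residual, R)
    tao.setSolution(x)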

setSolution(x)#

Set the vector used to store the solution.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:297

Parameters:

x (Vec)

Return type:

None

setStateDesignIS(state=None, design=None)#

Set the index sets indicating state and design variables.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:650

Parameters:
Return type:

None

setTolerances(gatol=None, grtol=None, gttol=None)#

Set the tolerance parameters used in the solver convergence tests.

Collective.

Parameters:
  • gatol (float) – The absolute tolerance on the gradient norm, or DETERMINE to use the value set when the object’s type was set. Defaults to CURRENT.

  • grtol (float) – The tolerance on the gradient norm relative to the initial norm of the objective, or DETERMINE to use the value set when the object’s type was set. Defaults to CURRENT.

  • gttol (float) – The tolerance on the gradient norm relative to the initial gradient norm, or DETERMINE to use the value set when the object’s type was set. Defaults to CURRENT.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1107
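
A short sketch (the tolerance values are illustrative):

    from petsc4py import PETSc

    tao = PETSc.TAO().create()
    tao.setTolerances(gatol=1.0e-10, grtol=0.0, gttol=0.0)
    tao.setMaximumIterations(200)
    print(tao.getTolerances())    # -> (1e-10, 0.0, 0.0)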

setType(tao_type)#

Set the type of the solver.

Logically collective.

Parameters:

tao_type (Type | str) – The type of the solver.

Return type:

None

See also

getType, TaoSetType

Source code at petsc4py/PETSc/TAO.pyx:173

setUp()#

Set up the internal data structures for using the solver.

Collective.

See also

TaoSetUp

Source code at petsc4py/PETSc/TAO.pyx:260

Return type:

None

setUpdate(update, args=None, kargs=None)#

Set the callback to compute update at each optimization step.

Logically collective.

Parameters:
Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:882

setVariableBounds(varbounds, args=None, kargs=None)#

Set the lower and upper bounds for the optimization problem.

Logically collective.

Parameters:
Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:498
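
A sketch using the callback form for a simple box 0 <= x <= 1 (the box and the BLMVM type choice are illustrative; the callback signature mirrors computeVariableBounds):

    from petsc4py import PETSc

    def bounds(tao, xl, xu):
        xl.set(0.0)
        xu.set(1.0)

    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setType(PETSc.TAO.Type.BLMVM)   # bound-constrained quasi-Newton
    tao.setVariableBounds(bounds)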

solve(x=None)#

Solve the optimization problem.

Collective.

Parameters:

x (Vec | None) – The starting vector or None to use the vector stored internally.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1438
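
When a starting vector is passed, it is used as the initial guess and holds the solution on return, as in this sketch (the problem is illustrative and mirrors the class-level example above):

    from petsc4py import PETSc

    def objective(tao, x):
        return (x[0] + 1.0)**2 + (x[1] - 3.0)**2

    def gradient(tao, x, g):
        g[0] = 2.0 * (x[0] + 1.0)
        g[1] = 2.0 * (x[1] - 3.0)
        g.assemble()

    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setType(PETSc.TAO.Type.LMVM)
    tao.setObjective(objective)
    tao.setGradient(gradient, PETSc.Vec().createSeq(2))

    x0 = PETSc.Vec().createSeq(2)
    x0.set(10.0)                   # illustrative starting point
    tao.solve(x0)                  # x0 is updated in place with the solution
    print(x0.getArray(), tao.getConvergedReason())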

view(viewer=None)#

View the solver.

Collective.

Parameters:

viewer (Viewer | None) – A Viewer instance or None for the default viewer.

Return type:

None

See also

TaoView

Source code at petsc4py/PETSc/TAO.pyx:120

Attributes Documentation

appctx#

Application context.

Source code at petsc4py/PETSc/TAO.pyx:1980

cnorm#

Constraints norm.

Source code at petsc4py/PETSc/TAO.pyx:2049

converged#

Boolean indicating if the solver has converged.

Source code at petsc4py/PETSc/TAO.pyx:2086

ctol#

Broken.

Source code at petsc4py/PETSc/TAO.pyx:2024

diverged#

Boolean indicating if the solver has failed.

Source code at petsc4py/PETSc/TAO.pyx:2091

ftol#

Broken.

Source code at petsc4py/PETSc/TAO.pyx:1998

function#

Objective value.

Source code at petsc4py/PETSc/TAO.pyx:2064

gnorm#

Gradient norm.

Source code at petsc4py/PETSc/TAO.pyx:2044

gradient#

Gradient vector.

Source code at petsc4py/PETSc/TAO.pyx:2069

gtol#

Broken.

Source code at petsc4py/PETSc/TAO.pyx:2011

iterating#

Boolean indicating if the solver has not converged yet.

Source code at petsc4py/PETSc/TAO.pyx:2081

its#

Number of iterations.

Source code at petsc4py/PETSc/TAO.pyx:2039

ksp#

Linear solver.

Source code at petsc4py/PETSc/TAO.pyx:1990

objective#

Objective value.

Source code at petsc4py/PETSc/TAO.pyx:2059

reason#

Converged reason.

Source code at petsc4py/PETSc/TAO.pyx:2076

solution#

Solution vector.

Source code at petsc4py/PETSc/TAO.pyx:2054