petsc4py.PETSc.TAO#

class petsc4py.PETSc.TAO#

Bases: Object

Optimization solver.

TAO is described in the PETSc manual.

See also

Tao
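
A minimal usage sketch (for orientation only; the two-variable Rosenbrock objective and the LMVM quasi-Newton type are illustrative choices, not defaults):

    from petsc4py import PETSc

    def objective(tao, x):
        # Rosenbrock objective evaluated at the current iterate x (a Vec)
        return (1.0 - x[0])**2 + 100.0*(x[1] - x[0]**2)**2

    def gradient(tao, x, g):
        # fill the output Vec g with the gradient at x
        g[0] = -2.0*(1.0 - x[0]) - 400.0*x[0]*(x[1] - x[0]**2)
        g[1] = 200.0*(x[1] - x[0]**2)
        g.assemble()

    x = PETSc.Vec().createSeq(2)              # initial guess (zeros)
    tao = PETSc.TAO().create(comm=PETSc.COMM_SELF)
    tao.setType(PETSc.TAO.Type.LMVM)          # limited-memory quasi-Newton
    tao.setObjective(objective)
    tao.setGradient(gradient, x.duplicate())
    tao.setSolution(x)
    tao.setFromOptions()
    tao.solve()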

Enumerations

BNCGType

TAO Bound Constrained Conjugate Gradient (BNCG) Update Type.

ConvergedReason

TAO solver termination reason.

Type

TAO solver type.

Methods Summary

appendOptionsPrefix(prefix)

Append to the prefix used for searching for options in the database.

cancelMonitor()

Cancel all the monitors of the solver.

computeConstraints(x, c)

Compute the vector corresponding to the constraints.

computeDualVariables(xl, xu)

Compute the dual vectors corresponding to variables' bounds.

computeGradient(x, g)

Compute the gradient of the objective function.

computeHessian(x, H[, P])

Compute the Hessian of the objective function.

computeJacobian(x, J[, P])

Compute the Jacobian.

computeObjective(x)

Compute the value of the objective function.

computeObjectiveGradient(x, g)

Compute the gradient of the objective function and its value.

computeResidual(x, f)

Compute the residual.

computeVariableBounds(xl, xu)

Compute the vectors corresponding to variables' bounds.

create([comm])

Create a TAO solver.

createPython([context, comm])

Create an optimization solver of Python type.

destroy()

Destroy the solver.

getAppCtx()

Return the application context.

getBNCGType()

Return the type of the BNCG solver.

getBRGNDampingVector()

Return the damping vector.

getBRGNSubsolver()

Return the subsolver inside the BRGN solver.

getConstraintTolerances()

Return the constraints tolerance parameters used in the convergence tests.

getConvergedReason()

Return the termination flag.

getConvergenceTest()

Return the callback used to test for solver convergence.

getGradient()

Return the vector used to store the gradient and the evaluation callback.

getGradientNorm()

Return the matrix used to compute inner products.

getHessian()

Return the matrices used to store the Hessian and the evaluation callback.

getIterationNumber()

Return the current iteration number.

getKSP()

Return the linear solver used by the nonlinear solver.

getLMVMH0()

Return the initial Hessian for the quasi-Newton approximation.

getLMVMH0KSP()

Return the KSP for the inverse of the initial Hessian approximation.

getLineSearch()

Return the TAO Line Search object.

getMaximumFunctionEvaluations()

Return the maximum number of objective evaluations within the solver.

getMaximumIterations()

Return the maximum number of solver iterations.

getMonitor()

Return the callback used to monitor solver convergence.

getObjectiveAndGradient()

Return the vector used to store the gradient and the evaluation callback.

getObjectiveValue()

Return the current value of the objective function.

getOptionsPrefix()

Return the prefix used for searching for options in the database.

getPythonContext()

Return the instance of the class implementing the required Python methods.

getPythonType()

Return the fully qualified Python name of the class used by the solver.

getSolution()

Return the vector holding the solution.

getSolutionNorm()

Return the objective function value and the norms of gradient and constraints.

getSolutionStatus()

Return the solution status.

getTolerances()

Return the tolerance parameters used in the solver convergence tests.

getType()

Return the type of the solver.

getUpdate()

Return the callback to compute the update.

getVariableBounds()

Return the upper and lower bounds vectors.

monitor([its, f, res, cnorm, step])

Monitor the solver.

setAppCtx(appctx)

Set the application context.

setBNCGType(cg_type)

Set the type of the BNCG solver.

setBRGNDictionaryMatrix(D)

Set the dictionary matrix.

setBRGNRegularizerHessian(hessian[, H, ...])

Set the callback to compute the regularizer Hessian.

setBRGNRegularizerObjectiveGradient(objgrad)

Set the callback to compute the regularizer objective and gradient.

setBRGNRegularizerWeight(weight)

Set the regularizer weight.

setBRGNSmoothL1Epsilon(epsilon)

Set the smooth L1 epsilon.

setConstraintTolerances([catol, crtol])

Set the constraints tolerance parameters used in the solver convergence tests.

setConstraints(constraints[, C, args, kargs])

Set the callback to compute constraints.

setConvergedReason(reason)

Set the termination flag.

setConvergenceTest(converged[, args, kargs])

Set the callback used to test for solver convergence.

setEqualityConstraints(equality_constraints, c)

Set equality constraints callback.

setFromOptions()

Configure the solver from the options database.

setGradient(gradient[, g, args, kargs])

Set the gradient evaluation callback.

setGradientNorm(mat)

Set the matrix used to compute inner products.

setHessian(hessian[, H, P, args, kargs])

Set the callback to compute the Hessian matrix.

setInitialTrustRegionRadius(radius)

Set the initial trust region radius.

setIterationNumber(its)

Set the current iteration number.

setJacobian(jacobian[, J, P, args, kargs])

Set the callback to compute the Jacobian.

setJacobianDesign(jacobian_design[, J, ...])

Set Jacobian design callback.

setJacobianEquality(jacobian_equality[, J, ...])

Set Jacobian equality constraints callback.

setJacobianResidual(jacobian[, J, P, args, ...])

Set the callback to compute the least-squares residual Jacobian.

setJacobianState(jacobian_state[, J, P, I, ...])

Set Jacobian state callback.

setLMVMH0(mat)

Set the initial Hessian for the quasi-Newton approximation.

setMaximumFunctionEvaluations(mit)

Set the maximum number of objective evaluations within the solver.

setMaximumIterations(mit)

Set the maximum number of solver iterations.

setMonitor(monitor[, args, kargs])

Set the callback used to monitor solver convergence.

setObjective(objective[, args, kargs])

Set the objective function evaluation callback.

setObjectiveGradient(objgrad[, g, args, kargs])

Set the objective function and gradient evaluation callback.

setOptionsPrefix(prefix)

Set the prefix used for searching for options in the database.

setPythonContext(context)

Set the instance of the class implementing the required Python methods.

setPythonType(py_type)

Set the fully qualified Python name of the class to be used.

setResidual(residual, R[, args, kargs])

Set the residual evaluation callback for least-squares applications.

setSolution(x)

Set the vector used to store the solution.

setStateDesignIS([state, design])

Set the index sets indicating state and design variables.

setTolerances([gatol, grtol, gttol])

Set the tolerance parameters used in the solver convergence tests.

setType(tao_type)

Set the type of the solver.

setUp()

Set up the internal data structures for using the solver.

setUpdate(update[, args, kargs])

Set the callback to compute the update at each optimization step.

setVariableBounds(varbounds[, args, kargs])

Set the upper and lower bounds for the optimization problem.

solve([x])

Solve the optimization problem.

view([viewer])

View the solver.

Attributes Summary

appctx

Application context.

cnorm

Constraints norm.

converged

Boolean indicating if the solver has converged.

ctol

Broken.

diverged

Boolean indicating if the solver has failed.

ftol

Broken.

function

Objective value.

gnorm

Gradient norm.

gradient

Gradient vector.

gtol

Broken.

iterating

Boolean indicating if the solver has not converged yet.

its

Number of iterations.

ksp

Linear solver.

objective

Objective value.

reason

Converged reason.

solution

Solution vector.

Methods Documentation

appendOptionsPrefix(prefix)#

Append to the prefix used for searching for options in the database.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:213

Parameters:

prefix (str | None)

Return type:

None

cancelMonitor()#

Cancel all the monitors of the solver.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:1211

Return type:

None

computeConstraints(x, c)#

Compute the vector corresponding to the constraints.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • c (Vec) – The output constraints vector.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:887

computeDualVariables(xl, xu)#

Compute the dual vectors corresponding to variables’ bounds.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:851

Parameters:
  • xl (Vec) – The output dual vector corresponding to the lower bounds.

  • xu (Vec) – The output dual vector corresponding to the upper bounds.

Return type:

None

computeGradient(x, g)#

Compute the gradient of the objective function.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • g (Vec) – The output gradient vector.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:810

computeHessian(x, H, P=None)#

Compute the Hessian of the objective function.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • H (Mat) – The output Hessian matrix.

  • P (Mat | None) – The output Hessian matrix used to construct the preconditioner.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:906

computeJacobian(x, J, P=None)#

Compute the Jacobian.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • J (Mat) – The output Jacobian matrix.

  • P (Mat | None) – The output Jacobian matrix used to construct the preconditioner.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:929

computeObjective(x)#

Compute the value of the objective function.

Collective.

Parameters:

x (Vec) – The parameter vector.

Return type:

float

Source code at petsc4py/PETSc/TAO.pyx:772

computeObjectiveGradient(x, g)#

Compute the gradient of the objective function and its value.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • g (Vec) – The output gradient vector.

Return type:

float

Source code at petsc4py/PETSc/TAO.pyx:829

computeResidual(x, f)#

Compute the residual.

Collective.

Parameters:
  • x (Vec) – The parameter vector.

  • f (Vec) – The output vector.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:791

computeVariableBounds(xl, xu)#

Compute the vectors corresponding to variables’ bounds.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:863

Parameters:
  • xl (Vec) – The output vector for the lower bounds.

  • xu (Vec) – The output vector for the upper bounds.

Return type:

None

create(comm=None)#

Create a TAO solver.

Collective.

Parameters:

comm (Comm | None) – MPI communicator, defaults to Sys.getDefaultComm.

Return type:

Self

Source code at petsc4py/PETSc/TAO.pyx:145

createPython(context=None, comm=None)#

Create an optimization solver of Python type.

Collective.

Parameters:
  • context (Any) – An instance of the Python class implementing the required methods.

  • comm (Comm | None) – MPI communicator, defaults to Sys.getDefaultComm.

Return type:

Self

Source code at petsc4py/PETSc/TAO.pyx:1653

destroy()#

Destroy the solver.

Collective.

See also

TaoDestroy

Source code at petsc4py/PETSc/TAO.pyx:132

Return type:

Self

getAppCtx()#

Return the application context.

Source code at petsc4py/PETSc/TAO.pyx:286

Return type:

Any

getBNCGType()#

Return the type of the BNCG solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1407

Return type:

BNCGType

getBRGNDampingVector()#

Return the damping vector.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1638

Return type:

Vec

getBRGNSubsolver()#

Return the subsolver inside the BRGN solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1555

Return type:

TAO

getConstraintTolerances()#

Return the constraints tolerance parameters used in the convergence tests.

Not collective.

Returns:

  • catol (float) – The absolute norm of the constraints.

  • crtol (float) – The relative norm of the constraints.

Return type:

tuple[float, float]

Source code at petsc4py/PETSc/TAO.pyx:1082

getConvergedReason()#

Return the termination flag.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1156

Return type:

ConvergedReason

getConvergenceTest()#

Return the callback used to test for solver convergence.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1131

Return type:

tuple[TAOConvergedFunction, tuple[Any, …], dict[str, Any]]

getGradient()#

Return the vector used to store the gradient and the evaluation callback.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:416

Return type:

tuple[Vec, TAOGradientFunction]

getGradientNorm()#

Return the matrix used to compute inner products.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1322

Return type:

Mat

getHessian()#

Return the matrices used to store the Hessian and the evaluation callback.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:578

Return type:

tuple[Mat, Mat, TAOHessianFunction]

getIterationNumber()#

Return the current iteration number.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1434

Return type:

int

getKSP()#

Return the linear solver used by the nonlinear solver.

Not collective.

See also

TaoGetKSP

Source code at petsc4py/PETSc/TAO.pyx:1538

Return type:

KSP

getLMVMH0()#

Return the initial Hessian for the quasi-Newton approximation.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1349

Return type:

Mat

getLMVMH0KSP()#

Return the KSP for the inverse of the initial Hessian approximation.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1364

Return type:

KSP

getLineSearch()#

Return the TAO Line Search object.

Not collective.

See also

TaoGetLineSearch

Source code at petsc4py/PETSc/TAO.pyx:1735

Return type:

TAOLineSearch

getMaximumFunctionEvaluations()#

Return the maximum number of objective evaluations within the solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1046

Return type:

int

getMaximumIterations()#

Return the maximum number of solver iterations.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1019

Return type:

int

getMonitor()#

Return the callback used to monitor solver convergence.

Not collective.

See also

setMonitor

Source code at petsc4py/PETSc/TAO.pyx:1199

Return type:

list[tuple[TAOMonitorFunction, tuple[Any, …], dict[str, Any]]]

getObjectiveAndGradient()#

Return the vector used to store the gradient and the evaluation callback.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:462

Return type:

tuple[Vec, TAOObjectiveGradientFunction]

getObjectiveValue()#

Return the current value of the objective function.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1448

Return type:

float

getOptionsPrefix()#

Return the prefix used for searching for options in the database.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:227

Return type:

str

getPythonContext()#

Return the instance of the class implementing the required Python methods.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1690

Return type:

Any

getPythonType()#

Return the fully qualified Python name of the class used by the solver.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1720

Return type:

str

getSolution()#

Return the vector holding the solution.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1295

Return type:

Vec

getSolutionNorm()#

Return the objective function value and the norms of gradient and constraints.

Not collective.

Returns:

  • f (float) – Current value of the objective function.

  • res (float) – Current value of the residual norm.

  • cnorm (float) – Current value of the constraints norm.

Return type:

tuple[float, float, float]

Source code at petsc4py/PETSc/TAO.pyx:1478

getSolutionStatus()#

Return the solution status.

Not collective.

Returns:

  • its (int) – Current number of iterations.

  • f (float) – Current value of the objective function.

  • res (float) – Current value of the residual norm.

  • cnorm (float) – Current value of the constraints norm.

  • step (float) – Current value of the step.

  • reason (ConvergedReason) – Current value of converged reason.

Return type:

tuple[int, float, float, float, float, ConvergedReason]

Source code at petsc4py/PETSc/TAO.pyx:1503

getTolerances()#

Return the tolerance parameters used in the solver convergence tests.

Not collective.

Returns:

  • gatol (float) – The absolute norm of the gradient.

  • grtol (float) – The relative norm of the gradient with respect to the initial norm of the objective.

  • gttol (float) – The relative norm of the gradient with respect to the initial norm of the gradient.

Return type:

tuple[float, float, float]

Source code at petsc4py/PETSc/TAO.pyx:981

getType()#

Return the type of the solver.

Not collective.

See also

setType, TaoGetType

Source code at petsc4py/PETSc/TAO.pyx:185

Return type:

str

getUpdate()#

Return the callback to compute the update.

Not collective.

See also

setUpdate

Source code at petsc4py/PETSc/TAO.pyx:758

Return type:

tuple[TAOUpdateFunction, tuple[Any, …], dict[str, Any]]

getVariableBounds()#

Return the upper and lower bounds vectors.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1379

Return type:

tuple[Vec, Vec]

monitor(its=None, f=None, res=None, cnorm=None, step=None)#

Monitor the solver.

Collective.

This function should be called without arguments, unless users want to modify the values internally stored by the solver.

Parameters:
  • its (int) – Current number of iterations or None to use the value stored internally by the solver.

  • f (float) – Current value of the objective function or None to use the value stored internally by the solver.

  • res (float) – Current value of the residual norm or None to use the value stored internally by the solver.

  • cnorm (float) – Current value of the constraints norm or None to use the value stored internally by the solver.

  • step (float) – Current value of the step or None to use the value stored internally by the solver.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1225

setAppCtx(appctx)#

Set the application context.

Source code at petsc4py/PETSc/TAO.pyx:282

Parameters:

appctx (Any)

Return type:

None

setBNCGType(cg_type)#

Set the type of the BNCG solver.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1394

Parameters:

cg_type (BNCGType)

Return type:

None

setBRGNDictionaryMatrix(D)#

Set the dictionary matrix.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1626

Parameters:

D (Mat)

Return type:

None

setBRGNRegularizerHessian(hessian, H=None, args=None, kargs=None)#

Set the callback to compute the regularizer Hessian.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:1586

Parameters:
Return type:

None

setBRGNRegularizerObjectiveGradient(objgrad, args=None, kargs=None)#

Set the callback to compute the regularizer objective and gradient.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:1570

Parameters:
Return type:

None

setBRGNRegularizerWeight(weight)#

Set the regularizer weight.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1604

Parameters:

weight (float)

Return type:

None

setBRGNSmoothL1Epsilon(epsilon)#

Set the smooth L1 epsilon.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1613

Parameters:

epsilon (float)

Return type:

None

setConstraintTolerances(catol=None, crtol=None)#

Set the constraints tolerance parameters used in the solver convergence tests.

Collective.

Parameters:
  • catol (float) – The absolute norm of the constraints. Defaults to DEFAULT.

  • crtol (float) – The relative norm of the constraints. Defaults to DEFAULT.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1060

setConstraints(constraints, C=None, args=None, kargs=None)#

Set the callback to compute constraints.

Logically collective.

Parameters:
Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:514

setConvergedReason(reason)#

Set the termination flag.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1143

Parameters:

reason (ConvergedReason)

Return type:

None

setConvergenceTest(converged, args=None, kargs=None)#

Set the callback used to test for solver convergence.

Logically collective.

Parameters:
  • converged (TAOConvergedFunction | None) – The callback. If None, reset to the default convergence test.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1103

setEqualityConstraints(equality_constraints, c, args=None, kargs=None)#

Set equality constraints callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:689

Parameters:
Return type:

None

setFromOptions()#

Configure the solver from the options database.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:241

Return type:

None

setGradient(gradient, g=None, args=None, kargs=None)#

Set the gradient evaluation callback.

Logically collective.

Parameters:
  • gradient (TAOGradientFunction) – The gradient callback.

  • g (Vec | None) – The vector to store the gradient.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:387

setGradientNorm(mat)#

Set the matrix used to compute inner products.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1310

Parameters:

mat (Mat)

Return type:

None

setHessian(hessian, H=None, P=None, args=None, kargs=None)#

Set the callback to compute the Hessian matrix.

Logically collective.

Parameters:
  • hessian (TAOHessianFunction) – The Hessian callback.

  • H (Mat | None) – The matrix to store the Hessian.

  • P (Mat | None) – The matrix to construct the preconditioner.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:543
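
A minimal sketch for a Newton-type method (the 2x2 dense Hessian storage, the NLS type, and the quadratic objective implied by the identity Hessian are assumptions for illustration; objective and gradient callbacks must be set as well):

    def hessian(tao, x, H, P):
        # Hessian of 0.5*||x||**2 is the identity
        H.setValue(0, 0, 1.0)
        H.setValue(1, 1, 1.0)
        H.assemble()

    H = PETSc.Mat().createDense([2, 2])
    H.setUp()
    tao.setType(PETSc.TAO.Type.NLS)           # Newton with line search
    tao.setHessian(hessian, H)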

setInitialTrustRegionRadius(radius)#

Set the initial trust region radius.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:267

Parameters:

radius (float)

Return type:

None

setIterationNumber(its)#

Set the current iteration number.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1421

Parameters:

its (int)

Return type:

None

setJacobian(jacobian, J=None, P=None, args=None, kargs=None)#

Set the callback to compute the Jacobian.

Logically collective.

Parameters:
  • jacobian (TAOJacobianFunction) – The Jacobian callback.

  • J (Mat | None) – The matrix to store the Jacobian.

  • P (Mat | None) – The matrix to construct the preconditioner.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:596

setJacobianDesign(jacobian_design, J=None, args=None, kargs=None)#

Set Jacobian design callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:669

Parameters:
Return type:

None

setJacobianEquality(jacobian_equality, J=None, P=None, args=None, kargs=None)#

Set Jacobian equality constraints callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:707

Parameters:
Return type:

None

setJacobianResidual(jacobian, J=None, P=None, args=None, kargs=None)#

Set the callback to compute the least-squares residual Jacobian.

Logically collective.

Parameters:
Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:354

setJacobianState(jacobian_state, J=None, P=None, I=None, args=None, kargs=None)#

Set Jacobian state callback.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:645

Parameters:
Return type:

None

setLMVMH0(mat)#

Set the initial Hessian for the quasi-Newton approximation.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1337

Parameters:

mat (Mat)

Return type:

None

setMaximumFunctionEvaluations(mit)#

Set the maximum number of objective evaluations within the solver.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1033

Parameters:

mit (int)

Return type:

None

setMaximumIterations(mit)#

Set the maximum number of solver iterations.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1006

Parameters:

mit (int)

Return type:

None

setMonitor(monitor, args=None, kargs=None)#

Set the callback used to monitor solver convergence.

Logically collective.

Parameters:
  • monitor (TAOMonitorFunction) – The monitor callback.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1170
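
A sketch of a per-iteration monitor; the callback shown receives only the solver (an assumption consistent with the TAOMonitorFunction callback type) and queries it for the current status:

    def monitor(tao):
        its, f, res, cnorm, step, reason = tao.getSolutionStatus()
        PETSc.Sys.Print(f"it {its:3d}  f = {f:.6e}  ||g|| = {res:.3e}")

    tao.setMonitor(monitor)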

setObjective(objective, args=None, kargs=None)#

Set the objective function evaluation callback.

Logically collective.

Parameters:
  • objective (TAOObjectiveFunction) – The objective function callback.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:302
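
Extra positional arguments can be forwarded to the callback, for example (the scaling factor alpha is a made-up parameter):

    def objective(tao, x, alpha):
        # scaled squared norm; alpha arrives through args
        return alpha * x.norm()**2

    tao.setObjective(objective, args=(0.5,))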

setObjectiveGradient(objgrad, g=None, args=None, kargs=None)#

Set the objective function and gradient evaluation callback.

Logically collective.

Parameters:
  • objgrad (TAOObjectiveGradientFunction) – The objective function and gradient callback.

  • g (Vec | None) – The vector to store the gradient.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:432
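
A sketch of a combined callback, useful when the objective and gradient share intermediate computations (the quadratic below is only an illustration):

    def objgrad(tao, x, g):
        # gradient of 0.5*||x||**2 is x itself; return the objective value
        x.copy(g)
        return 0.5 * x.norm()**2

    tao.setObjectiveGradient(objgrad)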

setOptionsPrefix(prefix)#

Set the prefix used for searching for options in the database.

Logically collective.

Source code at petsc4py/PETSc/TAO.pyx:199

Parameters:

prefix (str | None)

Return type:

None
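
For example, a prefix scopes the command-line options consumed by setFromOptions (the prefix opt_ is arbitrary):

    tao.setOptionsPrefix("opt_")
    tao.setFromOptions()
    # options such as -opt_tao_type bqnls or -opt_tao_monitor
    # now apply to this solver only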

setPythonContext(context)#

Set the instance of the class implementing the required Python methods.

Not collective.

Source code at petsc4py/PETSc/TAO.pyx:1678

Parameters:

context (Any)

Return type:

None

setPythonType(py_type)#

Set the fully qualified Python name of the class to be used.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:1705

Parameters:

py_type (str)

Return type:

None

setResidual(residual, R, args=None, kargs=None)#

Set the residual evaluation callback for least-squares applications.

Logically collective.

Parameters:
  • residual (TAOResidualFunction) – The residual callback.

  • R (Vec) – The vector to store the residual.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:327
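
A least-squares sketch using the BRGN type (the three-component residual is a made-up illustration; a Jacobian is typically also supplied via setJacobianResidual):

    r = PETSc.Vec().createSeq(3)

    def residual(tao, x, f):
        # residual of fitting x to the targets (1, 2) plus a coupling term
        f[0] = x[0] - 1.0
        f[1] = x[1] - 2.0
        f[2] = 0.1 * x[0] * x[1]
        f.assemble()

    tao.setType(PETSc.TAO.Type.BRGN)
    tao.setResidual(residual, r)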

setSolution(x)#

Set the vector used to store the solution.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:290

Parameters:

x (Vec)

Return type:

None

setStateDesignIS(state=None, design=None)#

Set the index sets indicating state and design variables.

Collective.

Source code at petsc4py/PETSc/TAO.pyx:630

Parameters:
Return type:

None

setTolerances(gatol=None, grtol=None, gttol=None)#

Set the tolerance parameters used in the solver convergence tests.

Collective.

Parameters:
  • gatol (float) – The absolute norm of the gradient. Defaults to DEFAULT.

  • grtol (float) – The relative norm of the gradient with respect to the initial norm of the objective. Defaults to DEFAULT.

  • gttol (float) – The relative norm of the gradient with respect to the initial norm of the gradient. Defaults to DEFAULT.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:954
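
For example (values are arbitrary):

    tao.setTolerances(gatol=1.0e-8, grtol=0.0, gttol=0.0)   # stop once ||g|| <= 1e-8
    tao.setMaximumIterations(200)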

setType(tao_type)#

Set the type of the solver.

Logically collective.

Parameters:

tao_type (Type | str) – The type of the solver.

Return type:

None

See also

getType, TaoSetType

Source code at petsc4py/PETSc/TAO.pyx:166

setUp()#

Set up the internal data structures for using the solver.

Collective.

See also

TaoSetUp

Source code at petsc4py/PETSc/TAO.pyx:253

Return type:

None

setUpdate(update, args=None, kargs=None)#

Set the callback to compute the update at each optimization step.

Logically collective.

Parameters:
  • update (TAOUpdateFunction) – The update callback.

  • args (tuple[Any, ...] | None) – Positional arguments for the callback.

  • kargs (dict[str, Any] | None) – Keyword arguments for the callback.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:729

setVariableBounds(varbounds, args=None, kargs=None)#

Set the upper and lower bounds for the optimization problem.

Logically collective.

Parameters:
Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:478
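
A sketch passing the bounds as a pair of vectors (assuming this form is accepted in addition to a callback, and that a bound-aware type such as BNLS is selected):

    xl = PETSc.Vec().createSeq(2)
    xl.set(0.0)                               # lower bounds
    xu = PETSc.Vec().createSeq(2)
    xu.set(2.0)                               # upper bounds
    tao.setType(PETSc.TAO.Type.BNLS)
    tao.setVariableBounds((xl, xu))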

solve(x=None)#

Solve the optimization problem.

Collective.

Parameters:

x (Vec | None) – The starting vector or None to use the vector stored internally.

Return type:

None

Source code at petsc4py/PETSc/TAO.pyx:1276
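
After solving, the outcome can be inspected, for example:

    tao.solve()
    if tao.getConvergedReason() > 0:          # positive reasons indicate convergence
        sol = tao.getSolution()
        PETSc.Sys.Print("minimum at", sol.getArray())
    else:
        PETSc.Sys.Print("solver did not converge:", tao.reason)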

view(viewer=None)#

View the solver.

Collective.

Parameters:

viewer (Viewer | None) – A Viewer instance or None for the default viewer.

Return type:

None

See also

TaoView

Source code at petsc4py/PETSc/TAO.pyx:113

Attributes Documentation

appctx#

Application context.

Source code at petsc4py/PETSc/TAO.pyx:1756

cnorm#

Constraints norm.

Source code at petsc4py/PETSc/TAO.pyx:1825

converged#

Boolean indicating if the solver has converged.

Source code at petsc4py/PETSc/TAO.pyx:1862

ctol#

Broken.

Source code at petsc4py/PETSc/TAO.pyx:1800

diverged#

Boolean indicating if the solver has failed.

Source code at petsc4py/PETSc/TAO.pyx:1867

ftol#

Broken.

Source code at petsc4py/PETSc/TAO.pyx:1774

function#

Objective value.

Source code at petsc4py/PETSc/TAO.pyx:1840

gnorm#

Gradient norm.

Source code at petsc4py/PETSc/TAO.pyx:1820

gradient#

Gradient vector.

Source code at petsc4py/PETSc/TAO.pyx:1845

gtol#

Broken.

Source code at petsc4py/PETSc/TAO.pyx:1787

iterating#

Boolean indicating if the solver has not converged yet.

Source code at petsc4py/PETSc/TAO.pyx:1857

its#

Number of iterations.

Source code at petsc4py/PETSc/TAO.pyx:1815

ksp#

Linear solver.

Source code at petsc4py/PETSc/TAO.pyx:1766

objective#

Objective value.

Source code at petsc4py/PETSc/TAO.pyx:1835

reason#

Converged reason.

Source code at petsc4py/PETSc/TAO.pyx:1852

solution#

Solution vector.

Source code at petsc4py/PETSc/TAO.pyx:1830