Wolfram Alpha can solve a wide variety of systems of equations: systems of linear or nonlinear equations, searches restricted to integer solutions or solutions over another domain, and systems involving inequalities. In Excel and Google Sheets, the NLSOLVE function solves systems of nonlinear equations, including systems with inequality constraints and parameterized integral problems; NLSOLVE is based on the Levenberg-Marquardt algorithm. Solving a single equation is the simplest case: linear functions are trivial to solve, as are quadratics if you have the quadratic formula memorized, but polynomials of higher degree and non-polynomial functions are much more difficult, and the simplest technique for solving them is an iterative root-finding method. Abaqus/Standard, for example, by default uses Newton's method to solve nonlinear problems iteratively (see its Convergence section for a description). In some cases it uses an exact implementation of Newton's method, in the sense that the Jacobian, or the stiffness matrix of the system, is defined exactly, and quadratic convergence is obtained once the estimate of the solution is within the radius of convergence.
Optimization Toolbox

fsolve
Solve a system of nonlinear equations

   F(x) = 0

for x, where x is a vector and F(x) is a function that returns a vector value.
Syntax

   x = fsolve(fun,x0)
   x = fsolve(fun,x0,options)
   x = fsolve(fun,x0,options,P1,P2,...)
   [x,fval] = fsolve(...)
   [x,fval,exitflag] = fsolve(...)
   [x,fval,exitflag,output] = fsolve(...)
   [x,fval,exitflag,output,jacobian] = fsolve(...)
Description
fsolve finds a root (zero) of a system of nonlinear equations.

x = fsolve(fun,x0) starts at x0 and tries to solve the equations described in fun.

x = fsolve(fun,x0,options) minimizes with the optimization parameters specified in the structure options. Use optimset to set these parameters.

x = fsolve(fun,x0,options,P1,P2,...) passes the problem-dependent parameters P1, P2, etc., directly to the function fun. Pass an empty matrix for options to use the default values for options.

[x,fval] = fsolve(fun,x0) returns the value of the objective function fun at the solution x.

[x,fval,exitflag] = fsolve(...) returns a value exitflag that describes the exit condition.

[x,fval,exitflag,output] = fsolve(...) returns a structure output that contains information about the optimization.

[x,fval,exitflag,output,jacobian] = fsolve(...) returns the Jacobian of fun at the solution x.
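As a quick illustration of these calling forms, the sketch below solves a small square system and requests the extra outputs. The function name sqfun and the equations are illustrative, not taken from this reference page:

   % sqfun.m -- a hypothetical two-equation system; F = 0 at a solution
   function F = sqfun(x)
   F = [x(1)^2 + x(2)^2 - 1;   % unit circle
        x(1) - x(2)];          % line x1 = x2

   % At the command line:
   x0 = [1; 0];                                  % starting guess
   [x,fval,exitflag,output] = fsolve(@sqfun,x0)  % typically x is approx [0.7071; 0.7071]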
Input Arguments

Function Arguments contains general descriptions of arguments passed in to fsolve. This section provides function-specific details for fun and options:
fun | The nonlinear system of equations to solve. fun is a function that accepts a vector x and returns a vector F, the nonlinear equations evaluated at x. The function fun can be specified as a function handle, as in x = fsolve(@myfun,x0), where myfun is a MATLAB function of the form function F = myfun(x). fun can also be an inline object, as in x = fsolve(inline('sin(x.*x)'),x0). If the Jacobian can also be computed and the Jacobian parameter is 'on', set by options = optimset('Jacobian','on'), then the function fun must return, in a second output argument, the Jacobian value J, a matrix, at x. Note that by checking the value of nargout, the function can avoid computing J when fun is called with only one output argument (in the case where the optimization algorithm needs only the value of F but not J); see the sketch after this table. If fun returns a vector (matrix) of m components and x has length n, where n is the length of x0, then the Jacobian J is an m-by-n matrix in which J(i,j) is the partial derivative of F(i) with respect to x(j). (Note that the Jacobian J is the transpose of the gradient of F.) |
options | Options provides the function-specific details for the options parameters. |
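The sketch promised above: a fun that supplies its own Jacobian, using nargout to skip the Jacobian when the solver does not request it. The function name and the equations are illustrative:

   function [F,J] = myfun(x)
   % Residuals; F = 0 at a solution.
   F = [2*x(1) - x(2) - exp(-x(1));
        -x(1) + 2*x(2) - exp(-x(2))];
   if nargout > 1                      % Jacobian requested by the solver
       J = [2 + exp(-x(1)),  -1;
            -1,               2 + exp(-x(2))];
   end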
Output Arguments

Function Arguments contains general descriptions of arguments returned by fsolve. This section provides function-specific details for exitflag and output:
exitflag | Describes the exit condition: |
> 0 | The function converged to a solution x. |
0 | The maximum number of function evaluations or iterations was exceeded. |
< 0 | The function did not converge to a solution. |
output | Structure containing information about the optimization. The fields of the structure are: |
iterations | Number of iterations taken. |
funcCount | Number of function evaluations. |
algorithm | Algorithm used. |
cgiterations | Number of PCG iterations (large-scale algorithm only). |
stepsize | Final step size taken (medium-scale algorithm only). |
firstorderopt | Measure of first-order optimality (large-scale algorithm only). For large-scale problems, the first-order optimality is the infinity norm of the gradient g = J'*F (see Nonlinear Least-Squares). |
Options

Optimization options parameters used by fsolve. Some parameters apply to all algorithms, some are only relevant when you are using the large-scale algorithm, and others are only relevant when you are using the medium-scale algorithm. You can use optimset to set or change the values of these fields in the parameters structure, options. See Optimization Parameters for detailed information.

We start by describing the LargeScale option, since it states a preference for which algorithm to use. It is only a preference because certain conditions must be met to use the large-scale algorithm. For fsolve, the nonlinear system of equations cannot be underdetermined; that is, the number of equations (the number of elements of F returned by fun) must be at least as many as the length of x, or else the medium-scale algorithm is used:
LargeScale | Use large-scale algorithm if possible when set to 'on'. Use medium-scale algorithm when set to 'off'. The default for fsolve is 'off'. |
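For example, one plausible way to request the large-scale algorithm together with a user-supplied Jacobian (the names myfun and x0 are placeholders):

   options = optimset('LargeScale','on', ...   % prefer the large-scale algorithm
                      'Jacobian','on', ...     % fun returns [F,J]
                      'Display','iter');       % print progress at each iteration
   x = fsolve(@myfun,x0,options);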
Medium-Scale and Large-Scale Algorithms. These parameters are used by both the medium-scale and large-scale algorithms:
Diagnostics | Print diagnostic information about the function to be minimized. |
Display | Level of display. 'off' displays no output; 'iter' displays output at each iteration; 'final' (default) displays just the final output. |
Jacobian | If 'on', fsolve uses a user-defined Jacobian (defined in fun), or Jacobian information (when using JacobMult), for the objective function. If 'off', fsolve approximates the Jacobian using finite differences. |
MaxFunEvals | Maximum number of function evaluations allowed. |
MaxIter | Maximum number of iterations allowed. |
TolFun | Termination tolerance on the function value. |
TolX | Termination tolerance on x. |
Large-Scale Algorithm Only. These parameters are used only by the large-scale algorithm:
JacobMult | Function handle for a Jacobian multiply function. For large-scale structured problems, this function computes the Jacobian matrix products J*Y, J'*Y, or J'*(J*Y) without actually forming J. The function is of the form W = jmfun(Jinfo,Y,flag,p1,p2,...), where Jinfo and the additional parameters p1,p2,... contain the matrices used to compute J*Y (or J'*Y, or J'*(J*Y)). The first argument Jinfo must be the same as the second argument returned by the objective function fun. The parameters p1,p2,... are the same additional parameters that are passed to fsolve (and to fun). Y is a matrix that has the same number of rows as there are dimensions in the problem. flag determines which product to compute: if flag == 0 then W = J'*(J*Y); if flag > 0 then W = J*Y; if flag < 0 then W = J'*Y. In each case, J is not formed explicitly. fsolve uses Jinfo to compute the preconditioner. A sketch of such a function appears after this table. |
JacobPattern | Sparsity pattern of the Jacobian for finite differencing. If it is not convenient to compute the Jacobian matrix J in fun, fsolve can approximate J via sparse finite differences, provided the structure of J -- i.e., the locations of the nonzeros -- is supplied as the value for JacobPattern. In the worst case, if the structure is unknown, you can set JacobPattern to be a dense matrix, and a full finite-difference approximation is computed in each iteration (this is the default if JacobPattern is not set). This can be very expensive for large problems, so it is usually worth the effort to determine the sparsity structure. |
MaxPCGIter | Maximum number of PCG (preconditioned conjugate gradient) iterations (see the Algorithm section below). | |
PrecondBandWidth | Upper bandwidth of preconditioner for PCG. By default, diagonal preconditioning is used (upper bandwidth of 0). For some problems, increasing the bandwidth reduces the number of PCG iterations. | |
TolPCG | Termination tolerance on the PCG iteration. | |
TypicalX | Typical x values. |
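The sketch promised above: a minimal Jacobian multiply function for the simplest case, where Jinfo is itself a matrix A playing the role of J. In a real structured problem, Jinfo would hold whatever compact data the products require; the name jmfun is illustrative:

   function W = jmfun(Jinfo,Y,flag)
   % Jinfo is fun's second output; here it is a matrix A with J = A.
   A = Jinfo;
   if flag == 0
       W = A'*(A*Y);   % J'*(J*Y)
   elseif flag > 0
       W = A*Y;        % J*Y
   else
       W = A'*Y;       % J'*Y
   end

The multiply function is supplied to the solver through options = optimset('JacobMult',@jmfun).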
Medium-Scale Algorithm Only. These parameters are used only by the medium-scale algorithm:
DerivativeCheck | Compare user-supplied derivatives (Jacobian) to finite-differencing derivatives. |
DiffMaxChange | Maximum change in variables for finite-differencing. |
DiffMinChange | Minimum change in variables for finite-differencing. |
NonlEqnAlgorithm | Choose Levenberg-Marquardt or Gauss-Newton over the trust-region dogleg algorithm. |
LineSearchType | Line search algorithm choice. |
Examples
Example 1. This example finds a zero of the system of two equations and two unknowns

   2*x1 - x2 = exp(-x1)
   -x1 + 2*x2 = exp(-x2)

Thus we want to solve the following system for x, starting at x0 = [-5 -5]:

   2*x1 - x2 - exp(-x1) = 0
   -x1 + 2*x2 - exp(-x2) = 0
First, write an M-file that computes F, the values of the equations at x.
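Following the standard form of this example (the file name myfun.m is just a convention):

   function F = myfun(x)
   % Residuals of the two equations; F = 0 at a solution.
   F = [2*x(1) - x(2) - exp(-x(1));
        -x(1) + 2*x(2) - exp(-x(2))];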
Next, call an optimization routine.
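For instance, with iterative display turned on (the option setting is one reasonable choice, not mandated by this page):

   x0 = [-5; -5];                          % starting guess
   options = optimset('Display','iter');   % show progress at each iteration
   [x,fval] = fsolve(@myfun,x0,options)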
After 33 function evaluations, a zero is found: both components of the solution equal the root of x = exp(-x), so x is approximately [0.5671; 0.5671], and fval is essentially zero.
Example 2. Find a matrix x that satisfies the equation

   x*x*x = [1 2
            3 4]

starting at the point x = [1,1; 1,1].
First, write an M-file that computes the equations to be solved.
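A sketch of the M-file (again, the name myfun.m is illustrative):

   function F = myfun(x)
   % Residual of the matrix equation x*x*x = [1 2; 3 4].
   F = x*x*x - [1 2; 3 4];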
Next, invoke an optimization routine.
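For example, suppressing iterative display (the option choice is illustrative):

   x0 = ones(2,2);                         % starting matrix
   options = optimset('Display','off');    % run quietly
   [x,Fval,exitflag] = fsolve(@myfun,x0,options)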
The solution is

   x =
      -0.1291    0.8602
       1.2903    1.1612

and the residual sum(sum(Fval.*Fval)) is close to zero.
Notes
If the system of equations is linear, use \ (the backslash operator; see help slash) for better speed and accuracy. For example, a linear system of equations A*x = b is formulated and solved as shown below.
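A minimal sketch with assumed example values for A and b (any nonsingular square system works the same way):

   A = [3 11 -2; 1 1 -2; 1 -1 1];   % coefficient matrix
   b = [7; 4; 19];                  % right-hand side
   x = A\b                          % direct linear solve, no iteration needed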
Algorithm
The Gauss-Newton, Levenberg-Marquardt, and large-scale methods are based on the nonlinear least-squares algorithms also used in lsqnonlin. Use one of these methods if the system may not have a zero. The algorithm still returns a point where the residual is small. However, if the Jacobian of the system is singular, the algorithm may converge to a point that is not a solution of the system of equations (see Limitations and Diagnostics below).
Large-Scale Optimization. fsolve, with the LargeScale parameter set to 'on' with optimset, uses the large-scale algorithm if possible. This algorithm is a subspace trust-region method and is based on the interior-reflective Newton method described in [1], [2]. Each iteration involves the approximate solution of a large linear system using the method of preconditioned conjugate gradients (PCG). See Trust-Region Methods for Nonlinear Minimization and Preconditioned Conjugate Gradients.
Medium-Scale Optimization. By default fsolve chooses the medium-scale algorithm and uses the trust-region dogleg method. The algorithm is a variant of the Powell dogleg method described in [8]. It is similar in nature to the algorithm implemented in [7].

Alternatively, you can select a Gauss-Newton method [3] with line search, or a Levenberg-Marquardt method [4], [5], [6] with line search. The choice of algorithm is made by setting the NonlEqnAlgorithm parameter to 'dogleg' (default), 'lm', or 'gn'.
The default line search algorithm for the Levenberg-Marquardt and Gauss-Newton methods, i.e., the LineSearchType parameter set to 'quadcubic', is a safeguarded mixed quadratic and cubic polynomial interpolation and extrapolation method. A safeguarded cubic polynomial method can be selected by setting LineSearchType to 'cubicpoly'. This method generally requires fewer function evaluations but more gradient evaluations. Thus, if gradients are being supplied and can be calculated inexpensively, the cubic polynomial line search method is preferable. The algorithms used are described fully in the Standard Algorithms chapter.
Diagnostics
Medium- and Large-Scale Optimization. fsolve may converge to a nonzero point and issue a message to that effect. In this case, run fsolve again with other starting values.
Medium-Scale Optimization. For the trust-region dogleg method, fsolve stops if the step size becomes too small and it can make no more progress, and issues a message to that effect. In this case, run fsolve again with other starting values.
Limitations
The function to be solved must be continuous. When successful, fsolve gives only one root. fsolve may converge to a nonzero point, in which case, try other starting values.

fsolve handles only real variables. When x has complex variables, the variables must be split into real and imaginary parts, as sketched below.
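A minimal sketch of the splitting, assuming we want a complex root of z^2 + 1 = 0 (both the equation and the name cmplxfun are illustrative):

   % cmplxfun.m -- treat z = x(1) + i*x(2) as two real unknowns
   function F = cmplxfun(x)
   z = x(1) + 1i*x(2);
   r = z^2 + 1;                  % complex residual
   F = [real(r); imag(r)];       % stack real and imaginary parts

   % At the command line:
   x = fsolve(@cmplxfun,[1; 1])  % typically converges to x approx [0; 1], i.e., z = i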
Large-Scale Optimization. Currently, if the analytical Jacobian is provided in fun, the options parameter DerivativeCheck cannot be used with the large-scale method to compare the analytic Jacobian to the finite-difference Jacobian. Instead, use the medium-scale method to check the derivative with the options parameter MaxIter set to 0 iterations; then run the problem again with the large-scale method. See Table 2-4, Large-Scale Problem Coverage and Requirements, for more information on what problem formulations are covered and what information must be provided.
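A sketch of that workaround (the option values follow directly from the text above; myfun and x0 are placeholders):

   % Pass 1: medium-scale, derivative check only, no iterations
   chk = optimset('LargeScale','off','DerivativeCheck','on', ...
                  'Jacobian','on','MaxIter',0);
   fsolve(@myfun,x0,chk);

   % Pass 2: large-scale solve with the verified Jacobian
   opts = optimset('LargeScale','on','Jacobian','on');
   x = fsolve(@myfun,x0,opts);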
The preconditioner computation used in the preconditioned conjugate gradient part of the large-scale method forms J'*J (where J is the Jacobian matrix) before computing the preconditioner; therefore, a row of J with many nonzeros, which results in a nearly dense product J'*J, may lead to a costly solution process for large problems.
Medium-Scale Optimization. The default trust-region dogleg method can only be used when the system of equations is square, i.e., the number of equations equals the number of unknowns. For the Levenberg-Marquardt and Gauss-Newton methods, the system of equations need not be square.
See Also
@ (function_handle), \, inline, lsqcurvefit, lsqnonlin, optimset
References
[1] Coleman, T.F. and Y. Li, 'An Interior, Trust Region Approach for Nonlinear Minimization Subject to Bounds,' SIAM Journal on Optimization, Vol. 6, pp. 418-445, 1996.
[2] Coleman, T.F. and Y. Li, 'On the Convergence of Reflective Newton Methods for Large-Scale Nonlinear Minimization Subject to Bounds,' Mathematical Programming, Vol. 67, Number 2, pp. 189-224, 1994.
[3] Dennis, J. E. Jr., 'Nonlinear Least-Squares,' State of the Art in Numerical Analysis, ed. D. Jacobs, Academic Press, pp. 269-312, 1977.
[4] Levenberg, K., 'A Method for the Solution of Certain Problems in Least-Squares,' Quarterly of Applied Mathematics, Vol. 2, pp. 164-168, 1944.
[5] Marquardt, D., 'An Algorithm for Least-Squares Estimation of Nonlinear Parameters,' SIAM Journal on Applied Mathematics, Vol. 11, pp. 431-441, 1963.
[6] Moré, J. J., 'The Levenberg-Marquardt Algorithm: Implementation and Theory,' Numerical Analysis, ed. G. A. Watson, Lecture Notes in Mathematics 630, Springer Verlag, pp. 105-116, 1977.
[7] Moré, J. J., B. S. Garbow, K. E. Hillstrom, User Guide for MINPACK 1, Argonne National Laboratory, Rept. ANL-80-74, 1980.
[8] Powell, M. J. D., 'A Fortran Subroutine for Solving Systems of Nonlinear Algebraic Equations,' Numerical Methods for Nonlinear Algebraic Equations, P. Rabinowitz, ed., Ch.7, 1970.
Optimization Software Support: From the Excel-Literate Business Analyst to the Pro Developer
Solve Large-Scale Smooth Nonlinear Models with Great Performance
- Excel Solver users: Solve models faster, benefit from model diagnosis and automatic differentiation - 100% compatible upgrade from the developers of Excel Solver.
- Use 'best of breed' methods (GRG, SQP, Barrier/Interior Point) and world-class Solvers like SNOPT and KNITRO. Solve MINLP (mixed-integer nonlinear) problems.
- Find globally optimal solutions using Multistart and Evolutionary methods, calling any nonlinear Solver for subproblems, plus Interval Branch & Bound.
- Easily move models from Excel desktop to Excel for the Web, to Tableau and Power BI dashboards, or your own server, web, or mobile applications.
Proven in use for over 25 years in over 9,000 organizations, including more than half of the companies in the world with $1 billion+ in revenue.
Easily Deploy Models with RASON® High-Level Modeling Language
- Platform-independent RASON (RESTful Analytic Solver® Object Notation) contains the entire Excel formula language but is embedded in JSON, is understood by developers, and is radically simple to use in web and mobile applications.
- Deploy models without rework: Analytic Solver in Excel can translate Excel optimization models to RASON. Business analysts and developers can both 'speak and understand' RASON and work together easily.
- Developers can 'do it all in code' with a compatible object library, spanning Excel VBA, C++, C#, VB.Net, Java, R, Python, MATLAB and JavaScript. RASON models easily inter-operate with this object library!