Description

Given a nonlinear model expressed as an expression of the form

lhs ~ formula_for_rhs

and a start vector in which the parameters used in the model formula are named,
nlxb attempts to find the minimum of the residual sum of squares using the
Nash variant (Nash, 1979) of the Marquardt algorithm, in which the linear
subproblem is solved by a QR method. This is a restructured version of a
function of the same name from package nlmrt, which is now deprecated.
Usage

nlxb(formula, start, trace, data, lower, upper, masked, weights, control)
Arguments

formula
    A modeling formula of the form lhs ~ formula_for_rhs (as in nls).
start
    A named parameter vector giving starting values for the parameters
    used in the model formula.
trace
    Logical. If TRUE, progress information is printed during the
    computation.
data
    A data frame containing the data for the variables in the formula.
    The data may, however, be supplied directly in the parent frame.
lower
    Lower bounds on the parameters. If a single number, it is applied to
    all parameters.
upper
    Upper bounds on the parameters. If a single number, it is applied to
    all parameters.
masked
    Character vector of quoted parameter names. These parameters will NOT
    be altered by the algorithm. Masks may also be defined by setting the
    lower and upper bounds equal for the parameters to be fixed; note that
    the starting parameter value must then equal that common bound.
weights
    A vector of fixed weights. The objective function that is minimized is
    the sum of squares in which each residual is multiplied by the square
    root of the corresponding weight.
control
    A list of controls for the algorithm.
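As a hedged sketch of how several of these arguments can be combined (the
straight-line model and data values below are invented for illustration, and
it assumes package nlsr is installed):

```r
# Invented data for a simple model fitted with bounds and unit weights.
library(nlsr)
dat <- data.frame(x = 1:6, y = c(1.8, 2.9, 4.1, 5.2, 5.9, 7.1))
st  <- c(a = 1, b = 1)                          # named start vector
fit <- nlxb(y ~ a + b * x, start = st, data = dat,
            lower = c(0, 0), upper = c(10, 10),  # bounds on a and b
            weights = rep(1, nrow(dat)))         # unit weights
```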
Details

nlxb attempts to solve the nonlinear sum-of-squares problem by using a
variant of Marquardt's approach to stabilizing the Gauss-Newton method with
the Levenberg-Marquardt adjustment. This is explained in Nash (1979 or 1990)
in the sections that discuss Algorithm 23.

In this code, the (adjusted) Marquardt equations are solved with
qr.solve(). Rather than forming the J'J + lambda*D matrix explicitly, we
augment the J matrix with extra rows and the y vector with null elements.
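The augmentation idea can be sketched in a few lines of base R (a minimal
illustration with an invented toy Jacobian and residual vector, using the
identity in place of the scaling matrix D for simplicity; this is not the
package's actual code):

```r
# Toy illustration of the augmented least-squares step.
# Solving (J'J + lambda*I) delta = -J' r is equivalent to the
# least-squares problem  [J; sqrt(lambda)*I] delta ~ [-r; 0].
J <- matrix(c(1, 0,
              1, 1,
              1, 2), ncol = 2, byrow = TRUE)    # invented Jacobian
r <- c(0.1, -0.2, 0.05)                         # invented residuals
lambda <- 0.01
Jaug <- rbind(J, sqrt(lambda) * diag(ncol(J)))  # J with extra rows
yaug <- c(-r, rep(0, ncol(J)))                  # y with null elements
delta <- qr.solve(Jaug, yaug)   # least-squares solution via QR
```

Because the augmented matrix is overdetermined, qr.solve() returns its
least-squares solution, which is the Marquardt step.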
Value

A list with the following items:

coefficients
    A named vector giving the parameter values at the supposed solution.
ssquares
    The sum of squared residuals at this set of parameters.
resid
    The residual vector at the returned parameters.
jacobian
    The Jacobian matrix (partial derivatives of residuals with respect to
    the parameters) at the returned parameters.
feval
    The number of residual evaluations (sum-of-squares computations) used.
jeval
    The number of Jacobian evaluations used.
Author(s)

John C Nash <[email protected]>
References

Nash, J. C. (1979, 1990). _Compact Numerical Methods for Computers: Linear
Algebra and Function Minimisation._ Adam Hilger / Institute of Physics
Publications.
See Also

Function nls(), packages optim and optimx.
Examples

cat("See examples in nlsr-package.Rd\n")
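As a hedged worked example beyond the stub above (the exponential model and
data values are invented for illustration; it assumes package nlsr is
installed):

```r
# Illustrative fit only; model and data are invented for this sketch.
library(nlsr)
mydata <- data.frame(x = 1:10,
                     y = c(2.1, 3.2, 4.9, 7.1, 10.3, 15.2,
                           22.1, 32.5, 47.4, 69.8))
st <- c(a = 1, b = 0.5)               # named start vector, as required
fit <- nlxb(y ~ a * exp(b * x), start = st, data = mydata)
fit$coefficients                      # parameters at the supposed solution
fit$ssquares                          # residual sum of squares
```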
