Calculate an m-by-n numerical approximation of the gradient (Jacobian) of a real m-vector valued function with an n-vector argument.
Usage

jacobian(func, x, method = "Richardson", side = NULL,
         method.args = list(), ...)
Arguments

func         a function with a real (vector) result.

x            a real scalar or vector argument to func, indicating the point
             at which the gradient is to be calculated.

method       one of "Richardson", "simple", or "complex", indicating the
             method to use for the approximation.

method.args  arguments passed to method. See Details.

...          any additional arguments passed to func.

side         an indication of whether one-sided derivatives should be
             attempted (see details in function grad).
Details

For f: R^n -> R^m, calculate the m x n Jacobian dy/dx.
The function jacobian calculates a numerical approximation of the first
derivative of func at the point x. Any additional arguments in ... are also
passed to func, but the gradient is not calculated with respect to these
additional arguments.
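The behaviour described above can be sketched with a small example. This assumes the numDeriv package (where jacobian is defined) is installed; the test function f is my own illustration, not from this help page:

```r
library(numDeriv)

# f: R^2 -> R^3, so the Jacobian is a 3 x 2 matrix.
f <- function(x) c(x[1]^2, x[1] * x[2], sin(x[2]))
x0 <- c(1, pi / 2)

J <- jacobian(f, x0)  # default method = "Richardson"

# Analytic Jacobian at x0, for comparison.
J_exact <- rbind(c(2 * x0[1], 0),
                 c(x0[2],     x0[1]),
                 c(0,         cos(x0[2])))
```

Here dim(J) is c(3, 2), one row per component of f and one column per component of x, and J should agree with J_exact to many decimal places.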
If method is "Richardson", the calculation is done by Richardson's
extrapolation. See grad for more details. For this method,
method.args = list(eps = 1e-4, d = 0.0001,
zero.tol = sqrt(.Machine$double.eps/7e-7), r = 4, v = 2,
show.details = FALSE) is the default.
If method is "simple", the calculation is done using a simple epsilon
difference. For method "simple", method.args = list(eps = 1e-4) is the
default. Only eps is used by this method.
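A sketch of the accuracy trade-off between the two methods; the test function and tolerances below are illustrative assumptions, not values from this help page:

```r
library(numDeriv)

f <- function(x) c(exp(x[1]), x[1] * x[2])
x0 <- c(0.5, 2)
J_exact <- rbind(c(exp(x0[1]), 0),
                 c(x0[2],      x0[1]))

# Simple epsilon difference: cheap, but the error is roughly O(eps).
J_simple <- jacobian(f, x0, method = "simple",
                     method.args = list(eps = 1e-4))

# Richardson extrapolation (the default): more function evaluations,
# but much higher accuracy.
J_rich <- jacobian(f, x0)
```

In this sketch J_simple is accurate to a few digits while J_rich is typically accurate to near 1e-8 or better.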
If method is "complex", the calculation is done using the complex-step
derivative approach. See the additional comments in grad before choosing
this method. For method "complex", method.args is ignored. The algorithm
uses an eps of .Machine$double.eps, which cannot (and should not) be
modified.
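A sketch of the complex-step method: it requires func to be analytic and to accept complex arguments, as the assumed test function below does (it is my own illustration):

```r
library(numDeriv)

# Both components are analytic, so the complex-step method applies.
f <- function(x) c(sin(x[1]) * x[2], exp(x[1] + x[2]))
x0 <- c(1, 2)

J_complex <- jacobian(f, x0, method = "complex")

# Analytic Jacobian at x0, for comparison.
J_exact <- rbind(c(cos(x0[1]) * x0[2], sin(x0[1])),
                 c(exp(x0[1] + x0[2]), exp(x0[1] + x0[2])))
```

Because the step is purely imaginary, there is no subtractive cancellation, and the result is accurate to roughly machine precision.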
Value

A real m by n matrix.
See Also

grad, hessian, numericDeriv