# genD: Generate Bates and Watts D Matrix

In numDeriv: Accurate Numerical Derivatives

## Description

Generate a matrix of function derivative information.

## Usage

```r
genD(func, x, method = "Richardson", method.args = list(), ...)

## Default S3 method:
genD(func, x, method = "Richardson", method.args = list(), ...)
```

## Arguments

- `func`: a function whose first (vector) argument is used as the parameter vector.
- `x`: the parameter vector, the first argument to `func`.
- `method`: one of `"Richardson"` or `"simple"`, indicating the method to use for the approximation.
- `method.args`: arguments passed to the method; see `grad`. (Arguments not specified retain their default values.)
- `...`: any additional arguments passed to `func`. WARNING: none of these should have names matching other arguments of this function.

## Details

The derivatives are calculated numerically using Richardson improvement. Methods `"simple"` and `"complex"` are not supported in this function. The `"Richardson"` method calculates a numerical approximation of the first and second derivatives of `func` at the point `x`. For a scalar-valued function these are the gradient vector and the Hessian matrix (see `grad` and `hessian`). For a vector-valued function the first derivative is the Jacobian matrix (see `jacobian`). For the Richardson method, `method.args=list(eps=1e-4, d=0.0001, zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2)` is set as the default. See `grad` for more details on the Richardson extrapolation parameters.

A simple approximation to the first order derivative with respect to x_i is

f'_{i}(x) = [f(x_{1}, …, x_{i}+d, …, x_{n}) - f(x_{1}, …, x_{i}-d, …, x_{n})] / (2*d)

A simple approximation to the second order derivative with respect to x_i is

f''_{i}(x) = [f(x_{1}, …, x_{i}+d, …, x_{n}) - 2*f(x_{1}, …, x_{n}) + f(x_{1}, …, x_{i}-d, …, x_{n})] / d^2

The second order derivative with respect to x_i, x_j is

f''_{i,j}(x) = [f(x_{1}, …, x_{i}+d, …, x_{j}+d, …, x_{n}) - 2*f(x_{1}, …, x_{n}) + f(x_{1}, …, x_{i}-d, …, x_{j}-d, …, x_{n})] / (2*d^2) - (f''_{i}(x) + f''_{j}(x)) / 2

Richardson's extrapolation is based on these formulas, with `d` being reduced in the extrapolation iterations. In the code, `d` is scaled to accommodate parameters of different magnitudes.
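The three simple approximations above can be checked directly in base R. The sketch below (not numDeriv's implementation, which additionally scales and extrapolates `d`) applies them to a test function with known analytic derivatives:

```r
## Central-difference formulas from above, applied to f(x) = x1^2 * x2,
## for which df/dx1 = 2*x1*x2, d2f/dx1^2 = 2*x2, d2f/dx1dx2 = 2*x1.
f <- function(x) x[1]^2 * x[2]
x <- c(2, 3)
d <- 1e-4                      # fixed step; genD refines this via extrapolation
e1 <- c(d, 0); e2 <- c(0, d)   # steps along x_1 and x_2

# f'_1(x) = [f(x_1+d, x_2) - f(x_1-d, x_2)] / (2*d); analytic value 12
g1 <- (f(x + e1) - f(x - e1)) / (2 * d)

# f''_1(x) = [f(x_1+d, x_2) - 2*f(x) + f(x_1-d, x_2)] / d^2; analytic value 6
h11 <- (f(x + e1) - 2 * f(x) + f(x - e1)) / d^2
h22 <- (f(x + e2) - 2 * f(x) + f(x - e2)) / d^2   # analytic value 0

# f''_{1,2}(x) = [f(x_1+d, x_2+d) - 2*f(x) + f(x_1-d, x_2-d)] / (2*d^2)
#               - (f''_1(x) + f''_2(x)) / 2; analytic value 4
h12 <- (f(x + e1 + e2) - 2 * f(x) + f(x - e1 - e2)) / (2 * d^2) - (h11 + h22) / 2
```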

`genD` does `1 + r (N^2 + N)` evaluations of the function `f`, where `N` is the length of `x`.

## Value

A list with elements as follows:

- `D`: a matrix of first- and second-order partial derivatives organized in the same manner as Bates and Watts: the number of rows equals the length of the result of `func`; the first `p` columns are the Jacobian; and the next `p(p+1)/2` columns are the lower triangle of the second derivative (which is the Hessian for a scalar-valued `func`).
- `p`: the length of `x` (the dimension of the parameter space).
- `f0`: the function value at the point where the matrix `D` was calculated.

The `genD` arguments `func`, `x`, `d`, `method`, and `method.args` are also returned in the list.
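A small helper can make the column layout of `D` concrete. The function below is hypothetical (not part of numDeriv) and assumes the lower triangle is stored row-wise after the `p` Jacobian columns, i.e. in the order (1,1), (2,1), (2,2), (3,1), …:

```r
## Hypothetical helper: column of D holding the second derivative w.r.t.
## (x_i, x_j) with i >= j, assuming row-wise lower-triangle ordering
## after the p Jacobian columns.
D_col <- function(i, j, p) {
  stopifnot(i >= j, i <= p)
  p + i * (i - 1) / 2 + j
}

D_col(1, 1, p = 3)  # 4: first column after the Jacobian block
D_col(2, 1, p = 3)  # 5
D_col(3, 3, p = 3)  # 9: last of the p + p*(p+1)/2 = 9 columns
```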

## References

Linfield, G.R. and Penny, J.E.T. (1989) "Microcomputers in Numerical Analysis." Halsted Press.

Bates, D.M. and Watts, D. (1980) "Relative Curvature Measures of Nonlinearity." Journal of the Royal Statistical Society, Series B, 42:1-25.

Bates, D.M. and Watts, D. (1988) "Non-linear Regression Analysis and Its Applications." Wiley.

## See Also

`hessian`, `grad`

## Examples

```r
func <- function(x) c(x, x, x^2)
z <- genD(func, c(2, 2, 5))
```
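Continuing this example (requires the numDeriv package): here `x` has length `p = 3` and `func` returns a vector of length 9, so by the Value section `D` should have 9 rows and `3 + 3*4/2 = 9` columns:

```r
library(numDeriv)

func <- function(x) c(x, x, x^2)
z <- genD(func, c(2, 2, 5))

dim(z$D)   # 9 rows (length of func's result), 3 Jacobian + 6 second-derivative columns
z$p        # 3, the length of x
z$f0       # func at c(2, 2, 5): c(2, 2, 5, 2, 2, 5, 4, 4, 25)
```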


numDeriv documentation built on June 6, 2019, 5:07 p.m.