# Gradient of a Vector Valued Function

### Description

Calculate the m by n numerical approximation of the gradient (Jacobian) of a real m-vector valued function with an n-vector argument.

### Usage

```
jacobian(func, x, method="Richardson", side=NULL,
         method.args=list(), ...)
```

### Arguments

`func`: a function with a real (vector) result.

`x`: a real scalar or vector argument to `func`, indicating the point at which the gradient is to be calculated.

`method`: one of `"Richardson"`, `"simple"`, or `"complex"`, indicating the method to use for the approximation.

`method.args`: arguments passed to `method`. See the Details section and `grad`.

`...`: any additional arguments passed to `func`.

`side`: an indication of whether one-sided derivatives should be attempted (see details in function `grad`).

### Details

For *f: R^n -> R^m*, calculate the *m x n* Jacobian *dy/dx*. The function `jacobian` calculates a numerical approximation of the first derivative of `func` at the point `x`. Any additional arguments in `...` are also passed to `func`, but the gradient is not calculated with respect to these additional arguments.

If `method` is `"Richardson"`, the calculation is done by Richardson's extrapolation. See `grad` for more details. For this method

```
method.args=list(eps=1e-4, d=0.0001,
  zero.tol=sqrt(.Machine$double.eps/7e-7), r=4, v=2, show.details=FALSE)
```

is set as the default.
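The idea behind Richardson extrapolation can be sketched in a few lines. The following is an illustrative Python sketch, not numDeriv's implementation: it builds `r` central-difference estimates of a scalar derivative with the step shrinking by a factor `v` each time (the names `d`, `r`, and `v` are chosen to mirror the `method.args` above), then repeatedly combines adjacent estimates to cancel the leading error terms.

```python
import math

def richardson_derivative(f, x, d=1e-1, r=4, v=2):
    # Build r central-difference estimates, shrinking the step by factor v each time.
    h = d
    a = []
    for _ in range(r):
        a.append((f(x + h) - f(x - h)) / (2 * h))
        h /= v
    # Each extrapolation pass cancels the next even-order error term
    # (central differences have error terms in h^2, h^4, ...).
    for m in range(1, r):
        factor = v ** (2 * m)
        a = [(factor * a[i + 1] - a[i]) / (factor - 1) for i in range(len(a) - 1)]
    return a[0]

# Example: derivative of sin at 1.0 should be close to cos(1.0)
approx = richardson_derivative(math.sin, 1.0)
```

With four estimates and three extrapolation passes, the result is typically far more accurate than any single central difference at the same step sizes.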

If `method` is `"simple"`, the calculation is done using a simple epsilon difference. For this method `method.args=list(eps=1e-4)` is the default, and only `eps` is used by this method.
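A simple epsilon difference amounts to perturbing each coordinate of `x` in turn and stacking the resulting difference quotients into an m by n matrix. This is a minimal Python sketch of that technique (the function name `simple_jacobian` and the pure-list representation are illustrative, not numDeriv's code):

```python
import math

def simple_jacobian(func, x, eps=1e-4):
    # Forward-difference approximation: column j of the Jacobian is
    # (func(x + eps*e_j) - func(x)) / eps, where e_j is the j-th unit vector.
    f0 = func(x)
    m, n = len(f0), len(x)
    jac = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x)
        xp[j] += eps
        fj = func(xp)
        for i in range(m):
            jac[i][j] = (fj[i] - f0[i]) / eps
    return jac

# Example: f(x) = (sin(x0), cos(x1)) at x = (0, 0); the exact Jacobian
# is [[1, 0], [0, 0]], and the forward difference is accurate to O(eps).
J = simple_jacobian(lambda x: [math.sin(x[0]), math.cos(x[1])], [0.0, 0.0])
```

The forward difference has O(eps) truncation error, which is why the Richardson and complex-step methods are usually preferred when `func` permits them.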

If `method` is `"complex"`, the calculation is done using the complex-step derivative approach. See the additional comments in `grad` before choosing this method. For method `"complex"`, `method.args` is ignored: the algorithm uses an `eps` of `.Machine$double.eps`, which cannot (and should not) be modified.
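The complex-step approach works because, for an analytic `f`, the imaginary part of `f(x + i*eps)` equals `eps * f'(x)` up to O(eps^3), with no subtractive cancellation; the step can therefore be taken at machine-precision scale. A minimal Python sketch of the technique (again illustrative, not numDeriv's code; the `eps` default here loosely mirrors the machine-epsilon-sized step described above):

```python
import cmath
import math

def complex_step_derivative(f, x, eps=1e-16):
    # f must accept complex arguments. Im(f(x + i*eps)) / eps approximates
    # f'(x) with truncation error O(eps^2) and no cancellation error,
    # so a tiny eps gives near machine-precision accuracy.
    return f(x + 1j * eps).imag / eps

# Example: derivative of sin at 1.0, essentially exact in double precision.
d = complex_step_derivative(cmath.sin, 1.0)
```

The catch is the requirement stated in `grad`: `func` must be analytic and implemented so that it propagates complex arguments correctly, which rules out code using `abs`, comparisons on the perturbed values, and similar non-analytic operations.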

### Value

A real m by n matrix.

### See Also

`grad`, `hessian`, `numericDeriv`

### Examples

```
func2 <- function(x) c(sin(x), cos(x))
x <- (0:1) * 2 * pi
jacobian(func2, x)
jacobian(func2, x, "complex")
```