numericGradient: Functions to Calculate Numeric Derivatives


View source: R/numericGradient.R

Description

Calculate the (central) numeric gradient and Hessian, including for vector-valued functions.

Usage

numericGradient(f, t0, eps=1e-06, fixed, ...)
numericHessian(f, grad=NULL, t0, eps=1e-06, fixed, ...)
numericNHessian(f, t0, eps=1e-6, fixed, ...)

Arguments

f

function to be differentiated. The first argument must be the parameter vector with respect to which it is differentiated. For the numeric gradient, f may return a (numeric) vector; for the Hessian it should return a numeric scalar.

grad

function, gradient of f

t0

vector, the parameter values

eps

numeric, the step for numeric differentiation

fixed

logical index vector, fixed parameters. The derivative is calculated only with respect to the parameters for which fixed == FALSE; NA is returned for the fixed parameters. If missing, all parameters are treated as active (see the example following this argument list).

...

further arguments for f
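
As a minimal sketch of the fixed argument, reusing the Gaussian bell function from the Examples section below, derivatives are returned only for the non-fixed parameters:

f0 <- function(t0) exp(-t0[1]^2 - t0[2]^2)
# keep the first parameter fixed: its column of the gradient is returned as NA
numericGradient(f0, t0 = c(1, 2), fixed = c(TRUE, FALSE))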

Details

numericGradient numerically differentiates a (vector-valued) function with respect to its (vector-valued) argument. If the function's value is an N_val x 1 vector and the argument is an N_par x 1 vector, the resulting gradient is an N_val x N_par matrix.
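
For instance, a function mapping a 3-vector parameter to a 2-vector value should yield a 2 x 3 gradient matrix; the function fv below is hypothetical and used purely for illustration:

# hypothetical vector-valued function: 2-vector value, 3-vector parameter
fv <- function(t0) c(sum(t0^2), prod(t0))
numericGradient(fv, t0 = c(1, 2, 3))   # a 2 x 3 matrix of partial derivatives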

numericHessian checks whether a gradient function is supplied. If it is, the Hessian is computed as the numeric gradient of that gradient; if not, the full numeric Hessian is computed (numericNHessian).
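
To see the "gradient of the gradient" idea in isolation (a sketch only, using the same analytic gradient as in the Examples section; numericHessian performs an equivalent computation internally when grad is supplied):

gradf0 <- function(t0) -2 * t0 * exp(-t0[1]^2 - t0[2]^2)
numericGradient(gradf0, t0 = c(1, 2))   # approximates the 2 x 2 Hessian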

Value

Matrix. For numericGradient, the number of rows is equal to the length of the function value vector, and the number of columns is equal to the length of the parameter vector.

For numericHessian, both the number of rows and the number of columns equal the length of the parameter vector.

Warning

Be careful when using numerical differentiation in optimization routines. Although quite precise in simple cases, it may perform very poorly in more complicated settings.

Author(s)

Ott Toomet

See Also

compareDerivatives, deriv

Examples

# A simple example with Gaussian bell surface
f0 <- function(t0) exp(-t0[1]^2 - t0[2]^2)
numericGradient(f0, c(1,2))
numericHessian(f0, t0=c(1,2))

# An example with the analytic gradient
gradf0 <- function(t0) -2*t0*f0(t0)
numericHessian(f0, gradf0, t0=c(1,2))
# The results should be similar to those in the previous case

# The central numeric derivatives are often quite precise
compareDerivatives(f0, gradf0, t0=1:2)
# The difference is around 1e-10

Example output

            [,1]        [,2]
[1,] -0.01347589 -0.02695179
           [,1]       [,2]
[1,] 0.01348748 0.05390306
[2,] 0.05390306 0.09432906
           [,1]       [,2]
[1,] 0.01347589 0.05390358
[2,] 0.05390358 0.09433126
-------- compare derivatives -------- 
Note: analytic gradient is vector.  Transforming into a matrix form
Function value:
[1] 0.006737947
Dim of analytic gradient: 1 2 
       numeric          : 1 2 
t0
[1] 1 2
analytic gradient
            [,1]        [,2]
[1,] -0.01347589 -0.02695179
numeric gradient
            [,1]        [,2]
[1,] -0.01347589 -0.02695179
(anal-num)/(0.5*(abs(anal)+abs(num)))
              [,1]       [,2]
[1,] -2.763538e-10 -5.108e-11
Max relative difference: 2.763538e-10 
-------- END of compare derivatives -------- 
