Symbolic and analytical derivatives in R


rm(list=ls()) # clear workspace for each major section

\pagebreak

This article is an attempt to catalog and illustrate the various capabilities in the R statistical computing system to perform analytic or symbolic differentiation. There are many traps and pitfalls for the unwary in doing this, and it is hoped that this rather long treatment will serve to record these and show how to avoid them, and how to reliably compute the derivatives desired. Derivative capabilities of R are in the base system (essentially the functions D() and deriv()) and in different packages, namely nlsr, Deriv, and Ryacas. General tools for approximations to derivatives are found in the package numDeriv as well as optimx. Other approximations may be embedded in various packages, but not necessarily exported for use in scripts or packages.

As a way of recording where attention is needed either to this document or to the functions and methods described, I have put double question marks in various places.

Note: To distinguish output results (which are prefaced '##' by knitr), I have attempted to put comments in the R code with the preface '#-'.

Available analytic differentiation tools

R has a number of tools for finding analytic derivatives.

How the tools are used

This is an overview section to give an idea of the capabilities. It is not intended to be exhaustive, but to give pointers to how the tools can be used quickly.

An important issue that may cause a lot of difficulty is iterating the tools. That is, we compute a derivative, then want to apply a tool to that derivative to get a second derivative. In doing so, we need to be careful that the type (class??) of the quantity output by the tool is passed back into the tool in a form that will generate a derivative expression. Some examples are presented.

We also note that the Deriv package will give a result in cases when the input is undefined. This is clearly a bug. There is an example below in the section on Deriv.

stats (i.e., the base R installation)

D(), deriv() and deriv3(): As deriv3() is stated to be the same as deriv() but with argument hessian=TRUE, we will for now only consider the first two.

dx2x <- deriv(~ x^2, "x") 
dx2x
mode(dx2x)
str(dx2x)
x <- -1:2
eval(dx2x) # This is evaluated at -1, 0, 1, 2, with the result in the "gradient" attribute
# Note that we cannot (easily) differentiate this again.
firstd <- attr(dx2x,"gradient")
str(firstd)
#  ... and the following gives an error
d2x2x <- try(deriv(firstd, "x"))
str(d2x2x)
#- Build a function from the expression
fdx2x<-function(x){eval(dx2x)}
fdx2x(1)
fdx2x(3.21)
fdx2x(1:5)
#- Now try D()
Dx2x <- D(expression(x^2), "x")
Dx2x
x <- -1:2
eval(Dx2x)
# We can differentiate again
D2x2x <- D(Dx2x,"x")
D2x2x
eval(D2x2x) #- But we don't get a vector -- could be an issue in gradients/Jacobians
#- Note how we handle an expression stored in a string via parse(text= )
sx2 <- "x^2"
sDx2x <- D(parse(text=sx2), "x")
sDx2x
#- But watch out! The following "seems" to work, but the answer is not as intended.
#- The problem is that the first argument is evaluated before being used.
#- Since x exists as a numeric vector, D() sees only the numbers x^2, not the expression.
x
Dx2xx <- D(x^2, "x")
Dx2xx
eval(Dx2xx)
#-  Something 'tougher':
trig.exp <- expression(sin(cos(x + y^2)))
( D.sc <- D(trig.exp, "x") )
all.equal(D(trig.exp[[1]], "x"), D.sc)
( dxy <- deriv(trig.exp, c("x", "y")) )
y <- 1
eval(dxy)
eval(D.sc)
#-  function returned:
deriv((y ~ sin(cos(x) * y)), c("x","y"), func = TRUE)
#- ?? Surely there is an error, since the documentation says no lhs! i.e.,
#- "expr: a 'expression' or 'call' or (except 'D') a formula with no lhs."
#-  function with defaulted arguments:
(fx <- deriv(y ~ b0 + b1 * 2^(-x/th), c("b0", "b1", "th"),
             function(b0, b1, th, x = 1:7){} ) )
fx(2, 3, 4)
#-  First derivative
D(expression(x^2), "x")
#-  stopifnot(D(as.name("x"), "x") == 1) #- A way of testing. 
#- This works by coercing "x" to name/symbol, and derivative should be 1.
#- Would fail only if "x" cannot be so coerced. How could this happen??
#-  Higher derivatives showing deriv3
myd3 <- deriv3(y ~ b0 + b1 * 2^(-x/th), c("b0", "b1", "th"),
     c("b0", "b1", "th", "x") )
myd3(2,3,4, x=1:7)
#- check against deriv()
myd3a <- deriv(y ~ b0 + b1 * 2^(-x/th), c("b0", "b1", "th"),
     c("b0", "b1", "th", "x"), hessian=TRUE )
myd3a(2,3,4, x=1:7)
identical(myd3a, myd3) #- Remember to check things!

#-  Higher derivatives:
DD <- function(expr, name, order = 1) {
   if(order < 1) stop("'order' must be >= 1")
   if(order == 1) D(expr, name)
   else DD(D(expr, name), name, order - 1)
}
DD(expression(sin(x^2)), "x", 3)
#-  showing the limits of the internal "simplify()" :
#-  -sin(x^2) * (2 * x) * 2 + ((cos(x^2) * (2 * x) * (2 * x) + sin(x^2) *
#-     2) * (2 * x) + sin(x^2) * (2 * x) * 2)

nlsr

library(nlsr)
dx2xn <- nlsDeriv(~ x^2, "x")
dx2xn
mode(dx2xn)
str(dx2xn)
x <- -1:2
eval(dx2xn) # This is evaluated at -1, 0, 1, 2, BUT result is returned directly,
#-  NOT in "gradient" attribute
firstdn <- dx2xn
str(firstdn)
d2x2xn <- nlsDeriv(firstdn, "x")
d2x2xn
d2x2xnF <- nlsDeriv(firstdn, "x", do_substitute=FALSE)
d2x2xnF # in this case we get the same result
d2x2xnT <- nlsDeriv(firstdn, "x", do_substitute=TRUE)
d2x2xnT # 0 ## WATCH OUT

#- ?? We can iterate the derivatives
nlsDeriv(d2x2xn, "x")

nlsDeriv(x^2, "x")# 0
nlsDeriv(x^2, "x", do_substitute=FALSE)# 0
nlsDeriv(x^2, "x", do_substitute=TRUE) # 2 * x
nlsDeriv(~ x^2, "x") # 2 * x
nlsDeriv(~ x^2, "x", do_substitute=FALSE) # 2 * x
nlsDeriv(~ x^2, "x", do_substitute=TRUE) # 2 * x

#?? firstde <- quote(firstd)
#?? firstde
#?? firstde <- bquote(firstd)
#?? firstde
#?? nlsDeriv(firstde, "x")
d2 <- nlsDeriv(2 * x, "x")
str(d2)
d2
#?? firstc <- as.call(firstd)
#?? nlsDeriv(firstc, "x")
#- Build a function from the expression
#?? fdx2xn<-function(x){eval(dx2xn)}
#?? fdx2xn(1)
#?? fdx2xn(3.21)
#?? fdx2xn(1:5)

The tool codeDeriv returns an R expression to evaluate the derivative efficiently. fnDeriv wraps it in a function. By default the arguments to the function are constructed from all variables in the expression. In the example below this includes x.

codeDeriv(parse(text="b0 + b1 * 2^(-x/th)"), c("b0", "b1", "th")) 
#- Include parameters as arguments
fj.1 <- fnDeriv(parse(text="b0 + b1 * 2^(-x/th)"), c("b0", "b1", "th")) 
head(fj.1)
fj.1(1,2,3,4)
#- Get all parameters from the calling environment
fj.2 <- fnDeriv(parse(text="b0 + b1 * 2^(-x/th)"), c("b0", "b1", "th"),
        args = character())
head(fj.2)
b0 <- 1
b1 <- 2
x <- 3
th <- 4 
fj.2()

#- Just use an expression
fje <- codeDeriv(parse(text="b0 + b1 * 2^(-x/th)"), c("b0", "b1", "th"))
eval(fje)

dx2xnf <- fnDeriv(~ x^2, "x") #- Use tilde
dx2xnf <- fnDeriv(expression(x^2), "x") #- or use expression()
dx2xnf
mode(dx2xnf)
str(dx2xnf)
x <- -1:2
#?? eval(dx2xnf) # This is evaluated at -1, 0, 1, 2, BUT result is returned directly,
#-  NOT in "gradient" attribute
# Note that we cannot (easily) differentiate this again.
# firstd <- dx2xnf
# str(firstd)
# d2x2xnf <- try(nlsDeriv(firstd, "x")) #- this APPEARS to work, but WRONG answer 
# str(d2x2xnf)
# d2x2xnf
# eval(d2x2xnf)

# dx2xnfh <- fnDeriv(expression(x^2), "x", hessian=TRUE) #- Try for second derivatives
# dx2xnfh
# mode(dx2xnfh)
# str(dx2xnfh)
# x <- -1
# eval(dx2xnfh) # This is evaluated at -1, 0, 1, 2, BUT result is returned directly,

Deriv

The following examples are drawn from the example(Deriv) contained in the Deriv package.

require(Deriv)

f <- function(x) x^2
Deriv(f)
#- Should see
#- function (x)
#- 2 * x
#- Now save the derivative
f1 <- Deriv(f)
f1 #- print it
f2 <- Deriv(f1) #- and take second derivative
f2 #- print it

f <- function(x, y) sin(x) * cos(y)
f_ <- Deriv(f)
f_ #- print it
#- Should see
#- function (x, y)
#- c(x = cos(x) * cos(y), y = -(sin(x) * sin(y)))
f_(3, 4)
#- Should see
#-              x         y
#- [1,] 0.6471023 0.1068000

f2 <- Deriv(~ f(x, y^2), "y") #- This has a tilde to render the 1st argument as a formula object
#- Also we are substituting in y^2 for y
f2 #- print it
#- -(2 * (y * sin(x) * sin(y^2)))
mode(f2) #- check what type of object it is
arg1 <- ~ f(x,y^2)
mode(arg1) #- check the type
f2a <- Deriv(arg1, "y")
f2a #- and print to see if same as before
#- try evaluation of f using current x and y
x
y
f(x,y^2)
eval(f2a) #- We need x and y defined to do this.

f3 <- Deriv(quote(f(x, y^2)), c("x", "y"), cache.exp=FALSE) #- check cache.exp operation
#- Note that we need to quote or will get evaluation at current x, y values (if they exist)
f3 #- print it
#- c(x = cos(x) * cos(y^2), y = -(2 * (y * sin(x) * sin(y^2))))
f3c <- Deriv(quote(f(x, y^2)), c("x", "y"), cache.exp=TRUE) #- check cache.exp operation
f3c #- print it
#- Now want to evaluate the results
#- First must provide some data
x <- 3
y <- 4
eval(f3c)
#- Should see
#- x         y 
#- 0.9480757 0.3250313 
eval(f3) #- check this also
#- or we can create functions
f3cf <- function(x, y){eval(f3c)}
f3cf(x=1, y=2)
#-          x          y 
#- -0.3531652  2.5473094 
f3f <- function(x,y){eval(f3)}
f3f(x=3, y=4)
#-         x         y 
#- 0.9480757 0.3250313 

#- try an expression
Deriv(expression(sin(x^2) * y), "x")
#- should see
#- expression(2 * (x * y * cos(x^2)))

#- quoted string
Deriv("sin(x^2) * y", "x") # differentiate only by x
#- Should see
#- "2 * (x * y * cos(x^2))"

Deriv("sin(x^2) * y", cache.exp=FALSE) #- differentiate by all variables (here by x and y)
#- Note that default is to differentiate by all variables.
#- Should see
#- "c(x = 2 * (x * y * cos(x^2)), y = sin(x^2))"

#- Compound function example (here abs(x) smoothed near 0)
#- Note that this introduces the possibility of `if` statements in the code
#- BUT (JN) it seems to give back a quoted string, so we must parse.
fc <- function(x, h=0.1) if (abs(x) < h) 0.5*h*(x/h)**2 else abs(x)-0.5*h
efc1 <- Deriv("fc(x)", "x", cache.exp=FALSE) 
#- "if (abs(x) < h) x/h else sign(x)"
#- A few checks on the results
efc1

fc1 <- function(x,h=0.1){ eval(parse(text=efc1)) }
fc1
## h=0.1
fc1(1)
fc1(0.001)
fc1(-0.001)
fc1(-10)
fc1(0.001, 1)

#- Example of a first argument that cannot be evaluated in the current environment:
  try(suppressWarnings(rm("xx", "yy"))) #- Make sure there are no objects xx or yy
  Deriv(~ xx^2+yy^2)
#- Should show
#- c(xx = 2 * xx, yy = 2 * yy)
#- ?? What is the meaning / purpose of this construct?

#- ?? Is the following really AD?
#- Automatic differentiation (AD), note intermediate variable 'd' assignment
Deriv(~{d <- ((x-m)/s)^2; exp(-0.5*d)}, "x")
# Note that the result we see does NOT match what follows in the example(Deriv) (JN ??)
#{
#   d <- ((x - m)/s)^2
#   .d_x <- 2 * ((x - m)/s^2)
#   -(0.5 * (.d_x * exp(-(0.5 * d))))
#}
#- For some reason the intermediate variable d is NOT included.??


#- Custom derivative rule. Note that this needs explanations??
  myfun <- function(x, y=TRUE) NULL #- do something useful
  dmyfun <- function(x, y=TRUE) NULL #- myfun derivative by x.
  drule[["myfun"]] <- alist(x=dmyfun(x, y), y=NULL) #- y is just a logical
  Deriv(myfun(z^2, FALSE), "z")
  # 2 * (z * dmyfun(z^2, FALSE))

#- Differentiation by list components
  theta <- list(m=0.1, sd=2.) #- Why do we set values??
  x <- names(theta) #- and why these particular names??
  names(x)=rep("theta", length(theta))
  Deriv(~exp(-(x-theta$m)**2/(2*theta$sd)), x, cache.exp=FALSE)
#- Should show the following (but why??)
#- c(theta_m = exp(-((x - theta$m)^2/(2 * theta$sd))) *
#-  (x - theta$m)/theta$sd, theta_sd = 2 * (exp(-((x - theta$m)^2/
#-  (2 * theta$sd))) * (x - theta$m)^2/(2 * theta$sd)^2))
lderiv <-  Deriv(~exp(-(x-theta$m)**2/(2*theta$sd)), x, cache.exp=FALSE)
fld <- function(x){ eval(lderiv)} #- put this in a function
fld(2) #- and evaluate at a value

Deriv has some design choices that can get the user into trouble. The following example shows one such problem.

library(Deriv)
rm(x) # ensures x is undefined
Deriv(~ x, "x")  # returns [1] 1 -- clearly a bug!
Deriv(~ x^2, "x")   # returns 2 * x
x <- quote(x^2)
Deriv(x, "x") # returns 2 * x

By comparison, nlsr

rm(x) # in case it is defined
try(nlsDeriv(x, "x")  ) # fails, not a formula
try(nlsDeriv(as.expression("x"), "x")  ) # expression(NULL)
try(nlsDeriv(~x, "x")  ) # 1
try(nlsDeriv(x^2, "x"))   # fails
try(nlsDeriv(~x^2, "x")) # 2 * x
x <- quote(x^2)
try(nlsDeriv(x, "x")) # returns 2 * x

Ryacas

There is at least one other symbolic package for R. Here we look at Ryacas. The structures for using yacas tools do not seem at the time of writing (2016-10-21) to be suitable for working with nonlinear least squares or optimization facilities of R. Thus, for the moment, we will not pursue the derivatives available in Ryacas beyond the following example provided by Gabor Grothendieck.

dnlsr <- nlsr::nlsDeriv(~ sin(x+y), "x")
print(dnlsr)
class(dnlsr)
detach("package:nlsr", unload=TRUE)
detach("package:Deriv", unload=TRUE)

## New Ryacas mechanism as of 2019-8-29 from mikl@math.aau.dk (Mikkel Meyer Andersen)

library(Ryacas)
yac_str("D(x) Sin(x+y)") 

# or if an expression is needed:
ex <- yac_expr("D(x) Sin(x+y)")
ex
#- expression(cos(x + y))
eval(ex, list(x = pi, y = pi/2))

## Previous syntax for Ryacas was
## x <- Sym("x")
## y <- Sym("y")
## dryacas <- deriv(sin(x+y), x)
## print(dryacas)
## class(dryacas)

detach("package:Ryacas", unload=TRUE)

Derivatives and simplifications -- base R

See specific notes either in comments or at the end of the section.

The help page for D lists the functions for which derivatives are known: "The internal code knows about the arithmetic operators +, -, *, / and ^, and the single-variable functions exp, log, sin, cos, tan, sinh, cosh, sqrt, pnorm, dnorm, asin, acos, atan, gamma, lgamma, digamma and trigamma, as well as psigamma for one or two arguments (but derivative only with respect to the first)."

Derivatives and simplifications -- package nlsr

This package supports the derivatives that D supports, as well as a few others, and users can add their own definitions. The current list is

ls(nlsr::sysDerivs)
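
As a hedged illustration (mine, not part of the original text), here is how a user might add a rule with newDeriv() and have nlsDeriv() use it. The function cube and its rule are invented for the example, and the D(x) placeholder for the chain-rule factor is assumed to follow the pattern used in nlsr's derivative table.

cube <- function(z) z^3 #- an invented function, not part of nlsr
nlsr::newDeriv(cube(x), 3 * x^2 * D(x)) #- D(x) stands for the derivative of the argument
nlsr::nlsDeriv(~ cube(t), "t") #- should give 3 * t^2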

Derivatives table

Here is a slightly expanded testing of the elements of the nlsr derivatives table.

require(nlsr)
## Try different ways to supply the log function
aDeriv <- nlsDeriv(~ log(x), "x")
class(aDeriv)
aDeriv
aderiv <- try(deriv( ~ log(x), "x"))
class(aderiv)
aderiv
aD <- D(expression(log(x)), "x")
class(aD)
aD
cat("but \n")
try(D( "~ log(x)", "x")) # fails -- gives NA rather than expected answer due to quotes
try(D( ~ log(x), "x"))
interm <- ~ log(x)
interm
class(interm)
interme <- as.expression(interm)
class(interme)
try(D(interme, "x"))
try(deriv(interme, "x"))
try(deriv(interm, "x"))


nlsDeriv(~ log(x, base=3), "x" ) # OK
try(D(expression(log(x, base=3)), "x" )) # fails - only single-argument calls supported
try(deriv(~ log(x, base=3), "x" )) # fails - only single-argument calls supported
try(deriv(expression(log(x, base=3)), "x" )) # fails - only single-argument calls supported
try(deriv3(expression(log(x, base=3)), "x" )) # fails - only single-argument calls supported
fnDeriv(quote(log(x, base=3)), "x" )

nlsDeriv(~ exp(x), "x")
D(expression(exp(x)), "x") # OK
deriv(~exp(x), "x") # OK, but much more complicated
fnDeriv(quote(exp(x)), "x")

nlsDeriv(~ sin(x), "x")
D(expression(sin(x)), "x")
deriv(~sin(x), "x")
fnDeriv(quote(sin(x)), "x")

nlsDeriv(~ cos(x), "x")
D(expression(cos(x)), "x")
deriv(~ cos(x), "x")
fnDeriv(quote(cos(x)), "x")

nlsDeriv(~ tan(x), "x")
D(expression(tan(x)), "x")
deriv(~ tan(x), "x")
fnDeriv(quote(tan(x)), "x")

nlsDeriv(~ sinh(x), "x")
D(expression(sinh(x)), "x")
deriv(~sinh(x), "x")
fnDeriv(quote(sinh(x)), "x")

nlsDeriv(~ cosh(x), "x")
D(expression(cosh(x)), "x")
deriv(~cosh(x), "x")
fnDeriv(quote(cosh(x)), "x")

nlsDeriv(~ sqrt(x), "x")
D(expression(sqrt(x)), "x")
deriv(~sqrt(x), "x")
fnDeriv(quote(sqrt(x)), "x")

nlsDeriv(~ pnorm(q), "q")
D(expression(pnorm(q)), "q")
deriv(~pnorm(q), "q")
fnDeriv(quote(pnorm(q)), "q")

nlsDeriv(~ dnorm(x, mean), "mean")
D(expression(dnorm(x, mean)), "mean")
deriv(~dnorm(x, mean), "mean")
fnDeriv(quote(dnorm(x, mean)), "mean")

nlsDeriv(~ asin(x), "x")
D(expression(asin(x)), "x")
deriv(~asin(x), "x")
fnDeriv(quote(asin(x)), "x")

nlsDeriv(~ acos(x), "x")
D(expression(acos(x)), "x")
deriv(~acos(x), "x")
fnDeriv(quote(acos(x)), "x")

nlsDeriv(~ atan(x), "x")
D(expression(atan(x)), "x")
deriv(~atan(x), "x")
fnDeriv(quote(atan(x)), "x")

nlsDeriv(~ gamma(x), "x")
D(expression(gamma(x)), "x")
deriv(~gamma(x), "x")
fnDeriv(quote(gamma(x)), "x")

nlsDeriv(~ lgamma(x), "x")
D(expression(lgamma(x)), "x")
deriv(~lgamma(x), "x")
fnDeriv(quote(lgamma(x)), "x")

nlsDeriv(~ digamma(x), "x")
D(expression(digamma(x)), "x")
deriv(~digamma(x), "x")
fnDeriv(quote(digamma(x)), "x")

nlsDeriv(~ trigamma(x), "x")
D(expression(trigamma(x)), "x")
deriv(~trigamma(x), "x")
fnDeriv(quote(trigamma(x)), "x")

nlsDeriv(~ psigamma(x, deriv = 5), "x")
D(expression(psigamma(x, deriv = 5)), "x")
deriv(~psigamma(x, deriv = 5), "x")
fnDeriv(quote(psigamma(x, deriv = 5)), "x")

nlsDeriv(~ x*y, "x")
D(expression(x*y), "x")
deriv(~x*y, "x")
fnDeriv(quote(x*y), "x")

nlsDeriv(~ x/y, "x")
D(expression(x/y), "x")
deriv(~x/y, "x")
fnDeriv(quote(x/y), "x")

nlsDeriv(~ x^y, "x")
D(expression(x^y), "x")
deriv(~x^y, "x")
fnDeriv(quote(x^y), "x")

nlsDeriv(~ (x), "x")
D(expression((x)), "x")
deriv(~(x), "x")
fnDeriv(quote((x)), "x")

nlsDeriv(~ +x, "x")
D(expression(+x), "x")
deriv(~ +x, "x")
fnDeriv(quote(+x), "x")

nlsDeriv(~ -x, "x")
D(expression(- x), "x")
deriv(~ -x, "x")
fnDeriv(quote(-x), "x")

nlsDeriv(~ abs(x), "x")
try(D(expression(abs(x)), "x")) # 'abs' not in derivatives table
try(deriv(~ abs(x), "x"))
fnDeriv(quote(abs(x)), "x")

nlsDeriv(~ sign(x), "x")
try(D(expression(sign(x)), "x")) # 'sign' not in derivatives table
try(deriv(~ sign(x), "x"))
fnDeriv(quote(sign(x)), "x")

Notes:

Simplifying algebraic expressions

nlsr also includes some tools for simplification of algebraic expressions, extensible by the user. Currently these involve the following functions:

ls(nlsr::sysSimplifications)
#- Remove ##? to see reproducible error
#- ?? For some reason, if we leave packages attached, we get errors.
#- Here we detach all the non-base packages and then reload nlsr
##? sessionInfo()
##? nlsSimplify(quote(+(a+b)))
##? nlsSimplify(quote(-5))
sessionInfo()
if ("Deriv" %in% loadedNamespaces()){detach("package:Deriv", unload=TRUE)} 
  #- ?? Do we need to unload too.
if ("nlsr" %in% loadedNamespaces() ){detach("package:nlsr", unload=TRUE)}
if ("Ryacas" %in% loadedNamespaces() ){detach("package:Ryacas", unload=TRUE)}
#- require(Deriv)
#- require(stats)
#- Various simplifications
#- ?? Do we need quote() to stop attempt to evaluate before applying simplification
require(nlsr)
nlsSimplify(quote(+(a+b)))
nlsSimplify(quote(-5))
nlsSimplify(quote(--(a+b)))

nlsSimplify(quote(exp(log(a+b))))
nlsSimplify(quote(exp(1)))

nlsSimplify(quote(log(exp(a+b))))
nlsSimplify(quote(log(1)))

nlsSimplify(quote(!TRUE))
nlsSimplify(quote(!FALSE))

nlsSimplify(quote((a+b)))

nlsSimplify(quote(a + b + 0))
nlsSimplify(quote(0 + a + b))
nlsSimplify(quote((a+b) + (a+b)))
nlsSimplify(quote(1 + 4))

nlsSimplify(quote(a + b - 0))
nlsSimplify(quote(0 - a - b))
nlsSimplify(quote((a+b) - (a+b)))
nlsSimplify(quote(5 - 3))

nlsSimplify(quote(0*(a+b)))
nlsSimplify(quote((a+b)*0))
nlsSimplify(quote(1L * (a+b)))
nlsSimplify(quote((a+b) * 1))
nlsSimplify(quote((-1)*(a+b)))
nlsSimplify(quote((a+b)*(-1)))
nlsSimplify(quote(2*5))

nlsSimplify(quote((a+b) / 1))
nlsSimplify(quote((a+b) / (-1)))
nlsSimplify(quote(0/(a+b)))
nlsSimplify(quote(1/3))

nlsSimplify(quote((a+b) ^ 1))
nlsSimplify(quote(2^10))

nlsSimplify(quote(log(exp(a), 3)))

nlsSimplify(quote(FALSE && b))
nlsSimplify(quote(a && TRUE))
nlsSimplify(quote(TRUE && b))

nlsSimplify(quote(a || TRUE))
nlsSimplify(quote(FALSE || b))
nlsSimplify(quote(a || FALSE))

nlsSimplify(quote(if (TRUE) a+b))
nlsSimplify(quote(if (FALSE) a+b))

nlsSimplify(quote(if (TRUE) a+b else a*b))
nlsSimplify(quote(if (FALSE) a+b else a*b))
nlsSimplify(quote(if (cond) a+b else a+b))

nlsSimplify(quote(--(a+b)))
nlsSimplify(quote(-(-(a+b))))

Derivatives and simplifications -- package Deriv

Derivatives table

Simplifications

#- ?? For some reason, if we leave packages attached, we get errors.
#- Here we detach all the non-base packages and then reload nlsr
sessionInfo()
if ("Deriv" %in% loadedNamespaces()){detach("package:Deriv", unload=TRUE)} 
  #- ?? Do we need to unload too.
if ("Deriv" %in% loadedNamespaces() ){detach("package:nlsr", unload=TRUE)}
if ("Deriv" %in% loadedNamespaces() ){detach("package:Ryacas", unload=TRUE)}
require(Deriv)
#- Various simplifications
#- ?? Do we need quote() to stop attempt to evaluate before applying simplification

Simplify(quote(+(a+b)))
Simplify(quote(-5))
Simplify(quote(--(a+b)))

Simplify(quote(exp(log(a+b))))
Simplify(quote(exp(1)))

Simplify(quote(log(exp(a+b))))
Simplify(quote(log(1)))

Simplify(quote(!TRUE))
Simplify(quote(!FALSE))

Simplify(quote((a+b)))

Simplify(quote(a + b + 0))
Simplify(quote(0 + a + b))
Simplify(quote((a+b) + (a+b)))
Simplify(quote(1 + 4))

Simplify(quote(a + b - 0))
Simplify(quote(0 - a - b))
Simplify(quote((a+b) - (a+b)))
Simplify(quote(5 - 3))

Simplify(quote(0*(a+b)))
Simplify(quote((a+b)*0))
Simplify(quote(1L * (a+b)))
Simplify(quote((a+b) * 1))
Simplify(quote((-1)*(a+b)))
Simplify(quote((a+b)*(-1)))
Simplify(quote(2*5))

Simplify(quote((a+b) / 1))
Simplify(quote((a+b) / (-1)))
Simplify(quote(0/(a+b)))
Simplify(quote(1/3))

Simplify(quote((a+b) ^ 1))
Simplify(quote(2^10))

Simplify(quote(log(exp(a), 3)))

Simplify(quote(FALSE && b))
Simplify(quote(a && TRUE))
Simplify(quote(TRUE && b))

Simplify(quote(a || TRUE))
Simplify(quote(FALSE || b))
Simplify(quote(a || FALSE))

Simplify(quote(if (TRUE) a+b))
Simplify(quote(if (FALSE) a+b))

Simplify(quote(if (TRUE) a+b else a*b))
Simplify(quote(if (FALSE) a+b else a*b))
Simplify(quote(if (cond) a+b else a+b))

#- This one is wrong... the double minus is an error, yet it works ??.
Simplify(quote(--(a+b)))
#- By comparison
Simplify(quote(-(-(a+b))))

Comparison with other approaches

Check that modelexpr() works with an ssgrfun ??

Test model2rjfun vs model2rjfunx ??

A more extensive discussion of Simplify is needed ??

Issues of programming on the language

?? Need to explain where the Deriv package comes from.

One of the key tasks with tools for derivatives is taking an object in one form or another (that is, R class) and using it as input to a symbolic function. The object may, of course, be an output from another such function, and this is one of the reasons we need to do such transformations.

We also note that the different tools for symbolic derivatives use slightly different inputs. For example, for the derivative of log(x), we have

dlogx <- nlsr::nlsDeriv(~ log(x), "x")
str(dlogx)
print(dlogx)

Unfortunately, there are complications when we have an expression object, and we need to specify that we do NOT execute the substitute() function. Here we show how to do this implicitly and with an explicit object.

dlogxs <- nlsr::nlsDeriv(expression(log(x)), "x", do_substitute=FALSE)
str(dlogxs)
print(dlogxs)
cat(as.character(dlogxs), "\n")
fne <- expression(log(x))
dlogxe <- nlsr::nlsDeriv(fne, "x", do_substitute=FALSE)
str(dlogxe)
print(dlogxe)


# base R
dblogx <- D(expression(log(x)), "x")
str(dblogx)
print(dblogx)

require(Deriv)
ddlogx <- Deriv::Deriv(expression(log(x)), "x")
str(ddlogx)
print(ddlogx)
cat(as.character(ddlogx), "\n")
ddlogxf <- ~ ddlogx
str(ddlogxf)

??? Do each example by all methods and by numDeriv and put the results in a data frame for later presentation in a table.

Do we want examples in columns or rows? Probably one function per row, with a row name that is reasonably meaningful. We probably also want an index column that is a list of strings. Can we then act on those strings to automate the whole setup?

##
# require(stats)
# require(Deriv)
# require(Ryacas)

# Various derivatives 

new <- codeDeriv(quote(1 + x + y), c("x", "y"))
old <- deriv(quote(1 + x + y), c("x", "y"))
print(new)
# Following generates a very long line on output of knitr (for markdown)
class(new)
str(new)
as.expression(new)
newf <- function(x, y){
   eval(new)
}
newf(3,5)

print(old)
class(old)
str(old)
oldf <- function(x,y){
    eval(old)
}
oldf(3,5)

Unfortunately, the inputs and outputs are not always easily transformed so that the symbolic derivatives can be found. (?? Need to codify this and provide filters so we can get things to work nicely.)

As an example, how could we take object new and embed it in a function we can then use in R? We can certainly copy and paste the output into a function template, as follows,

fnfromnew <- function(x,y){
    .value <- 1 + x + y
    .grad <- array(0, c(length(.value), 2L), list(NULL, c("x", 
    "y")))
    .grad[, "x"] <- 1
    .grad[, "y"] <- 1
    attr(.value, "gradient") <- .grad
    .value
}

print(fnfromnew(3,5))

However, we would ideally like to be able to automate this to generate functions and gradients for nonlinear least squares and optimization calculations. The same criticism applies to the object old.
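
One possible automation is sketched below. This is my own illustration, not a function from base R or nlsr: wrap_deriv() is a hypothetical helper that simply installs the language object produced by deriv() (or codeDeriv()) as the body of a newly constructed function.

wrap_deriv <- function(dexpr, pars) {
  #- accept either an expression (from deriv) or a call (from codeDeriv)
  if (is.expression(dexpr)) dexpr <- dexpr[[1]]
  f <- function() NULL
  #- one argument per parameter, with no default values
  formals(f) <- setNames(rep(alist(x = ), length(pars)), pars)
  body(f) <- dexpr
  f
}
autonew <- wrap_deriv(old, c("x", "y"))
autonew(3, 5) #- should match fnfromnew(3, 5)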

Another issue:

If we have x and y set such that the function is not admissible, then both our old and new functions give a gradient that is seemingly reasonable. While the gradient of this simple function could be considered to be defined for ANY values of x and y, I (JN) am sure most users would wish for a warning at the very least in such cases.

x <- NA
y <- Inf
print(eval(new))
print(eval(old))

SafeD

We could define a way to avoid the issue of character vs. expression (and possibly other classes) as follows:

safeD <- function(obj, var) {
   # safeguarded D() function for symbolic derivs
   if (! is.character(var) ) stop("The variable var MUST be character type")
   if (is.character(obj) ) {
       eobj <- parse(text=obj)
       result <- D(eobj, var)
   } else {
       result <- D(obj, var)
   }
}

lxy2 <- expression(log(x+y^2))
clxy2 <- "log(x+y^2)"
try(print(D(clxy2, "y")))
print(try(D(lxy2, "y")))
print(safeD(clxy2, "y"))
print(safeD(lxy2, "y"))

Indexed parameters or variables

Erin Hodgess on R-help in January 2015 raised the issue of taking the derivative of an expression that contains an indexed variable. We show the example and its resolution, then give an explanation.

zzz <- expression(y[3]*r1 + r2)
try(deriv(zzz,c("r1","r2")))
##
try(nlsr::nlsDeriv(zzz, c("r1","r2")))
try(fnDeriv(zzz, c("r1","r2")))
newDeriv(`[`(x,y), stop("no derivative when indexing"))
try(nlsr::nlsDeriv(zzz, c("r1","r2")))
try(nlsr::fnDeriv(zzz, c("r1","r2")))

Richard Heiberger pointed out that internally, R stores

y[3]

as

"["(y,3)

that is, as a function. Duncan Murdoch pointed out the availability of nlsr and the use of newDeriv() to redefine the "[" function for the purposes of derivatives.

This is not an ideal resolution, especially as we would like to be able to get the gradients of functions with respect to vectors of parameters, noted also by Sergei Sokol in the manual for package Deriv. The following examples illustrate this.

try(nlsr::nlsDeriv(zzz, "y[3]"))
try(nlsr::nlsDeriv(y3*r1+r2,"y3"))
try(nlsr::nlsDeriv(y[3]*r1+r2,"y[3]"))

\pagebreak

Appendix D: A comparison of nlsr::nlxb with nls and minpack.lm::nlsLM

R has several tools for estimating nonlinear models and minimizing sums of squares functions. Sometimes we talk of nonlinear regression and at other times of minimizing a sum of squares function, and many workers conflate these two tasks. In this appendix, some of the differences between the tools available in R for these two computational tasks are highlighted. In particular, we compare the tools from the package nlsr (@nlsr2019manual), particularly the function nlxb(), with base-R nls() and the nlsLM function of package minpack.lm (@minpacklm12). We also compare how nlsr::nlfb() and minpack.lm::nls.lm allow a sum of squares function to be minimized.

Principal differences

The main differences in the tools relate to the following features:

Derivative information

As detailed above, nlsr::nlxb() attempts to use symbolic and algorithmic tools to obtain the derivatives of the model expression needed for the Jacobian matrix, which is used to create a linearized sub-problem at each iteration of the attempt to minimize the sum of squared residuals. As discussed in the section "Analytic versus approximate Jacobians" and using the code in Appendix B, nls() and minpack.lm::nlsLM() use a very simple forward-difference approximation of the partial derivatives for the Jacobian.

Forward difference approximations are less accurate than central differences, and both are subject to numerical error when the modelling function is "flat", so that there is a large amount of digit cancellation in the subtraction necessary to compute the derivative approximation.
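
A tiny numerical illustration (my own, not from the original text) of the accuracy difference: approximate the derivative of exp() at 1 by forward and central differences and compare both with the exact value exp(1).

xx <- 1
hh <- 1e-7
exact <- exp(xx) #- analytic derivative of exp(x) at xx
fwd <- (exp(xx + hh) - exp(xx)) / hh #- forward difference approximation
ctr <- (exp(xx + hh) - exp(xx - hh)) / (2 * hh) #- central difference approximation
c(forward.error = fwd - exact, central.error = ctr - exact)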

minpack.lm::nlsLM uses the same derivatives as far as I can determine. The loss of information compared to the analytic or algorithmic derivatives of nlsr::nlxb() is important in that it can lead to Jacobian matrices that are computationally singular, where nls() will stop with "singular gradient". (It is actually the Jacobian which is singular here, and I will stay with that terminology.) minpack.lm::nlsLM() may fail to get started if the initial Jacobian is singular, but is less susceptible in general, as described in the sub-section on Marquardt stabilization which follows.

Consequences of different derivative computations

While readers might expect that the precise derivative information of nlsr::nlxb() would mean a faster solution, this is quite often not the case. Approximate derivatives may allow faster approach to the solution by "ironing out" wrinkles in the function surface. In my opinion, the main advantage of precise derivative information is in testing that we actually have arrived at a solution.

There are even some cases where the approximation may be helpful, though users may not realize the potential danger. Thanks to Karl Schilling for an example of modelling with the function

a * (x ^ b)

where x is our data and we wish to estimate a and b. Now the partial derivative of this function w.r.t. b is

partialderiv <- D(expression(a * (x ^ b)),"b")
print(partialderiv)

The danger here is that we may have data values x = 0, in which case the derivative is not defined, though the model can still be evaluated. Thus nlsr::nlxb() will not compute a solution, while nls() and minpack.lm::nlsLM() will generally proceed. A workaround is to provide a very small value instead of zero for the data, though I find this inelegant. Another approach is to drop the offending element of the data, though this risks altering the model estimated. A proper treatment might be to develop the limit of the derivative as the data value goes to zero, but finding general software that can detect and deal with this is a large project.
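
A quick check (my own addition, with illustrative values a = 1 and b = 0.5) of this behaviour, using the derivative expression partialderiv computed above: at x = 0 the model evaluates cleanly, but the derivative involves log(x) and evaluates to NaN.

a <- 1; b <- 0.5; x <- 0
eval(expression(a * (x^b))) #- the model itself evaluates to 0
eval(partialderiv) #- the x^b * log(x) factor gives NaN when x = 0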

Timing comparisons

Let us compare timings on the (scaled) Hobbs weed problem introduced in Section ??.

require(microbenchmark)
## nls on Hobbs scaled model
weed <- c(5.308, 7.24, 9.638, 12.866, 17.069, 23.192, 31.443,
          38.558, 50.156, 62.948, 75.995, 91.972)
tt <- 1:12
weeddf <- data.frame(y=weed, tt=tt)
wmods <- y ~ 100*b1/(1+10*b2*exp(-0.1*b3*tt))
stx<-c(b1=2, b2=5, b3=3)
tnls<-microbenchmark((anls<-nls(wmods, start=stx, data=weeddf)), unit="us")
tnls
## nlsr::nlxb() on Hobbs scaled model
tnlxb<-microbenchmark((anlxb<-nlsr::nlxb(wmods, start=stx, data=weeddf)), unit="us")
tnlxb
## minpack.lm::nlsLM() on Hobbs scaled model
tnlsLM<-microbenchmark((anlsLM<-minpack.lm::nlsLM(start=stx, formula=wmods, data=weeddf)), 
                       unit="us")
tnlsLM

Programmatic modelling functions

A consequence of the symbolic derivative approach in nlsr::nlxb() is that it cannot be applied to a modelling expression that includes an R function, i.e., a sub-program.

This limitation could be overcome if there were appropriate automatic differentiation code (to provide derivative computations based on transformation of the modelling function's programmatic form) or a mechanism to specify a form of numerical approximation. As of December 2020, it seems more likely that the latter approach will be realized first, and it is one of the near-term development goals.

Functional expression of residuals and Jacobian

require(microbenchmark)
## nlsr::nlfb() on Hobbs scaled
# Scaled Hobbs problem
shobbs.res  <-  function(x){ # scaled Hobbs weeds problem -- residual
  # This variant uses looping
  if(length(x) != 3) stop("shobbs.res -- parameter vector n!=3")
  y  <-  c(5.308, 7.24, 9.638, 12.866, 17.069, 23.192, 31.443, 
           38.558, 50.156, 62.948, 75.995, 91.972)
  tt  <-  1:12
  res  <-  100.0*x[1]/(1+x[2]*10.*exp(-0.1*x[3]*tt)) - y
}

shobbs.jac  <-  function(x) { # scaled Hobbs weeds problem -- Jacobian
  jj  <-  matrix(0.0, 12, 3)
  tt  <-  1:12
  yy  <-  exp(-0.1*x[3]*tt)
  zz  <-  100.0/(1+10.*x[2]*yy)
  jj[tt,1]   <-   zz
  jj[tt,2]   <-   -0.1*x[1]*zz*zz*yy
  jj[tt,3]   <-   0.01*x[1]*zz*zz*yy*x[2]*tt
  attr(jj, "gradient") <- jj
  jj
}

st1 <- c(b1=1, b2=1, b3=1)
tnlfb<-microbenchmark((anlfb<-nlsr::nlfb(start=st1, resfn=shobbs.res, jacfn=shobbs.jac)), unit="us")
tnlfb
## minpack.lm::nls.lm() on Hobbs scaled
tnls.lm<-microbenchmark((anls.lm<-minpack.lm::nls.lm(par=st1, fn=shobbs.res, jac=shobbs.jac)))
tnls.lm

Marquardt stabilization

All three of the R functions under consideration try to minimize a sum of squares. If the model is provided in the form

y ~ (some expression)

then the residuals are computed by evaluating the difference between (some expression) and y. My own preference, and that of K F Gauss, is to use (some expression) - y. This is to avoid having to be concerned with the negative sign -- the derivative of the residual defined in this way is the same as the derivative of the modelling function, and we avoid the chance of a sign error. The Jacobian matrix is made up of elements where element i, j is the partial derivative of residual i w.r.t. parameter j.

nls() attempts to minimize a sum of squared residuals by a Gauss-Newton method. If we compute a Jacobian matrix J and a vector of residuals r from a vector of parameters x, then we can define a linearized problem

$$ J^T J \delta = - J^T r $$

This leads to an iteration where, from a set of starting parameters x0, we compute

$$ x_{i+1} = x_i + \delta$$

This is commonly modified to use a step factor step

$$ x_{i+1} = x_i + step * \delta$$

It is in the mechanisms to choose the size of step and to decide when to terminate the iteration that Gauss-Newton methods differ. Indeed, though I have tried several times, I find the very convoluted code behind nls() very difficult to decipher. Unfortunately, its authors have (as of 2018, as far as I am aware) all ceased to maintain the code.

Both nlsr::nlxb() and minpack.lm::nlsLM use a Levenberg-Marquardt stabilization of the iteration. (@Marquardt1963, @Levenberg1944), solving

$$ (J^T J + \lambda D) \delta = - J^T r $$

where $D$ is some diagonal matrix and $\lambda$ is a number of modest size initially. Clearly for $\lambda = 0$ we have a Gauss-Newton method. Typically, the sum of squares of the residuals calculated at the "new" set of parameters is used as a criterion for keeping those parameter values. If so, the size of $\lambda$ is reduced. If not, we increase the size of $\lambda$ and compute a new $\delta$. Note that a new $J$, the expensive step in each iteration, is NOT required.

As for Gauss-Newton methods, the details of how to start, adjust and terminate the iteration lead to many variants, increased by different possibilities for specifying $D$. See @jncnm79. There are also a number of ways to solve the stabilized Gauss-Newton equations, some of which do not require the explicit $J^T J$ matrix.
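
To make the algebra concrete, here is a minimal sketch (my own, using the notation above and the common choice of D as the diagonal of $J^T J$) of how a single Marquardt step might be computed. Production codes such as those in nlsr and minpack.lm use more careful linear algebra, some of it avoiding the explicit formation of $J^T J$.

lm_step <- function(J, r, lambda) {
  JTJ <- crossprod(J) #- J^T J
  Dmat <- diag(diag(JTJ), nrow(JTJ)) #- diagonal matrix D built from diag(J^T J)
  solve(JTJ + lambda * Dmat, -crossprod(J, r)) #- delta solving (J^T J + lambda D) delta = -J^T r
}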

Criterion used to terminate the iteration

nls() and nlsr use a form of the relative offset convergence criterion, @BatesWatts81. minpack.lm uses a somewhat different and more complicated set of tests. Unfortunately, the relative offset criterion as implemented in nls() is unsuited to problems where the residuals can be zero. There are ways to work around the difficulties, and nlsr has used one approach. See An illustrative nonlinear regression problem below.

Output of the modelling functions

nls() and nlsLM() return the same solution structure. Let us examine this for one of our example results (we will choose one that does NOT have small residuals, so that all the functions "work").

# Here we set up an example problem with data
# Define independent variable
t0 <- 0:19
t0a<-t0
t0a[1]<-1e-20 # very small value
# Drop first value in vectors
t0t<-t0[-1]
y1 <- 4 * (t0^0.25)
y1t<-y1[-1]
n <- length(t0)
fuzz <- rnorm(n)
range <- max(y1)-min(y1)
## add some "error" to the dependent variable
y1q <- y1 + 0.2*range*fuzz
edta <- data.frame(t0=t0, t0a=t0a, y1=y1, y1q=y1q)
edtat <- data.frame(t0t=t0t, y1t=y1t)

start1 <- c(a=1, b=1)
try(nlsy0t0ax <- nls(formula=y1~a*(t0a^b), start=start1, data=edta, control=nls.control(maxiter=10000)))
nlsry1t0a <- nlxb(formula=y1~a*(t0a^b), start=start1, data=edta)
library(minpack.lm)
nlsLMy1t0a <- nlsLM(formula=y1~a*(t0a^b), start=start1, data=edta)
str(nlsy0t0ax)

The minpack.lm::nlsLM output has the same structure, which could be revealed by the R command str(nlsLMy1t0a). Note that this structure has a lot of special functions in the sub-list m. By contrast, the output of nlsr's nlxb() is much less flamboyant. There are, in fact, no functions as part of the structure.

str(nlsry1t0a)

Which of these approaches is "better" can be debated. My preference is for the results of optimization computations to be essentially data, including messages, though some tools within some of my packages will return functions for specific reasons, e.g., to return a function from an expression. However, I prefer to use specified functions such as predict.nlsr() below to obtain predictions. I welcome comment and discussion, as this is not, in my view, a closed topic.

Prediction

Let us predict our models at the mean of the data. Because nlxb() returns a different structure from that found by nls() and nlsLM(), the code for predict() for an object from nlsr is different. minpack.lm uses predict.nls since the output structure of the modelling step is equivalent to that from nls().

nudta <- colMeans(edta)
predict(nlsy0t0ax, newdata=nudta)
predict(nlsLMy1t0a, newdata=nudta)
predict(nlsry1t0a, newdata=nudta)

An illustrative nonlinear regression problem

So we can illustrate some of the issues, let us create some example data for a seemingly straightforward computational problem.

# Here we set up an example problem with data
# Define independent variable
t0 <- 0:19
t0a<-t0
t0a[1]<-1e-20 # very small value
# Drop first value in vectors
t0t<-t0[-1]
y1 <- 4 * (t0^0.25)
y1t<-y1[-1]
n <- length(t0)
fuzz <- rnorm(n)
range <- max(y1)-min(y1)
## add some "error" to the dependent variable
y1q <- y1 + 0.2*range*fuzz
edta <- data.frame(t0=t0, t0a=t0a, y1=y1, y1q=y1q)
edtat <- data.frame(t0t=t0t, y1t=y1t)

Let us try this example modelling y1 against t0. Note that this is a zero-residual problem, so nls() should complain or fail, which it appears to do, but by exceeding the iteration limit, which is not very communicative of the underlying issue. The nls() documentation warns

"Warning

Do not use nls on artificial "zero-residual" data."

It goes on to recommend that users add "error" to the data to avoid such problems. I feel this is a very unsatisfactory kludge. It is NOT due to a genuine mathematical issue, but to the relative offset convergence criterion used to terminate the method. In October 2020, I suggested a patch for nls() to R-core that it seems will become part of base R eventually. This patch allows the user to specify a parameter, tentatively named convTestAdd, with a zero default value, in nls.control(). (This allows existing examples using nls() to function without change.) A small positive value for this control parameter avoids a zero-divided-by-zero issue in the relative offset convergence test in nls() used to terminate iterations. This adjustment to the convergence test has been in nlsr since its creation.

Here is the output.

nls

cprint <- function(obj){
   # print object if it exists
  sobj<-deparse(substitute(obj))
  if (exists(sobj)) {
      print(obj)
  } else {
      cat(sobj," does not exist\n")
  }
#  return(NULL)
}
start1 <- c(a=1, b=1)
try(nlsy0t0 <- nls(formula=y1~a*(t0^b), start=start1, data=edta))
cprint(nlsy0t0)
# Since this fails to converge, let us increase the maximum iterations
try(nlsy0t0x <- nls(formula=y1~a*(t0^b), start=start1, data=edta,
                    control=nls.control(maxiter=10000)))
cprint(nlsy0t0x)
try(nlsy0t0ax <- nls(formula=y1~a*(t0a^b), start=start1, data=edta, 
                     control=nls.control(maxiter=10000)))
cprint(nlsy0t0ax)
try(nlsy0t0t <- nls(formula=y1t~a*(t0t^b), start=start1, data=edtat))
cprint(nlsy0t0t)

nlsr

nlsry1t0 <- try(nlxb(formula=y1~a*(t0^b), start=start1, data=edta))
cprint(nlsry1t0)
nlsry1t0a <- nlxb(formula=y1~a*(t0a^b), start=start1, data=edta)
cprint(nlsry1t0a)
nlsry1t0t <- nlxb(formula=y1t~a*(t0t^b), start=start1, data=edtat)
cprint(nlsry1t0t)

minpack.lm

library(minpack.lm)
nlsLMy1t0 <- nlsLM(formula=y1~a*(t0^b), start=start1, data=edta)
nlsLMy1t0
nlsLMy1t0a <- nlsLM(formula=y1~a*(t0a^b), start=start1, data=edta)
nlsLMy1t0a
nlsLMy1t0t <- nlsLM(formula=y1t~a*(t0t^b), start=start1, data=edtat)
nlsLMy1t0t

We have seemingly found a workaround for our difficulty, but I caution that initially I found very unsatisfactory results when I set the "very small value" to 1.0e-7. The correct approach is clearly to understand what is going on. Getting computers to provide that understanding is a serious challenge.

Problems that are NOT regressions

Some nonlinear least squares problems are NOT nonlinear regressions. That is, we do not have a formula y ~ (some function) to define the problem. This is another reason to use the residual in the form (some function) - y. In many cases of interest we have no y.

The Brown and Dennis test problem (@More1981TUO, problem 16) is of this form. Suppose we have m observations; then we create a scaled index t which is the "data" for the function. To run the nonlinear least squares functions that use a formula, we do, however, need a "y" variable. Clearly adding zero to the residual will not change the problem, so we set the data for "y" to all zeros. Note that nls() and nlsLM() need some extra iterations to find the solution to this somewhat nasty problem.

m <- 20
t <- seq(1, m) / 5
y <- rep(0,m)
library(nlsr)
library(minpack.lm)

bddata <- data.frame(t=t, y=y)
bdform <- y ~ ((x1 + t * x2 - exp(t))^2 + (x3 + x4 * sin(t) - cos(t))^2)
prm0 <- c(x1=25, x2=5, x3=-5, x4=-1)
fbd <-model2ssgrfun(bdform, prm0, bddata)
cat("initial sumsquares=",as.numeric(crossprod(fbd(prm0))),"\n")
nlsrbd <- nlxb(bdform, start=prm0, data=bddata, trace=FALSE)
nlsrbd

nlsbd10k <- nls(bdform, start=prm0, data=bddata, trace=FALSE, 
                control=nls.control(maxiter=10000))
nlsbd10k

nlsLMbd10k <- nlsLM(bdform, start=prm0, data=bddata, trace=FALSE, 
                    control=nls.lm.control(maxiter=10000, maxfev=10000))
nlsLMbd10k

Let us try predicting the "residual" for some new data.

ndata <- data.frame(t=c(5,6), y=c(0,0))
predict(nlsLMbd10k, newdata=ndata)

# now nls
predict(nlsbd10k, newdata=ndata)

# now nlsr
predict(nlsrbd, newdata=ndata)

We could, of course, try setting up a different formula, since the "residuals" can be computed in any way such that their absolute value is the same. Therefore we could try moving the exponential part of the function for each equation to the left hand side as in bdf2 below.

bdf2 <-  (x1 + t * x2 - exp(t))^2 ~ - (x3 + x4 * sin(t) - cos(t))^2

However, we discover that the parsing of the model formula fails for this formulation.

A check on the Brown and Dennis calculation via function minimization

We can attack the Brown and Dennis problem by applying nonlinear function minimization programs to the sum of squared "residuals" as a function of the parameters. The code below does this. We omit the output for space reasons.

#' Brown and Dennis Function
#'
#' Test function 16 from the More', Garbow and Hillstrom paper.
#'
#' The objective function is the sum of \code{m} functions, each of \code{n}
#' parameters.
#'
#' \itemize{
#'   \item Dimensions: Number of parameters \code{n = 4}, number of summand
#'   functions \code{m >= n}.
#'   \item Minima: \code{f = 85822.2} if \code{m = 20}.
#' }
#'
#' @param m Number of summand functions in the objective function. Should be
#'   equal to or greater than 4.
#' @return A list containing:
#' \itemize{
#'   \item \code{fn} Objective function which calculates the value given input
#'   parameter vector.
#'   \item \code{gr} Gradient function which calculates the gradient vector
#'   given input parameter vector.
#'   \item \code{fg} A function which, given the parameter vector, calculates
#'   both the objective value and gradient, returning a list with members
#'   \code{fn} and \code{gr}, respectively.
#'   \item \code{x0} Standard starting point.
#' }
#' @references
#' More', J. J., Garbow, B. S., & Hillstrom, K. E. (1981).
#' Testing unconstrained optimization software.
#' \emph{ACM Transactions on Mathematical Software (TOMS)}, \emph{7}(1), 17-41.
#' \url{https://doi.org/10.1145/355934.355936}
#'
#' Brown, K. M., & Dennis, J. E. (1971).
#' \emph{New computational algorithms for minimizing a sum of squares of
#' nonlinear functions} (Report No. 71-6).
#' New Haven, CT: Department of Computer Science, Yale University.
#'
#' @examples
#' # Use 10 summand functions
#' fun <- brown_den(m = 10)
#' # Optimize using the standard starting point
#' x0 <- fun$x0
#' res_x0 <- stats::optim(par = x0, fn = fun$fn, gr = fun$gr, method =
#' "L-BFGS-B")
#' # Use your own starting point
#' res <- stats::optim(c(0.1, 0.2, 0.3, 0.4), fun$fn, fun$gr, method =
#' "L-BFGS-B")
#'
#' # Use 20 summand functions
#' fun20 <- brown_den(m = 20)
#' res <- stats::optim(fun20$x0, fun20$fn, fun20$gr, method = "L-BFGS-B")
#' @export
#`
brown_den <- function(m = 20) {
  list(
    fn = function(par) {
      x1 <- par[1]
      x2 <- par[2]
      x3 <- par[3]
      x4 <- par[4]

      ti <- (1:m) * 0.2
      l <- x1 + ti * x2 - exp(ti)
      r <- x3 + x4 * sin(ti) - cos(ti)
      f <- l * l + r * r
      sum(f * f)
    },
    gr = function(par) {
      x1 <- par[1]
      x2 <- par[2]
      x3 <- par[3]
      x4 <- par[4]

      ti <- (1:m) * 0.2
      sinti <- sin(ti)
      l <- x1 + ti * x2 - exp(ti)
      r <- x3 + x4 * sinti - cos(ti)
      f <- l * l + r * r
      lf4 <- 4 * l * f
      rf4 <- 4 * r * f
      c(
        sum(lf4),
        sum(lf4 * ti),
        sum(rf4),
        sum(rf4 * sinti)
      )
    },
    fg = function(par) {
      x1 <- par[1]
      x2 <- par[2]
      x3 <- par[3]
      x4 <- par[4]

      ti <- (1:m) * 0.2
      sinti <- sin(ti)
      l <- x1 + ti * x2 - exp(ti)
      r <- x3 + x4 * sinti - cos(ti)
      f <- l * l + r * r
      lf4 <- 4 * l * f
      rf4 <- 4 * r * f

      fsum <- sum(f * f)
      grad <- c(
        sum(lf4),
        sum(lf4 * ti),
        sum(rf4),
        sum(rf4 * sinti)
      )

      list(
        fn = fsum,
        gr = grad
      )
    },
    x0 = c(25, 5, -5, -1)
  )
}
mbd <- brown_den(m=20)
mbd
mbd$fg(mbd$x0)
bdsolnm <- optim(mbd$x0, mbd$fn, control=list(trace=0))
bdsolnm
bdsolbfgs <- optim(mbd$x0, mbd$fn, method="BFGS", control=list(trace=0))
bdsolbfgs

library(optimx)
methlist <- c("Nelder-Mead","BFGS","Rvmmin","L-BFGS-B","Rcgmin","ucminf")

solo <- opm(mbd$x0, mbd$fn, mbd$gr, method=methlist, control=list(trace=0))
summary(solo, order=value)

## A failure above is generally because a package in the 'methlist' is not installed.

\pagebreak

References


