lbfgsb3c: Using the 2011 version of L-BFGS-B

knitr::opts_chunk$set(echo = TRUE)

Abstract

In 2011 the authors of the L-BFGS-B program published a correction and update to their 1995 code. The 1995 code is the basis of the L-BFGS-B method of the optim() function in base R. The package lbfgsb3 wrapped the updated Fortran code using a .Fortran call after removing a very large number of Fortran output statements. Matthew Fidler used this Fortran code and an Rcpp interface to produce package lbfgsb3c, where the function lbfgsb3c() returns an object similar to those of base-R optim() and of optimx::optimr(). Subsequently, in a fine example of the collaborations that have made R so useful, we have merged the functionality of package lbfgsb3 into lbfgsb3c, as explained in this vignette.

Provenance of the R optim::L-BFGS-B and related solvers

The base-R code lbfgsb.c (at this writing, in R-3.5.2/src/appl/) is commented:

/* l-bfgs-b.f -- translated by f2c (version 19991025).

  From ?optim:
  The code for method ‘"L-BFGS-B"’ is based on Fortran code by Zhu,
  Byrd, Lu-Chen and Nocedal obtained from Netlib (file 'opt/lbfgs_bcm.shar')

  The Fortran files contained no copyright information.

  Byrd, R. H., Lu, P., Nocedal, J. and Zhu, C.  (1995) A limited
  memory algorithm for bound constrained optimization.
  \emph{SIAM J. Scientific Computing}, \bold{16}, 1190--1208.
*/

The paper @Byrd95 builds on @Lu94limitedmemory. A number of other workers have followed up on this work, but R code and packages seem largely to have stayed with codes derived from these original papers. Though the paper is dated 1995, the ideas it embodies had been around for at least a decade and a half, in particular in @Nocedal80 and @LiuN89. The definitive Fortran code was published as @Zhu1997LBFGS and is available as toms/778.zip on www.netlib.org. A side-by-side comparison of the main subroutines in the two downloads from Netlib unfortunately shows a lot of differences. I have not tried to determine whether these affect performance or are simply cosmetic.

More seriously perhaps, there were some deficiencies in the code(s), and in 2011 Nocedal's team published a Fortran code with some corrections (@Morales2011). Since the R code predates these corrections, I prepared package lbfgsb3 (@lbfgsb3JN) to wrap the newer Fortran code. However, I did not discover any test cases where optim::L-BFGS-B and lbfgsb3 gave different results, though I confess to having run only limited tests. There are, in fact, more tests in this vignette.

In 2016, I was at a Fields Institute optimization conference in Toronto for the 70th birthday of Andy Conn. Though Nocedal did not attend the conference sessions, by sheer serendipity he sat down next to me at the conference dinner. When I asked him about the key changes in the code, he said that the most important one was to fix the computation of the machine precision, which was not always correct in the 1995 code. Since R obtains this number as .Machine$double.eps, the offending code is irrelevant to R users.
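For reference, the machine epsilon R reports is simply

.Machine$double.eps
## [1] 2.220446e-16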

@Morales2011 also reports an improvement in the subspace minimization that is applied when bounds constraints are active. Since few of the tests I have applied impose such constraints, it is unsurprising that I have not observed performance differences between the base-R optim code and my lbfgsb3 package. More appropriate tests are welcome, and on my agenda.

Besides the ACM TOMS code, there are two related codes from the Northwestern team on Netlib: http://netlib.org/opt/lbfgs_um.shar is for unconstrained minimization, while http://netlib.org/opt/lbfgs_bcm.shar handles bounds-constrained problems. To these are attached the references @LiuN89 and @Byrd1995 respectively, most likely reflecting the extra effort required to implement the constraints.

The unconstrained code has been converted to C under the leadership of Naoaki Okazaki (see http://www.chokkan.org/software/liblbfgs/, or the fork at https://github.com/MIRTK/LBFGS). This has been wrapped for R by @Coppola2014 as the lbfgs package, which can be called from optimx::optimr().
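For example, a call through optimr() might look like the following sketch (not run here; it assumes the optimx and lbfgs packages are installed, and fr is an illustrative objective, not part of either package):

## fr <- function(x) 100*(x[2] - x[1]^2)^2 + (1 - x[1])^2
## res <- optimx::optimr(c(-1.2, 1), fr, method="lbfgs")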

Using Rcpp (see @RCppDERF2011) and the Fortran code in package lbfgsb3, Matthew Fidler developed package lbfgsb3c (@lbfgsb3cMF). As this provides a more standard call and return than lbfgsb3, Fidler and I have unified the two packages as lbfgsb3c.

Functions in package lbfgsb3c

There is really only one optimizer function in the package, but it may be called by four (4) names: lbfgsb3c(), lbfgsb3(), lbfgsb3f(), and lbfgsb3x().

We recommend using the lbfgsb3c() call for most uses.
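For example, a minimal optim()-style call on a simple quadratic looks like the sketch below (sq.f and sq.g are illustrative names, not part of the package):

library(lbfgsb3c)
sq.f <- function(x) sum((x - 1)^2) # minimum at rep(1, length(x))
sq.g <- function(x) 2*(x - 1)      # analytic gradient
sol <- lbfgsb3c(rep(0, 5), sq.f, sq.g)
sol$par          # should be close to rep(1, 5)
sol$convergence  # 0 indicates successful convergence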

Comparison with optim::L-BFGS-B

The 2011 Fortran code claims better performance on bounds-constrained problems. Below we present two fairly simple tests, which unfortunately do not show any advantage. We welcome examples showing differences, in either direction. Note that we use the call that returns expanded information, but we do not interpret that information here. See the documentation in the source Fortran for an explanation of the data returned in the object info.

# ref BT.RES in Nash and Walker-Smith (1987)
library(lbfgsb3c)
sessionInfo()

bt.f<-function(x){
 sum(x*x)
}

bt.g<-function(x){
  gg<-2.0*x
  gg
}

bt.badsetup<-function(n){
   x<-rep(0,n)
   lo<-rep(0,n)
   up<-lo # to get arrays set
   bmsk<-rep(1,n)
   bmsk[(trunc(n/2)+1)]<-0
   for (i in 1:n) { 
      x[i]<-2.2*i-n
      lo[i]<-1.0*(i-1)*(n-1)/n
      up[i]<-1.0*i*(n+1)/n
   }
   result<-list(x=x, lower=lo, upper=up, bdmsk=bmsk)
}

bt.setup0<-function(n){
   x<-rep(0,n)
   lo<-rep(0,n)
   up<-lo # to get arrays set
   bmsk<-rep(1,n)
   bmsk[(trunc(n/2)+1)]<-0
   for (i in 1:n) { 
      lo[i]<-1.0*(i-1)*(n-1)/n
      up[i]<-1.0*i*(n+1)/n
   }
   x<-0.5*(lo+up)
   result<-list(x=x, lower=lo, upper=up, bdmsk=bmsk)
}
nn <- 4
baddy <- bt.badsetup(nn)
lo <- baddy$lower
up <- baddy$upper
x0 <- baddy$x
baddy
## optim()
solbad0 <- optim(x0, bt.f, bt.g, lower=lo, upper=up, method="L-BFGS-B", control=list(trace=3))
solbad0
## lbfgsb3c
solbad1 <- lbfgsb3(x0, bt.f, bt.g, lower=lo, upper=up, control=list(trace=3))
solbad1
## Possible timings
## library(microbenchmark)
## tbad0 <- microbenchmark(optim(x0, bt.f, bt.g, lower=lo, upper=up, method="L-BFGS-B"))
## t3c <- microbenchmark(lbfgsb3(x0, bt.f, bt.g, lower=lo, upper=up))
## Via optimx package
## library(optimx)
## meths <- c("L-BFGS-B", "lbfgsb3") # Note: lbfgsb3c not yet in optimx on CRAN
## allbt0 <- opm(x0, bt.f, bt.g, lower=lo, upper=up, method=meths)
## summary(allbt0, order=value)
# candlestick function
# J C Nash 2011-2-3
cstick.f<-function(x,alpha=100){
  x<-as.vector(x)
  r2<-crossprod(x)
  f<-as.double(r2+alpha/r2)
  return(f)
}

cstick.g<-function(x,alpha=100){
  x<-as.vector(x)
  r2<-as.numeric(crossprod(x))
  g1<-2*x
  g2 <- (-alpha)*2*x/(r2*r2)
  g<-as.double(g1+g2)
  return(g)
}
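As a quick sanity check on these definitions: writing r2 = crossprod(x), the unconstrained minimum of r2 + alpha/r2 occurs where r2 = sqrt(alpha), so for alpha = 100 any x with crossprod(x) equal to 10 should give f = 20 and a zero gradient.

## sanity check of cstick.f and cstick.g at an unconstrained minimizer
cstick.f(c(sqrt(5), sqrt(5)))  # expect 20
cstick.g(c(sqrt(5), sqrt(5)))  # expect c(0, 0)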
library(lbfgsb3c)
nn <- 2
x0 <- c(10,10)
lo <- c(1, 1)
up <- c(10,10)
print(x0)
c2o <- optim(x0, cstick.f, cstick.g, lower=lo, upper=up, method="L-BFGS-B", control=list(trace=3))
c2o
c2l <- lbfgsb3(x0, cstick.f, cstick.g, lower=lo, upper=up, control=list(trace=3))
c2l

## meths <- c("L-BFGS-B", "lbfgsb3c", "Rvmmin", "Rcgmin", "Rtnmin")
## require(optimx)

## cstick2a <- opm(x0, cstick.f, cstick.g, method=meths, upper=up, lower=lo, control=list(kkt=FALSE))
## print(summary(cstick2a, par.select=1:2, order=value))
lo <- c(4, 4)
c2ob <- optim(x0, cstick.f, cstick.g, lower=lo, upper=up, method="L-BFGS-B", control=list(trace=3))
c2ob
c2lb <- lbfgsb3(x0, cstick.f, cstick.g, lower=lo, upper=up, control=list(trace=3))
c2lb




## cstick2b <- opm(x0, cstick.f, cstick.g, method=meths, upper=up, lower=lo, control=list(kkt=FALSE))
## print(summary(cstick2b, par.select=1:2, order=value))

nn <- 100
x0 <- rep(10, nn)
up <- rep(10, nn)
lo <- rep(1e-4, nn)
cco <- optim(x0, cstick.f, cstick.g, lower=lo, upper=up, method="L-BFGS-B", control=list(trace=3))
cco
ccl <- lbfgsb3(x0, cstick.f, cstick.g, lower=lo, upper=up, control=list(trace=3))
ccl
## cstickc0 <- opm(x0, cstick.f, cstick.g, method=meths, upper=up, lower=lo, control=list(kkt=FALSE))
## print(summary(cstickc0, par.select=1:5, order=value))
## lo <- rep(1, nn)
## cstickca <- opm(x0, cstick.f, cstick.g, method=meths, upper=up, lower=lo, control=list(kkt=FALSE))
## print(summary(cstickca, par.select=1:5, order=value))
## lo <- rep(4, nn)
## cstickcb <- opm(x0, cstick.f, cstick.g, method=meths, upper=up, lower=lo, control=list(kkt=FALSE))
## print(summary(cstickcb, par.select=1:5, order=value))
# require(funconstrain) ## not in CRAN, so explicit inclusion of this function
# exrosen <- ex_rosen()
# exrosenf <- exrosen$fn
exrosenf <- function (par) {
    n <- length(par)
    if (n%%2 != 0) {
        stop("Extended Rosenbrock: n must be even")
    }
    fsum <- 0
    for (i in 1:(n/2)) {
        p2 <- 2 * i
        p1 <- p2 - 1
        f_p1 <- 10 * (par[p2] - par[p1]^2)
        f_p2 <- 1 - par[p1]
        fsum <- fsum + f_p1 * f_p1 + f_p2 * f_p2
    }
    fsum
}
# exroseng <- exrosen$gr
exroseng <- function (par) {
    n <- length(par)
    if (n%%2 != 0) {
        stop("Extended Rosenbrock: n must be even")
    }
    grad <- rep(0, n)
    for (i in 1:(n/2)) {
        p2 <- 2 * i
        p1 <- p2 - 1
        xx <- par[p1] * par[p1]
        yx <- par[p2] - xx
        f_p1 <- 10 * yx
        f_p2 <- 1 - par[p1]
        grad[p1] <- grad[p1] - 400 * par[p1] * yx - 2 * f_p2
        grad[p2] <- grad[p2] + 200 * yx
    }
    grad
}

exrosenx0 <- function (n = 20) {
    if (n%%2 != 0) {
        stop("Extended Rosenbrock: n must be even")
    }
    rep(c(-1.2, 1), n/2)
}
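Before comparing optimizers it is prudent to verify the analytic gradient against a numerical one; a small sketch (not run), assuming the numDeriv package is available:

## x.test <- exrosenx0(4)
## max(abs(exroseng(x.test) - numDeriv::grad(exrosenf, x.test))) # expect ~0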

## meths <- c("L-BFGS-B", "lbfgsb3c", "Rvmmin", "Rcgmin", "Rtnmin")
## require(optimx)
for (n in seq(2,12, by=2)) {
  cat("ex_rosen try for n=",n,"\n")
  x0 <- exrosenx0(n)
  lo <- rep(.5, n)
  up <- rep(3, n)
  print(x0)
  eo <- optim(x0, exrosenf, exroseng, lower=lo, upper=up, method="L-BFGS-B", control=list(trace=3))
  print(eo) # explicit print needed for display inside the loop
  el <- lbfgsb3(x0, exrosenf, exroseng, lower=lo, upper=up, control=list(trace=3))
  print(el)
##   erfg <- opm(x0, exrosenf, exroseng, method=meths, lower=lo, upper=up)
##   print(summary(erfg, par.select=1:2, order=value))
}

Using compiled function code

The following example shows how this is done, using the file jrosen.f:

       subroutine rosen(n, x, fval)
       double precision x(n), fval, dx
       integer n, i
       fval = 0.0d0
       do 10 i=1,(n-1)
          dx = x(i + 1) - x(i) * x(i)
          fval = fval + 100.0d0 * dx * dx
          dx = 1.0d0 - x(i)
          fval = fval + dx * dx
 10    continue
       return
       end

Here is the example script, which is run OUTSIDE the vignette builder in a temporary directory.

system("cd ~/temp")
system("R CMD SHLIB jrosen.f")
dyn.load("jrosen.so")
is.loaded("rosen")
x0 <- as.double(c(-1.2,1))
fv <- as.double(-999)
n <- as.double(2)
testf <- .Fortran("rosen", n=as.integer(n), x=as.double(x0), fval=as.double(fv))
testf

rrosen <- function(x) {
  n <- length(x) # use the length of x rather than relying on a global n
  fval <- 0.0
  for (i in 1:(n-1)) {
    dx <- x[i + 1] - x[i] * x[i]
    fval <- fval + 100.0 * dx * dx
    dx <- 1.0 - x[i]
    fval <- fval + dx * dx
  }
  fval
}
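## An analytic gradient could also be supplied for this chained
## Rosenbrock -- a sketch (our addition; rrosen.g is not part of the
## original script):
rrosen.g <- function(x) {
  n <- length(x)
  gg <- rep(0, n)
  for (i in 1:(n-1)) {
    dx <- x[i + 1] - x[i] * x[i]
    gg[i] <- gg[i] - 400.0 * x[i] * dx - 2.0 * (1.0 - x[i])
    gg[i + 1] <- gg[i + 1] + 200.0 * dx
  }
  gg
}
## e.g., myopRg <- lbfgsb3c(x0, rrosen, gr = rrosen.g)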

(rrosen(x0))

frosen <- function(x){
    nn <- length(x)
    if (nn > 100) { stop("max number of parameters is 100")}
    fv <- -999.0
    val <- .Fortran("rosen", n=as.integer(nn), x=as.double(x), fval=as.double(fv))
    val$fval # NOTE--need ONLY function value returned
}
# Test the function
tval <- frosen(x0)
str(tval)

mynm <- optim(x0, rrosen, control=list(trace=1))
mynm
mynmf <- optim(x0, frosen, control=list(trace=1))
mynmf

library(lbfgsb3c)
cat("try min")
myopR <- lbfgsb3c(x0, rrosen, gr=NULL, control=list(trace=1))
myopR
myop <- lbfgsb3c(x0, frosen, gr=NULL, control=list(trace=1))
myop

The output is as follows.

> system("cd ~/temp")

> system("R CMD SHLIB jrosen.f")

> dyn.load("jrosen.so")

> is.loaded("rosen")
[1] TRUE

> x0 <- as.double(c(-1.2,1))

> fv <- as.double(-999)

> n <- as.double(2)

> testf <- .Fortran("rosen", n=as.integer(n), x=as.double(x0), fval=as.double(fv))

> testf
$n
[1] 2

$x
[1] -1.2  1.0

$fval
[1] 24.2


> rrosen <- function(x) {
+   fval <- 0.0
+   for (i in 1:(n-1)) {
+     dx <- x[i + 1] - x[i] * x[i]
+     fval <- fval + 100.0 * dx * dx
+     dx <- .... [TRUNCATED] 

> tt <-rrosen(x0)

> str(tt)
 num 24.2

> frosen <- function(x){
+     nn <- length(x)
+     if (nn > 100) { stop("max number of parameters is 100")}
+     fv <- -999.0
+     val <- .Fortran .... [TRUNCATED] 

> tval <- frosen(x0)

> str(tval)
 num 24.2

> mynm <- optim(x0, rrosen, control=list(trace=1))
  Nelder-Mead direct search function minimizer
function value for initial parameters = 24.200000
  Scaled convergence tolerance is 3.60608e-07
Stepsize computed as 0.120000
BUILD              3 24.200000 7.095296
REFLECTION         5 15.080000 4.541696
REFLECTION         7 7.095296 4.456256
HI-REDUCTION       9 4.728125 4.456256
HI-REDUCTION      11 4.541696 4.210000
LO-REDUCTION      13 4.456256 4.178989
LO-REDUCTION      15 4.210000 4.070813
HI-REDUCTION      17 4.178989 4.039810
HI-REDUCTION      19 4.070813 4.009379
HI-REDUCTION      21 4.039810 4.009379
REFLECTION        23 4.020798 3.993600
HI-REDUCTION      25 4.009379 3.993115
EXTENSION         27 3.993600 3.971089
HI-REDUCTION      29 3.993115 3.971089
EXTENSION         31 3.980807 3.932179
LO-REDUCTION      33 3.971089 3.932179
EXTENSION         35 3.935345 3.849794
LO-REDUCTION      37 3.932179 3.849794
EXTENSION         39 3.855730 3.684292
LO-REDUCTION      41 3.849794 3.684292
EXTENSION         43 3.699672 3.413447
EXTENSION         45 3.684292 3.229015
LO-REDUCTION      47 3.413447 3.229015
REFLECTION        49 3.230691 3.103710
EXTENSION         51 3.229015 3.003490
REFLECTION        53 3.103710 2.942017
EXTENSION         55 3.003490 2.708997
LO-REDUCTION      57 2.942017 2.708997
EXTENSION         59 2.713454 2.299507
HI-REDUCTION      61 2.708997 2.299507
REFLECTION        63 2.531644 2.205440
EXTENSION         65 2.299507 1.918698
HI-REDUCTION      67 2.205440 1.918698
EXTENSION         69 2.090865 1.646608
HI-REDUCTION      71 1.918698 1.646608
REFLECTION        73 1.846675 1.599128
REFLECTION        75 1.646608 1.512003
REFLECTION        77 1.599128 1.398251
LO-REDUCTION      79 1.512003 1.396387
REFLECTION        81 1.398251 1.343582
REFLECTION        83 1.396387 1.300594
LO-REDUCTION      85 1.343582 1.257303
HI-REDUCTION      87 1.300594 1.257303
HI-REDUCTION      89 1.269330 1.242346
EXTENSION         91 1.257303 1.199712
HI-REDUCTION      93 1.242346 1.199712
EXTENSION         95 1.226601 1.157795
EXTENSION         97 1.199712 1.100429
EXTENSION         99 1.157795 0.980250
LO-REDUCTION     101 1.100429 0.980250
EXTENSION        103 0.998877 0.807009
LO-REDUCTION     105 0.980250 0.807009
EXTENSION        107 0.853222 0.586726
LO-REDUCTION     109 0.807009 0.586726
HI-REDUCTION     111 0.689741 0.586726
LO-REDUCTION     113 0.656247 0.586726
EXTENSION        115 0.622985 0.558089
EXTENSION        117 0.586726 0.448731
LO-REDUCTION     119 0.558089 0.448731
EXTENSION        121 0.499381 0.340534
LO-REDUCTION     123 0.448731 0.340534
EXTENSION        125 0.377625 0.243089
REFLECTION       127 0.340534 0.226575
REFLECTION       129 0.243089 0.180213
HI-REDUCTION     131 0.226575 0.180213
EXTENSION        133 0.204666 0.123935
HI-REDUCTION     135 0.180213 0.123935
LO-REDUCTION     137 0.164902 0.123935
REFLECTION       139 0.126595 0.088760
HI-REDUCTION     141 0.123935 0.088760
EXTENSION        143 0.109726 0.075099
EXTENSION        145 0.088760 0.050955
EXTENSION        147 0.075099 0.022726
HI-REDUCTION     149 0.050955 0.022726
LO-REDUCTION     151 0.038467 0.017697
HI-REDUCTION     153 0.022726 0.017697
HI-REDUCTION     155 0.022600 0.013923
REFLECTION       157 0.017697 0.008524
HI-REDUCTION     159 0.013923 0.008524
EXTENSION        161 0.012718 0.008024
EXTENSION        163 0.008524 0.002530
EXTENSION        165 0.008024 0.000463
HI-REDUCTION     167 0.002530 0.000463
HI-REDUCTION     169 0.002405 0.000351
HI-REDUCTION     171 0.000710 0.000351
HI-REDUCTION     173 0.000463 0.000183
HI-REDUCTION     175 0.000351 0.000044
HI-REDUCTION     177 0.000183 0.000044
LO-REDUCTION     179 0.000082 0.000002
HI-REDUCTION     181 0.000044 0.000002
HI-REDUCTION     183 0.000012 0.000002
HI-REDUCTION     185 0.000009 0.000002
HI-REDUCTION     187 0.000002 0.000002
HI-REDUCTION     189 0.000002 0.000000
HI-REDUCTION     191 0.000002 0.000000
LO-REDUCTION     193 0.000001 0.000000
Exiting from Nelder Mead minimizer
    195 function evaluations used

> mynm
$par
[1] 1.000260 1.000506

$value
[1] 8.825241e-08

$counts
function gradient 
     195       NA 

$convergence
[1] 0

$message
NULL


> mynmf <- optim(x0, frosen, control=list(trace=1))
  Nelder-Mead direct search function minimizer
function value for initial parameters = 24.200000
  Scaled convergence tolerance is 3.60608e-07
Stepsize computed as 0.120000
BUILD              3 24.200000 7.095296
REFLECTION         5 15.080000 4.541696
REFLECTION         7 7.095296 4.456256
HI-REDUCTION       9 4.728125 4.456256
HI-REDUCTION      11 4.541696 4.210000
LO-REDUCTION      13 4.456256 4.178989
LO-REDUCTION      15 4.210000 4.070813
HI-REDUCTION      17 4.178989 4.039810
HI-REDUCTION      19 4.070813 4.009379
HI-REDUCTION      21 4.039810 4.009379
REFLECTION        23 4.020798 3.993600
HI-REDUCTION      25 4.009379 3.993115
EXTENSION         27 3.993600 3.971089
HI-REDUCTION      29 3.993115 3.971089
EXTENSION         31 3.980807 3.932179
LO-REDUCTION      33 3.971089 3.932179
EXTENSION         35 3.935345 3.849794
LO-REDUCTION      37 3.932179 3.849794
EXTENSION         39 3.855730 3.684292
LO-REDUCTION      41 3.849794 3.684292
EXTENSION         43 3.699672 3.413447
EXTENSION         45 3.684292 3.229015
LO-REDUCTION      47 3.413447 3.229015
REFLECTION        49 3.230691 3.103710
EXTENSION         51 3.229015 3.003490
REFLECTION        53 3.103710 2.942017
EXTENSION         55 3.003490 2.708997
LO-REDUCTION      57 2.942017 2.708997
EXTENSION         59 2.713454 2.299507
HI-REDUCTION      61 2.708997 2.299507
REFLECTION        63 2.531644 2.205440
EXTENSION         65 2.299507 1.918698
HI-REDUCTION      67 2.205440 1.918698
EXTENSION         69 2.090865 1.646608
HI-REDUCTION      71 1.918698 1.646608
REFLECTION        73 1.846675 1.599128
REFLECTION        75 1.646608 1.512003
REFLECTION        77 1.599128 1.398251
LO-REDUCTION      79 1.512003 1.396387
REFLECTION        81 1.398251 1.343582
REFLECTION        83 1.396387 1.300594
LO-REDUCTION      85 1.343582 1.257303
HI-REDUCTION      87 1.300594 1.257303
HI-REDUCTION      89 1.269330 1.242346
EXTENSION         91 1.257303 1.199712
HI-REDUCTION      93 1.242346 1.199712
EXTENSION         95 1.226601 1.157795
EXTENSION         97 1.199712 1.100429
EXTENSION         99 1.157795 0.980250
LO-REDUCTION     101 1.100429 0.980250
EXTENSION        103 0.998877 0.807009
LO-REDUCTION     105 0.980250 0.807009
EXTENSION        107 0.853222 0.586726
LO-REDUCTION     109 0.807009 0.586726
HI-REDUCTION     111 0.689741 0.586726
LO-REDUCTION     113 0.656247 0.586726
EXTENSION        115 0.622985 0.558089
EXTENSION        117 0.586726 0.448731
LO-REDUCTION     119 0.558089 0.448731
EXTENSION        121 0.499381 0.340534
LO-REDUCTION     123 0.448731 0.340534
EXTENSION        125 0.377625 0.243089
REFLECTION       127 0.340534 0.226575
REFLECTION       129 0.243089 0.180213
HI-REDUCTION     131 0.226575 0.180213
EXTENSION        133 0.204666 0.123935
HI-REDUCTION     135 0.180213 0.123935
LO-REDUCTION     137 0.164902 0.123935
REFLECTION       139 0.126595 0.088760
HI-REDUCTION     141 0.123935 0.088760
EXTENSION        143 0.109726 0.075099
EXTENSION        145 0.088760 0.050955
EXTENSION        147 0.075099 0.022726
HI-REDUCTION     149 0.050955 0.022726
LO-REDUCTION     151 0.038467 0.017697
HI-REDUCTION     153 0.022726 0.017697
HI-REDUCTION     155 0.022600 0.013923
REFLECTION       157 0.017697 0.008524
HI-REDUCTION     159 0.013923 0.008524
EXTENSION        161 0.012718 0.008024
EXTENSION        163 0.008524 0.002530
EXTENSION        165 0.008024 0.000463
HI-REDUCTION     167 0.002530 0.000463
HI-REDUCTION     169 0.002405 0.000351
HI-REDUCTION     171 0.000710 0.000351
HI-REDUCTION     173 0.000463 0.000183
HI-REDUCTION     175 0.000351 0.000044
HI-REDUCTION     177 0.000183 0.000044
LO-REDUCTION     179 0.000082 0.000002
HI-REDUCTION     181 0.000044 0.000002
HI-REDUCTION     183 0.000012 0.000002
HI-REDUCTION     185 0.000009 0.000002
HI-REDUCTION     187 0.000002 0.000002
HI-REDUCTION     189 0.000002 0.000000
HI-REDUCTION     191 0.000002 0.000000
LO-REDUCTION     193 0.000001 0.000000
Exiting from Nelder Mead minimizer
    195 function evaluations used

> mynmf
$par
[1] 1.000260 1.000506

$value
[1] 8.825241e-08

$counts
function gradient 
     195       NA 

$convergence
[1] 0

$message
NULL


> library(lbfgsb3c)

> cat("try min")
try min
> myopR <- lbfgsb3c(x0, rrosen, gr=NULL, control=list(trace=1))
At iteration 0 f=24.200000 
At iteration 2 f=171.335959 
At iteration 3 f=4.225209 
At iteration 4 f=4.127276 
At iteration 5 f=4.120517 
At iteration 6 f=4.115575 
At iteration 7 f=4.087545 
At iteration 8 f=4.032007 
At iteration 9 f=3.917829 
At iteration 10 f=3.775253 
At iteration 11 f=3.451302 
At iteration 12 f=2.743645 
At iteration 13 f=7.136647 
At iteration 14 f=2.314771 
At iteration 15 f=4.974153 
At iteration 16 f=2.206918 
At iteration 17 f=2.093263 
At iteration 18 f=1.834870 
At iteration 19 f=1.457101 
At iteration 20 f=1.332676 
At iteration 21 f=1.303252 
At iteration 22 f=0.958191 
At iteration 23 f=0.874145 
At iteration 24 f=0.635020 
At iteration 25 f=0.610113 
At iteration 26 f=105.549878 
At iteration 27 f=0.546422 
At iteration 28 f=0.517374 
At iteration 29 f=0.498255 
At iteration 30 f=0.468898 
At iteration 31 f=0.405481 
At iteration 32 f=0.290616 
At iteration 33 f=0.566440 
At iteration 34 f=0.229204 
At iteration 35 f=0.172410 
At iteration 36 f=0.101578 
At iteration 37 f=0.066809 
At iteration 38 f=0.047129 
At iteration 39 f=0.022114 
At iteration 40 f=0.010238 
At iteration 41 f=0.006164 
At iteration 42 f=0.001247 
At iteration 43 f=0.000193 
At iteration 44 f=0.000044 
At iteration 45 f=0.000001 
At iteration 46 f=0.000000 
At iteration 47 f=0.000000 

> myopR
$par
[1] 0.9999997 0.9999995

$grad
[1] -3.276165e-05  1.607110e-05

$value
[1] 7.416322e-13

$counts
[1] 47 47

$convergence
[1] 0

$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH"


> myop <- lbfgsb3c(x0, frosen, gr=NULL, control=list(trace=1))
At iteration 0 f=24.200000 
At iteration 2 f=171.335959 
At iteration 3 f=4.225209 
At iteration 4 f=4.127276 
At iteration 5 f=4.120517 
At iteration 6 f=4.115575 
At iteration 7 f=4.087545 
At iteration 8 f=4.032007 
At iteration 9 f=3.917829 
At iteration 10 f=3.775253 
At iteration 11 f=3.451302 
At iteration 12 f=2.743645 
At iteration 13 f=7.136647 
At iteration 14 f=2.314771 
At iteration 15 f=4.974153 
At iteration 16 f=2.206918 
At iteration 17 f=2.093263 
At iteration 18 f=1.834870 
At iteration 19 f=1.457101 
At iteration 20 f=1.332676 
At iteration 21 f=1.303252 
At iteration 22 f=0.958191 
At iteration 23 f=0.874145 
At iteration 24 f=0.635020 
At iteration 25 f=0.610113 
At iteration 26 f=105.549878 
At iteration 27 f=0.546422 
At iteration 28 f=0.517374 
At iteration 29 f=0.498255 
At iteration 30 f=0.468898 
At iteration 31 f=0.405481 
At iteration 32 f=0.290616 
At iteration 33 f=0.566440 
At iteration 34 f=0.229204 
At iteration 35 f=0.172410 
At iteration 36 f=0.101578 
At iteration 37 f=0.066809 
At iteration 38 f=0.047129 
At iteration 39 f=0.022114 
At iteration 40 f=0.010238 
At iteration 41 f=0.006164 
At iteration 42 f=0.001247 
At iteration 43 f=0.000193 
At iteration 44 f=0.000044 
At iteration 45 f=0.000001 
At iteration 46 f=0.000000 
At iteration 47 f=0.000000 

> myop
$par
[1] 0.9999997 0.9999995

$grad
[1] -3.276165e-05  1.607110e-05

$value
[1] 7.416322e-13

$counts
[1] 47 47

$convergence
[1] 0

$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH"

References


