UMRgradDesc_PC: Gradient Descent implemented for Piecewise Constant functions

Description Usage Arguments Details Examples

Description

Gradient Descent implemented for Piecewise Constant functions

Usage

UMRgradDesc_PC(
  yy,
  grad,
  init,
  stepsize,
  MM,
  eps,
  JJ = 50,
  printevery,
  filename
)

Arguments

yy

Y (response) observation vector (numeric)

grad

A function of the form function(yy, mm), where mm is the previous iterate (i.e., the current estimate vector) and may be shorter than yy once values have been collapsed.

init

Initial value of the estimate ('mm'): a numeric vector, usually of the same length as yy.

stepsize

Gradient descent stepsize. Set carefully!

MM

Number of iterations in which "support reduction" (combining of approximately equal values into a region of constancy) is done (see details and paper).

eps

Roughly, points that are within eps of each other are considered equal and are collapsed into a single region of piecewise constancy in the output. (This is not precisely true: one can have a long sorted increasing vector in which each point is within eps of its two neighbors but the first and last points are more than eps apart. See the algorithm description in the paper for details.)
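The chaining caveat can be seen with a tiny illustration (this is not UMR code, just a hypothetical example): neighboring values can each be within eps of each other while the endpoints of the block are not.

```r
## Hypothetical illustration (not UMR code): pairwise eps-closeness
## of neighbors does not bound the span of the whole block by eps.
eps <- 0.1
mm  <- c(0.00, 0.09, 0.18)   ## sorted iterate values
all(diff(mm) < eps)          ## TRUE: each neighboring pair is within eps
(max(mm) - min(mm)) < eps    ## FALSE: the first and last points are not
```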

JJ

JJ gradient steps are taken within each of the MM outer (support-reduction) iterations, so the total number of gradient steps is MM*JJ.

printevery

integer value (generally << MM). Every 'printevery' iterations, a count will be printed and the output saved.

filename

path1/path2/filename to save output to.

Details

Implements gradient descent; see the paper for details. Currently the stepsize is fixed. init is sorted inside gradDesc_PC, so it does not need to be sorted on input. Roughly, the difference between this algorithm and gradDesc() (which is vanilla gradient descent on this problem) is that, if mm is the current value of the output estimate, gradDesc_PC 'collapses' (combines) values of mm that are equal up to the tolerance eps. Because the solution is generally piecewise constant with a relatively small number of constant regions, this enormously speeds up the later stages of the algorithm. Note that once points are combined/collapsed they contribute identically to the objective function, so they will never be "uncombined".
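The collapsing idea described above can be sketched roughly as follows. This is illustrative code only, not the package's implementation; it assumes grad follows the signature given in the Arguments section and that the iterate stays sorted after each gradient step.

```r
## Illustrative sketch of gradient descent with support reduction
## (not the UMR implementation; 'grad' follows the argument list above).
gradDescPCsketch <- function(yy, grad, init, stepsize, MM, JJ, eps) {
  mm <- sort(init)  ## init is sorted internally, as noted in Details
  for (i in seq_len(MM)) {
    for (j in seq_len(JJ)) {
      mm <- mm - stepsize * grad(yy, mm)  ## JJ plain gradient steps
    }
    ## Support reduction: merge neighboring values that are within eps
    ## (assumes mm is still sorted after the gradient steps).
    region <- cumsum(c(TRUE, diff(mm) >= eps))
    mm <- as.numeric(tapply(mm, region, mean))  ## one value per region
  }
  mm
}

## Toy usage with a hypothetical least-squares-style gradient:
yy <- c(1, 2, 3)
gradToy <- function(yy, mm) mm - mean(yy)
gradDescPCsketch(yy, gradToy, init = yy, stepsize = 0.5,
                 MM = 3, JJ = 5, eps = 0.2)
## returns 2: all values collapse into one constant region
```

Once the three values are pulled close together by the gradient steps, they merge into a single region and thereafter move as one point, which is why the later stages are so much cheaper.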

Examples

#### Set up the gradient function
mysig <- 1 ## std dev
errdist <- distr::Norm(0, sd=mysig)
modeldistname <- truedistname <- "Gauss" ## used for savefile name
mm0 <- function(xx){xx}
nn <- 300
xx <- sort(runif(n=nn, 0, 7))
yy <- mm0(xx) + errdist@r(nn)
## plot(xx,yy)

myScale <- mysig

AAfunc_Gauss <- purrr::partial(AAfunc_Gauss_generic, sig=!!mysig)
AA_Gauss <- purrr::partial(AA, func=!!AAfunc_Gauss)
BBfunc_Gauss <- purrr::partial(BBfunc_Gauss_generic, sig=!!mysig)
BB_Gauss <- purrr::partial(BB, func=!!BBfunc_Gauss)
mygradSIR <- 
    grad_SIR_Gauss <- ## just for ease of reference
        purrr::partial(grad_SIR_generic,
                       rescale=TRUE, ## factor of nn/2
                       AAfunc=!!AA_Gauss, BBfunc=!!BB_Gauss)

## Now run the gradient descent
savefilenameUnique <- paste("graddesc_", modeldistname, "_", truedistname,
                            "_n", nn,
                            "_", format(Sys.time(), "%Y-%m-%d-%T"), ".rsav", sep="")
print(paste("The unique save file name for this run is", savefilenameUnique))
stepsize <- nn^(1/2) ## Has to be tuned
MM <- 200 ## Total number of iterations is MM * JJ
JJ <- 2
eps <- (max(yy)-min(yy)) / (1000 * nn^(1/5) * myScale)
## print *and* SAVE every 'printevery' iterations;
## here no save occurs, printevery > MM
printevery <- 1000 
init <- yy


mmhat <- UMRgradDesc_PC(yy=yy, grad=mygradSIR, ## from settings file
                        init=init,
                        stepsize=stepsize, MM=MM,
                        JJ=JJ, eps=eps,
                        printevery=printevery,
                        filename=paste0("../saves/", savefilenameUnique))
####  some classical/matched [oracle] estimators
isoreg_std <- Iso::ufit(y=yy, x=xx, lmode=Inf)
mmhat_std <- isoreg_std$y ## Isotonic regression
linreg_std <- lm(yy~xx)

UMR documentation built on Aug. 14, 2021, 9:09 a.m.
