gradDesc_fixed_df: Gradient Descent with a fixed number of constant pieces...


View source: R/gradDesc_fixed_df.R

Description

Gradient Descent with a fixed number of constant pieces (degrees of freedom)

Usage

gradDesc_fixed_df(
  yy,
  grad,
  init = stats::median(yy),
  counts = length(yy),
  stepsize,
  MM,
  tol = 1e-07,
  printevery = Inf,
  filename
)

Arguments

yy

Y (response) observation vector (numeric)

grad

A function of the form function(yy, mm), where yy is the response vector and mm is the previous iterate (the current estimate vector); mm may be shorter than yy once values have been collapsed. See the sketch at the end of this section.

init

Initial value of the estimate ('mm'): a numeric vector of length at most length(yy). The output will have length length(init).

counts

Vector of length length(init); each entry gives how many values of yy the corresponding entry of init (and of the output) represents. Equivalently, counts can be viewed as a vector of weights for the estimate values.

stepsize

Gradient descent stepsize. Set carefully!

MM

Maximum number of iterations in which "support reduction" (combining approximately equal values into a region of constancy) is performed (see Details and the paper). Depending on tol, not all MM iterations may be used.

tol

Tolerance: the algorithm ends once sum(abs(mm - mmprev)) < tol or MM iterations are reached.

printevery

Integer value (generally much smaller than MM). Every 'printevery' iterations, the iteration count is printed and the output is saved.

filename

File path (e.g., path1/path2/filename) to which the output is saved.
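
As an illustration of the 'grad' argument, here is a minimal sketch of a squared-error gradient for the simplest case, where no values have been collapsed yet (so length(mm) == length(yy)). How the gradient should be aggregated once pieces are combined is an assumption not covered here; consult the paper and the package source for the exact contract.

## Minimal sketch: squared-error gradient, assuming no collapsing has
## happened yet so that length(mm) == length(yy). A real 'grad' must also
## handle the collapsed case (mm shorter than yy), likely using 'counts'.
grad_ls <- function(yy, mm) {
  mm - yy  # derivative of 0.5 * sum((mm - yy)^2) with respect to mm
}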

Details

This function is deprecated; prefer UMRgradDesc_fixed_df.

Implements gradient descent; see the paper for details. The stepsize is currently fixed. init is sorted internally (in gradDesc_PC), so it does not need to be sorted on input. Roughly, the difference between this algorithm and gradDesc() (which is plain gradient descent on this problem) is that, if mm is the current value of the output estimate, gradDesc_PC 'collapses' (combines) values of mm that are equal up to the tolerance 'eps'. Because the solution is generally piecewise constant with relatively few constant regions, this enormously speeds up the later stages of the algorithm. Note that once points are combined/collapsed they contribute identically to the objective function, so they are never "uncombined".
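
As a hedged illustration only (the data, stepsize, MM, printevery, and filename values below are assumptions, not recommendations), a call might look like the following, using the grad_ls sketch from the Arguments section. Note that grad_ls as written does not handle the collapsed case, so a real analysis would need a grad function that does.

## Illustrative call only; all argument values are assumed, and this
## function is deprecated in favor of UMRgradDesc_fixed_df.
set.seed(1)
yy <- rnorm(200) + seq(-1, 1, length.out = 200)  # simulated increasing trend
fit <- gradDesc_fixed_df(
  yy = yy,
  grad = grad_ls,        # squared-error gradient sketched above
  stepsize = 0.1,
  MM = 1000,
  printevery = 250,
  filename = tempfile()  # output is saved here every 'printevery' iterations
)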

