sag: Stochastic Average Gradient with warm-starting


Usage

sag(X, y, lambdas, maxiter = NULL, w = NULL, alpha = NULL,
  stepSizeType = 1, Li = NULL, Lmax = NULL, increasing = TRUE,
  d = NULL, g = NULL, covered = NULL, standardize = FALSE,
  tol = 0.001, family = "binomial", fit_alg = "constant",
  user_loss_function = NULL, ...)

Arguments

X

Matrix of features; may be sparse.

y

Matrix of targets.

lambdas

Vector of L2 regularization parameters.

maxiter

Maximum number of iterations.

w

Matrix of weights.

alpha

Scalar. Constant step size. Used only when fit_alg = "constant".

stepSizeType

Scalar. Default is 1 to use step size 1/L; set to 2 to use 2/(L + n*mu). Used only when fit_alg = "linesearch".

Li

Scalar or Matrix. Initial individual Lipschitz approximation.

Lmax

Initial global Lipschitz approximation.

increasing

Boolean. TRUE allows the Lipschitz coefficient to increase; FALSE allows only decreases.

d

Initial approximation of cost function gradient.

g

Initial approximation of individual losses gradient.

covered

Matrix of covered samples.

standardize

Boolean. Standardizes (scales) the data if TRUE.

tol

Real. Minimal required approximate gradient norm before convergence.

family

One of "binomial", "gaussian", "exponential" or "poisson"

fit_alg

One of "constant", "linesearch" (default), or "adaptive".

user_loss_function

User-supplied R or C loss and gradient functions.

...

Any other pass-through parameters.

Value

An object of class SAG.
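
Examples

A minimal sketch of a binomial fit over a decreasing L2 penalty path (the setting where warm-starting helps). The simulated data, the {-1, 1} target coding, and the parameter choices are illustrative assumptions, not confirmed by this page; it assumes bigoptim is installed from GitHub (IshmaelBelghazi/bigoptim).

```r
## Illustrative data; the {-1, 1} target coding is an assumption.
library(Matrix)

set.seed(42)
n <- 200; p <- 10
X <- Matrix(rnorm(n * p), n, p, sparse = TRUE)         # sparse feature matrix
w_true <- rnorm(p)
y <- matrix(sign(as.numeric(X %*% w_true)), ncol = 1)  # binary targets

## Decreasing grid of L2 penalties; the solver warm-starts each fit
## from the previous solution along the path.
lambdas <- 10^seq(0, -3, length.out = 4)

if (requireNamespace("bigoptim", quietly = TRUE)) {
  fit <- bigoptim::sag(X, y, lambdas = lambdas,
                       family = "binomial", fit_alg = "constant",
                       tol = 1e-3, standardize = TRUE)
  str(fit)  # inspect the returned SAG object
}
```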


IshmaelBelghazi/bigoptim documentation built on May 7, 2019, 6:44 a.m.