s_SGD: Stochastic Gradient Descent (SGD) (C, R)

View source: R/s_SGD.R

s_SGD {rtemis} R Documentation

Stochastic Gradient Descent (SGD) (C, R)

Description

Train a model by Stochastic Gradient Descent using sgd::sgd.

Usage

s_SGD(
  x,
  y = NULL,
  x.test = NULL,
  y.test = NULL,
  x.name = NULL,
  y.name = NULL,
  model = NULL,
  model.control = list(lambda1 = 0, lambda2 = 0),
  sgd.control = list(method = "ai-sgd"),
  upsample = FALSE,
  downsample = FALSE,
  resample.seed = NULL,
  print.plot = FALSE,
  plot.fitted = NULL,
  plot.predicted = NULL,
  plot.theme = rtTheme,
  question = NULL,
  verbose = TRUE,
  outdir = NULL,
  save.mod = ifelse(!is.null(outdir), TRUE, FALSE),
  ...
)

Arguments

x

Numeric vector or matrix / data frame of features, i.e. independent variables

y

Numeric vector of outcome, i.e. dependent variable

x.test

Numeric vector or matrix / data frame of testing set features. Columns must correspond to columns in x

y.test

Numeric vector of testing set outcome

x.name

Character: Name for feature set

y.name

Character: Name for outcome

model

character specifying the model to be used: "lm" (linear model), "glm" (generalized linear model), "cox" (Cox proportional hazards model), "gmm" (generalized method of moments), "m" (M-estimation). See ‘Details’.

model.control

a list of parameters for controlling the model. A combined sketch of model.control and sgd.control follows the argument list below.

family ("glm")

a description of the error distribution and link function to be used in the model. This can be a character string naming a family function, a family function or the result of a call to a family function. (See family for details of family functions.)

rank ("glm")

logical. Should the rank of the design matrix be checked?

fn ("gmm")

a function g(θ, x) which returns a k-vector corresponding to the k moment conditions. It is a required argument if gr is not specified.

gr ("gmm")

a function to return the gradient. If unspecified, a finite-difference approximation will be used.

nparams ("gmm")

number of model parameters. This is automatically determined for other models.

type ("gmm")

character specifying the generalized method of moments procedure: "twostep" (Hansen, 1982), "iterative" (Hansen et al., 1996). Defaults to "iterative".

wmatrix ("gmm")

weighting matrix to be used in the loss function. Defaults to the identity matrix.

loss ("m")

character specifying the loss function to be used in the estimating equation. Default is the Huber loss.

lambda1

L1 regularization parameter. Default is 0.

lambda2

L2 regularization parameter. Default is 0.

sgd.control

an optional list of parameters for controlling the estimation.

method

character specifying the method to be used: "sgd", "implicit", "asgd", "ai-sgd", "momentum", "nesterov". Default is "ai-sgd". See ‘Details’.

lr

character specifying the learning rate to be used: "one-dim", "one-dim-eigen", "d-dim", "adagrad", "rmsprop". Default is "one-dim". See ‘Details’.

lr.control

vector of scalar hyperparameters that can be set depending on the chosen learning rate. To leave a hyperparameter at its default, specify NA in the corresponding entry. See ‘Details’.

start

starting values for the parameter estimates. Default is random initialization around zero.

size

number of SGD estimates to store for diagnostic purposes (distributed log-uniformly over the total number of iterations)

reltol

relative convergence tolerance. The algorithm stops if it is unable to change the relative mean squared difference in the parameters by more than this amount. Default is 1e-05.

npasses

the maximum number of passes over the data. Default is 3.

pass

logical. Should reltol be ignored and the algorithm run for all of npasses?

shuffle

logical. Should the algorithm shuffle the data set, including before each pass?

verbose

logical. Should the algorithm print progress?

upsample

Logical: If TRUE, upsample cases to balance outcome classes (for Classification only). Note: upsampling will randomly sample with replacement if the length of the majority class is more than double the length of the class you are upsampling, thereby introducing randomness

downsample

Logical: If TRUE, downsample majority class to match size of minority class

resample.seed

Integer: If provided, will be used to set the seed during upsampling. Default = NULL (random seed)

print.plot

Logical: if TRUE, produce plot using mplot3. Takes precedence over plot.fitted and plot.predicted.

plot.fitted

Logical: if TRUE, plot True (y) vs Fitted

plot.predicted

Logical: if TRUE, plot True (y.test) vs Predicted. Requires x.test and y.test

plot.theme

Character: "zero", "dark", "box", "darkbox"

question

Character: the question you are attempting to answer with this model, in plain language.

verbose

Logical: If TRUE, print summary to screen.

outdir

Path to output directory. If defined, will save Predicted vs. True plot, if available, as well as full model output, if save.mod is TRUE

save.mod

Logical: If TRUE, save all output to an RDS file in outdir. save.mod is TRUE by default if an outdir is defined. If set to TRUE and no outdir is defined, outdir defaults to paste0("./s.", mod.name)

...

Additional arguments to be passed to sgd.control
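
Below is a minimal sketch of how model.control and sgd.control might be assembled. The entry names follow the descriptions above; the specific values are illustrative assumptions, not recommendations.

# Hypothetical control lists for a regularized linear model fit with
# averaged implicit SGD and an adaptive learning rate
model.control <- list(
  lambda1 = 0.001,  # L1 regularization parameter
  lambda2 = 0.01    # L2 regularization parameter
)
sgd.control <- list(
  method = "ai-sgd",  # averaged implicit SGD (the default)
  lr = "adagrad",     # adaptive per-coordinate learning rate
  npasses = 10,       # up to 10 passes over the data
  reltol = 1e-6       # relative convergence tolerance
)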

Details

From sgd::sgd: "Models: The Cox model assumes that the survival data is ordered when passed in, i.e., such that the risk set of an observation i is all data points after it."
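
For the Cox model this means sorting the training data by survival time before calling s_SGD. A minimal sketch, assuming a hypothetical data frame dat with a follow-up time column named time:

# Order rows by follow-up time so that the risk set of observation i
# consists of all rows after it ("dat" and "time" are assumed names)
dat <- dat[order(dat$time), ]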

Value

Object of class rtemis

Author(s)

E.D. Gennatas

See Also

train_cv for external cross-validation

Other Supervised Learning: s_AdaBoost(), s_AddTree(), s_BART(), s_BRUTO(), s_BayesGLM(), s_C50(), s_CART(), s_CTree(), s_EVTree(), s_GAM(), s_GBM(), s_GLM(), s_GLMNET(), s_GLMTree(), s_GLS(), s_H2ODL(), s_H2OGBM(), s_H2ORF(), s_HAL(), s_Isotonic(), s_KNN(), s_LDA(), s_LM(), s_LMTree(), s_LightCART(), s_LightGBM(), s_MARS(), s_MLRF(), s_NBayes(), s_NLA(), s_NLS(), s_NW(), s_PPR(), s_PolyMARS(), s_QDA(), s_QRNN(), s_RF(), s_RFSRC(), s_Ranger(), s_SDA(), s_SPLS(), s_SVM(), s_TFN(), s_XGBoost(), s_XRF()
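
Examples

A minimal sketch on simulated regression data; the simulated features, outcome, and settings below are illustrative assumptions, not part of the package documentation.

set.seed(2024)
x <- as.data.frame(matrix(rnorm(500 * 5), nrow = 500))
y <- x[[1]] + 0.5 * x[[3]] + rnorm(500)

# Fit a linear model with averaged implicit SGD (the default method)
mod <- s_SGD(
  x, y,
  model = "lm",
  model.control = list(lambda1 = 0, lambda2 = 0.01),
  sgd.control = list(method = "ai-sgd", npasses = 10)
)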

