SAGD: Stochastic Average Gradient Descent (SAGD) Method Learning...


View source: R/gradDescentR.Methods.R

Description

A function to build prediction model using Stochastic Average Gradient Descent (SAGD) method.

Usage

SAGD(dataTrain, alpha = 0.1, maxIter = 10, seed = NULL)

Arguments

dataTrain

a data.frame representing the training data (m \times n), where m is the number of instances and n is the number of variables; the last column is the output variable. dataTrain must have at least two columns and ten rows of data that contain only numbers (integer or float).

alpha

a float value representing the learning rate. The default value is 0.1.

maxIter

the maximum number of iterations.

seed

an integer value used to seed the random number generator for reproducible results. The default value is NULL, which means the randomness is not seeded.

Details

This function is based on SGD, which stochastically computes the gradient from only one training instance per iteration. In addition, SAGD applies an averaging-control optimization that randomly decides whether or not to perform the coefficient update at each iteration. This optimization speeds up learning on the iterations where it skips the gradient computation and coefficient update.
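The idea above can be sketched in a few lines of base R. This is an illustrative reimplementation, not the package's internal code: a plain SGD update on a linear model, where a random coin decides at each iteration whether the gradient computation and update are performed at all. The function name `sagd_sketch` and the 0.5 skip probability are assumptions for illustration.

```r
## A minimal sketch of the SAGD idea (not the package's internal code):
## SGD on a linear model with an averaging-control step that randomly
## skips the gradient computation and coefficient update.
sagd_sketch <- function(X, y, alpha = 0.1, maxIter = 10, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)
  X <- cbind(1, as.matrix(X))          # prepend an intercept column
  theta <- matrix(0, nrow = 1, ncol = ncol(X))
  for (iter in seq_len(maxIter)) {
    i <- sample(nrow(X), 1)            # pick one training instance at random
    if (runif(1) < 0.5) next           # averaging control: randomly skip update
    error <- X[i, , drop = FALSE] %*% t(theta) - y[i]
    grad  <- as.numeric(error) * X[i, , drop = FALSE]
    theta <- theta - alpha * grad      # SGD-style coefficient update
  }
  theta                                # 1 x n coefficient matrix
}
```

On the skipped iterations no gradient is computed, which is where the speed-up over plain SGD comes from.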

Value

a matrix of theta (coefficients) for the linear model.
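For illustration, the returned coefficient matrix can be applied to new inputs by prepending an intercept column and taking a matrix product. The theta values below are made up, not output from SAGD:

```r
## Hand-rolled prediction from a coefficient matrix (intercept first).
theta <- matrix(c(0.5, 1.2), nrow = 1)   # hypothetical coefficients
newX  <- cbind(1, c(2.0, 3.0))           # prepend intercept column
preds <- newX %*% t(theta)               # linear model predictions
```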

References

M. Schmidt, N. Le Roux, and F. Bach, "Minimizing Finite Sums with the Stochastic Average Gradient", INRIA-SIERRA Project-Team, Departement d'informatique de l'Ecole Normale Superieure (2013)

See Also

SGD

Examples

##################################
## Learning and Build Model with SAGD
## load R Package data
data(gradDescentRData)
## get z-factor data
dataSet <- gradDescentRData$CompressilbilityFactor
## split dataset
splitedDataSet <- splitData(dataSet)
## build model with SAGD
SAGDmodel <- SAGD(splitedDataSet$dataTrain)
## show result
print(SAGDmodel)

computer-science-upi/gradDescent documentation built on May 29, 2019, 4:46 a.m.