MBGD: Mini-Batch Gradient Descent (MBGD) Method Learning Function


Description

A function to build a prediction model using the Mini-Batch Gradient Descent (MBGD) method.

Usage

MBGD(dataTrain, alpha = 0.1, maxIter = 10, nBatch = 2, seed = NULL)

Arguments

dataTrain

a data.frame representing the training data (m \times n), where m is the number of instances and n is the number of variables; the last column is the output variable. dataTrain must have at least two columns and ten rows of data that contain only numbers (integer or float).

alpha

a float value representing the learning rate. Default value is 0.1.

maxIter

the maximum number of iterations.

nBatch

an integer value representing the number of batches into which the training data is split.

seed

an integer value used as the random seed. Default value is NULL, which means the function will not seed the random number generator.

Details

This function is based on the GD method, with the optimization that it uses only part of the training data per update. MBGD has a parameter named nBatch that represents the number of batches into which the training data is split; each iteration updates the coefficients once per batch.
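The update scheme described above can be sketched in plain R. This is a minimal illustration of mini-batch gradient descent for linear regression with squared-error loss, not the package's implementation; the function name mbgd_sketch and its internals are illustrative assumptions.

```r
## Minimal sketch of mini-batch gradient descent for linear regression.
## Assumes the last column of dataTrain is the output variable,
## mirroring the MBGD interface; purely illustrative.
mbgd_sketch <- function(dataTrain, alpha = 0.1, maxIter = 10,
                        nBatch = 2, seed = NULL) {
  if (!is.null(seed)) set.seed(seed)
  X <- cbind(1, as.matrix(dataTrain[, -ncol(dataTrain), drop = FALSE]))
  y <- as.matrix(dataTrain[, ncol(dataTrain)])
  theta <- matrix(runif(ncol(X)), nrow = 1)   # random initial coefficients
  batchId <- rep_len(seq_len(nBatch), nrow(X))  # assign rows to batches
  for (iter in seq_len(maxIter)) {
    for (b in seq_len(nBatch)) {
      idx <- which(batchId == b)
      Xb <- X[idx, , drop = FALSE]
      yb <- y[idx, , drop = FALSE]
      error <- Xb %*% t(theta) - yb            # residuals on this batch
      grad <- t(error) %*% Xb / length(idx)    # batch-mean gradient
      theta <- theta - alpha * grad            # gradient step
    }
  }
  theta  # 1 x n matrix of coefficients, intercept first
}
```

Each of the maxIter iterations performs nBatch updates, one per batch, so smaller batches trade noisier gradients for more frequent coefficient updates.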

Value

a matrix of theta (coefficients) for the linear model.

References

A. Cotter, O. Shamir, N. Srebro, and K. Sridharan, "Better Mini-Batch Algorithms via Accelerated Gradient Methods", NIPS, pp. 1647- (2011)

See Also

GD

Examples

##################################
## Learning and Building a Model with MBGD
## load R Package data
data(gradDescentRData)
## get z-factor data
dataSet <- gradDescentRData$CompressilbilityFactor
## split dataset
splitedDataSet <- splitData(dataSet)
## build model with MBGD using 2 batches
MBGDmodel <- MBGD(splitedDataSet$dataTrain, nBatch=2)
## show result
print(MBGDmodel)
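Since the returned value is a coefficient matrix with the intercept first, predictions for new inputs can be computed by hand. A hedged sketch, using a made-up theta matrix in place of an actual MBGD result:

```r
## Illustrative only: apply a theta matrix (intercept first) to new input.
## These numbers are invented, not real MBGD output.
theta <- matrix(c(1.5, 0.2, -0.3), nrow = 1)  # intercept, slope1, slope2
newInput <- matrix(c(2.0, 4.0), nrow = 1)     # one new instance, two features
## prepend 1 for the intercept, then take the inner product
pred <- cbind(1, newInput) %*% t(theta)       # 1.5 + 0.2*2 - 0.3*4 = 0.7
```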

cs-upi/gradDescent documentation built on May 12, 2019, 5:45 a.m.