MBGD: Mini-Batch Gradient Descent

View source: R/StatComp21088R.R

Description

Mini-batch gradient descent is a compromise between batch gradient descent and stochastic gradient descent: each iteration uses batch_size samples to compute the gradient and update the parameters.
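
The idea can be sketched in a few lines of R. The helper below (mbgd_sketch, a hypothetical name with an added n_iter argument) is a minimal illustration of the mini-batch update for a squared-error loss; it is a sketch of the general technique, not the package's actual implementation of MBGD.

mbgd_sketch <- function(input_data, real_result, batch_size, alpha, theta, n_iter = 1000) {
  n <- nrow(input_data)
  for (i in seq_len(n_iter)) {
    idx <- sample(n, batch_size)                      # draw a random mini-batch of row indices
    X <- input_data[idx, , drop = FALSE]
    y <- real_result[idx]
    grad <- t(X) %*% (X %*% theta - y) / batch_size   # gradient of the mean squared error on the batch
    theta <- theta - alpha * as.vector(grad)          # one gradient descent step
  }
  theta
}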

Usage

MBGD(input_data, real_result, batch_size, alpha, theta)

Arguments

input_data

Input data matrix with a constant column of 1s added for the intercept

real_result

Observed response vector; its length must equal the number of rows of input_data.

batch_size

Number of samples used in each iteration's gradient update

alpha

Learning rate

theta

Initial parameter vector for the linear regression

Value

The estimated parameter vector theta after the iterations

Examples

## Not run: 
x <- seq(0.1, 10, 0.002)                 # predictor values
n <- length(x)
y <- 2 * x + 5 + rnorm(n)                # simulate responses from y = 2x + 5 plus noise
z <- as.matrix(data.frame(rep(1, n), x)) # design matrix with an intercept column of 1s
theta <- MBGD(z, y, 100, 0.002, c(1, 1)) # batch_size = 100, alpha = 0.002, initial theta = (1, 1)
print(theta)

## End(Not run)
