gradient_descent_os: An alternative version of the gradient descent method with loss...

Description Usage Arguments Value Examples

View source: R/gradient_descent_os.R

Description

Implements gradient descent for ordinary least squares. The loss used here is the out-of-sample accuracy computed by k-fold cross-validation, so the function may take longer to run because of this more complex structure. Gradient descent can only handle a design matrix of full rank. When the design matrix suffers from collinearity, two cases arise: under perfect collinearity, the OLS estimate computed by gradient descent contains redundant estimates for variables that should be omitted; under strong (but imperfect) collinearity, the method may fail to converge (e.g. the lm_patho data). When this function cannot handle the problem, it passes the data to the function "linear_model".
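The update rule described above can be sketched as follows. This is a minimal illustration of gradient descent on the sum of squared residuals, not the package's actual implementation; the function name `gd_ols_sketch` and its stopping rule on the SSR change are assumptions for this example.

```r
# Minimal sketch of gradient descent for OLS (assumed, not the package code).
# X: full-rank design matrix; y: response; gamma: learning rate;
# iteration stops when the change in SSR falls below tol.
gd_ols_sketch <- function(X, y, gamma = 1e-4, maxiter = 1e5, tol = 1e-8) {
  beta <- rep(0, ncol(X))
  ssr_old <- sum((y - X %*% beta)^2)
  for (i in seq_len(maxiter)) {
    grad <- -2 * t(X) %*% (y - X %*% beta)   # gradient of the SSR in beta
    beta <- beta - gamma * grad              # step against the gradient
    ssr_new <- sum((y - X %*% beta)^2)
    if (abs(ssr_old - ssr_new) < tol) break  # SSR change small enough: stop
    ssr_old <- ssr_new
  }
  drop(beta)
}
```

Note that the learning rate must be small enough for the iteration to contract; too large a `gamma` makes the updates diverge, which is one reason ill-conditioned (strongly collinear) design matrices are problematic.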

Usage

gradient_descent_os(
  formula,
  data,
  contrasts = NULL,
  gamma = 1e-04,
  fold.num = 10,
  maxiter = 1e+06,
  tolt = 1e-08
)

Arguments

formula

A symbolic description of the model to be fitted, given as an object of class formula.

data

A data frame containing the variables in the model.

contrasts

A list of contrasts.

gamma

A learning rate that adjusts the OLS estimates along the gradient.

fold.num

Number of folds used for the cross-validation.

maxiter

Maximum number of iterations for the updating process of OLS estimates.

tolt

A tolerance bounding the difference between the current SSR and the updated SSR; iteration stops once the change falls below it.
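To make the role of fold.num concrete, here is a minimal sketch of a k-fold out-of-sample loss for OLS. This is an assumed illustration, not the package's code: `cv_loss` and its use of `qr.solve` for the per-fold least-squares fit are choices made for this example.

```r
# Sketch (assumed) of a k-fold out-of-sample SSR for an OLS design matrix X
# and response y: fit on the training folds, score on the held-out fold.
cv_loss <- function(X, y, fold.num = 10) {
  folds <- sample(rep(seq_len(fold.num), length.out = nrow(X)))
  sum(vapply(seq_len(fold.num), function(k) {
    test <- folds == k
    fit <- qr.solve(X[!test, , drop = FALSE], y[!test])  # OLS on training folds
    sum((y[test] - X[test, , drop = FALSE] %*% fit)^2)   # held-out SSR
  }, numeric(1)))
}
```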

Value

A list of components that imitates the output of the lm() function, including the estimated coefficients for the predictors specified in the formula. A warning may also be returned if the number of iterations exceeds the maximum.

Examples

data(iris)
gradient_descent_os(Sepal.Length ~ ., iris)

Zebedial/bis557 documentation built on Dec. 21, 2020, 2:16 a.m.