LMSquareLossL2: Linear model L2 regularization with square loss


Description

Trains a linear model with square loss and L2 regularization. Returns the optimal weight vector for the given optimization threshold and penalty.
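The optimization routine itself is not documented here. The sketch below is one plausible reading of the interface, assuming gradient descent on the penalized mean square loss (1/n) * ||X w - y||^2 + penalty * ||w||^2 that stops once the gradient norm falls below opt.thresh. The function name sketch.LMSquareLossL2 and its body are illustrative assumptions, not the package's actual implementation.

sketch.LMSquareLossL2 <- function(X.scaled.mat, y.vec, penalty,
                                  opt.thresh = 0.5,
                                  initial.weight.vec,
                                  step.size = 0.01) {
  n <- nrow(X.scaled.mat)
  w.vec <- initial.weight.vec
  repeat {
    ## gradient of (1/n) * ||X w - y||^2 + penalty * ||w||^2
    grad.vec <- as.numeric(
      2 / n * t(X.scaled.mat) %*% (X.scaled.mat %*% w.vec - y.vec)
    ) + 2 * penalty * w.vec
    if (sqrt(sum(grad.vec ^ 2)) < opt.thresh) break  # converged
    w.vec <- w.vec - step.size * grad.vec            # descent step
  }
  w.vec  # opt.weight
}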

Usage

LMSquareLossL2(X.scaled.mat, y.vec, penalty, opt.thresh = 0.5,
  initial.weight.vec, step.size = 0.01)

Arguments

X.scaled.mat

a numeric matrix of size [n x p]

y.vec

a numeric vector of length nrow(X.scaled.mat)

penalty

a non-negative numeric scalar

opt.thresh

a positive numeric scalar, the optimization stopping threshold

initial.weight.vec

a numeric vector of length ncol(X.scaled.mat)

step.size

a positive numeric scalar (the step size)

Value

opt.weight: the optimal weight vector of length ncol(X.scaled.mat)

Examples

data(ozone, package = "ElemStatLearn")
y.vec <- ozone[, 1]              # response: ozone concentration
X.mat <- as.matrix(ozone[, -1])  # features: remaining columns
num.train <- dim(X.mat)[1]
num.feature <- dim(X.mat)[2]
## scale each feature to mean 0 and standard deviation 1
X.mean.vec <- colMeans(X.mat)
X.std.vec <- sqrt(rowSums((t(X.mat) - X.mean.vec) ^ 2) / num.train)
X.std.mat <- diag(num.feature) * (1 / X.std.vec)
X.scaled.mat <- t((t(X.mat) - X.mean.vec) / X.std.vec)
optimal.weight.vec <- LMSquareLossL2(
  X.scaled.mat, y.vec, penalty = 0.5,
  initial.weight.vec = rep(0, ncol(X.mat) + 1))
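A hedged follow-up, not part of the original example: the example initializes ncol(X.mat) + 1 weights, which suggests the fitted vector includes an intercept term. Assuming the intercept is the first entry (an assumption, not stated in this documentation), training-set predictions and mean square error could be computed as:

pred.vec <- cbind(1, X.scaled.mat) %*% optimal.weight.vec  # assumes a leading intercept
mean((pred.vec - y.vec) ^ 2)                               # training mean square error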
