NNetIterations: Neural networks for regression and binary classification

Description Usage Arguments Value Examples

Description

Trains a neural network with gradient descent (outputs real numbers for regression, probabilities for binary classification).

Usage

NNetIterations(X.mat, y.vec, max.iterations, step.size, n.hidden.units,
  is.train)

Arguments

X.mat

(feature matrix, n_observations x n_features)

y.vec

(label vector, n_observations x 1)

max.iterations

(int scalar > 1, maximum number of gradient descent iterations)

step.size

(numeric scalar > 0, gradient descent step size)

n.hidden.units

(number of hidden units)

is.train

(logical vector of size n_observations, TRUE if the observation is in the train set, FALSE for the validation set)

Value

pred.mat: matrix of predicted values (n_observations x max.iterations, i.e. n x k).

W.mat: final weight matrix ((n_features + 1) x n.hidden.units, i.e. (p+1) x u).

v.vec: final weight vector (length n.hidden.units + 1, i.e. u+1).

predict(testX.mat): a function that takes a test feature matrix and returns a vector of predictions (real numbers for regression, probabilities for binary classification). The first row of W.mat holds the intercept terms; the first element of v.vec is the intercept term.
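The returned weights can also be used to compute predictions by hand. The sketch below is an assumption about the model structure (a single hidden layer with logistic activation on the hidden units), not code from the package; for binary classification the final score would additionally be passed through the logistic function to obtain a probability.

sigmoid <- function(z) 1 / (1 + exp(-z))  # logistic activation (assumed)

## sketch: reconstruct predictions from the returned W.mat and v.vec;
## the leading column of 1s supplies the intercept terms
manual.predict <- function(testX.mat, W.mat, v.vec) {
  hidden.mat <- sigmoid(cbind(1, testX.mat) %*% W.mat)  # n x u hidden activations
  as.numeric(cbind(1, hidden.mat) %*% v.vec)            # n real-valued scores
}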

Examples

data(ozone, package = "ElemStatLearn")
y.vec <- ozone[, 1]                  # label: ozone concentration
X.mat <- as.matrix(ozone[, -1])      # feature matrix
num.train <- dim(X.mat)[1]           # number of observations
num.feature <- dim(X.mat)[2]         # number of features
X.mean.vec <- colMeans(X.mat)        # per-feature means
X.std.vec <- sqrt(rowSums((t(X.mat) - X.mean.vec) ^ 2) / num.train)  # per-feature standard deviations
X.std.mat <- diag(num.feature) * (1 / X.std.vec)         # diagonal scaling matrix, diag(1 / sd)
X.scaled.mat <- t((t(X.mat) - X.mean.vec) / X.std.vec)   # centered and scaled features
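The example above only prepares and scales the data. A possible continuation is sketched below; the split proportions and hyperparameter values are illustrative only, and it assumes the function returns a named list containing pred.mat and predict, as described in the Value section.

set.seed(1)
## random train/validation split; the 80/20 proportions are only an illustration
is.train <- sample(c(TRUE, FALSE), num.train, replace = TRUE, prob = c(0.8, 0.2))
fit <- NNetIterations(X.scaled.mat, y.vec, max.iterations = 100L,
  step.size = 0.1, n.hidden.units = 10L, is.train = is.train)
dim(fit$pred.mat)                  # n_observations x max.iterations
head(fit$predict(X.scaled.mat))    # predictions from the returned predict() function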
