DeepLearning: Fit a neural network model

View source: R/deeplearning.R

DeepLearning    R Documentation

Fit a neural network model

Description

Fit a neural network model

Usage

DeepLearning(
  formula,
  data = NULL,
  subset = NULL,
  weights = NULL,
  output = "Accuracy",
  missing = "Exclude cases with missing data",
  normalize = TRUE,
  seed = 12321,
  rand.verbose = FALSE,
  show.labels = FALSE,
  hidden.nodes = 10,
  max.epochs = 100
)
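
A minimal usage sketch is shown below; the data frame survey and its variables (brand, age, income, tenure) are hypothetical names used only for illustration and are not part of the package.

# Fit a classifier for a hypothetical grouping variable using two hidden
# layers; 'survey', 'brand' and the predictors are made-up names.
fit <- DeepLearning(brand ~ age + income + tenure,
                    data = survey,
                    output = "Prediction-Accuracy Table",
                    hidden.nodes = c(20, 10),
                    max.epochs = 200)
# Printing the fitted object shows the output requested via 'output'.
fit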

Arguments

formula

A formula of the form groups ~ x1 + x2 + .... The response is the grouping factor and the right-hand side specifies the (non-factor) discriminators; any transformations, interactions, or other non-additive operators apart from . are ignored.

data

A data.frame from which variables specified in formula are preferentially to be taken.

subset

An optional vector specifying a subset of observations to be used in the fitting process, or the name of a variable in data. It may not be an expression.

weights

An optional vector of sampling weights, or the name of a variable in data. It may not be an expression.

output

One of "Accuracy", "Prediction-Accuracy Table", "Cross Validation" or "Network Layers".

missing

How missing data is to be treated. Options: "Error if missing data", "Exclude cases with missing data", or "Imputation (replace missing values with estimates)".

normalize

Logical; if TRUE all predictor variables are normalized to have zero mean and unit variance.

seed

The random number seed.

rand.verbose

Logical; if TRUE, extra information is printed for checking the random number generation.

show.labels

Shows the variable labels, as opposed to the variable names, in the outputs, where a variable's label is an attribute (e.g., attr(foo, "label")).

hidden.nodes

A vector that specifies the number of hidden nodes in each hidden layer (and hence implicitly the number of hidden layers). Alternatively, a comma-delimited string of integers may be provided; see the example following this list.

max.epochs

Integer; the maximum number of epochs for which to train the network.
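
The two accepted formats for hidden.nodes are sketched below; the formula and data frame are hypothetical placeholders, not objects supplied by the package.

# Three hidden layers of 32, 16 and 8 nodes, specified in two equivalent ways
# ('y', 'x1', 'x2' and 'dat' are illustrative names only).
DeepLearning(y ~ x1 + x2, data = dat, hidden.nodes = c(32, 16, 8))
DeepLearning(y ~ x1 + x2, data = dat, hidden.nodes = "32, 16, 8")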

Details

Categorical predictor variables are converted to binary (dummy) variables.
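
The dummy-variable conversion can be previewed with base R's model.matrix(); the sketch below is independent of the package and only illustrates the encoding (the package's baseline convention may differ).

# A factor expands to one 0/1 indicator column per level (minus the
# baseline level when an intercept is included); the data is made up.
dat <- data.frame(colour = factor(c("red", "green", "blue", "red")))
model.matrix(~ colour, data = dat)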

The model is trained first using a random 70% of the data, with cross-validation loss computed on the remaining 30%. Training stops at the earlier of max.epochs and 3 epochs with no improvement in cross-validation loss. The final model is then retrained on all data (after any "subset" has been applied).
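
A conceptual sketch of the 70/30 split described above is given below; this is not the package's internal code, and the names are illustrative only.

# 'dat' stands for any data frame of cases; hold out a random 30% of rows
# for measuring cross-validation loss.
set.seed(12321)
n <- nrow(dat)
train.rows <- sample(seq_len(n), size = round(0.7 * n))
training   <- dat[train.rows, , drop = FALSE]
validation <- dat[-train.rows, , drop = FALSE]
# Training stops at the earlier of max.epochs or 3 epochs without
# improvement in loss on 'validation'; the final model is then refit
# on all rows.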
