d_H2OAE: Autoencoder using H2O

View source: R/d_H2OAE.R

d_H2OAE R Documentation

Autoencoder using H2O

Description

Train an Autoencoder using h2o::h2o.deeplearning. Check out the H2O Flow at [ip]:[port]. The default IP:port is "localhost:54321", i.e. if running on localhost, point your web browser to localhost:54321
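A minimal, hedged call might look like the following (the data, architecture, and epoch count are illustrative assumptions, not defaults):

## Train a symmetric autoencoder with a 3-node bottleneck on a local H2O instance
x <- iris[, 1:4]
ae <- d_H2OAE(x, n.hidden.nodes = c(ncol(x), 3, ncol(x)), epochs = 200)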

Usage

d_H2OAE(
  x,
  x.test = NULL,
  x.valid = NULL,
  ip = "localhost",
  port = 54321,
  n.hidden.nodes = c(ncol(x), 3, ncol(x)),
  extract.layer = ceiling(length(n.hidden.nodes)/2),
  epochs = 5000,
  activation = "Tanh",
  loss = "Automatic",
  input.dropout.ratio = 0,
  hidden.dropout.ratios = rep(0, length(n.hidden.nodes)),
  learning.rate = 0.005,
  learning.rate.annealing = 1e-06,
  l1 = 0,
  l2 = 0,
  stopping.rounds = 50,
  stopping.metric = "AUTO",
  scale = TRUE,
  center = TRUE,
  n.cores = rtCores,
  verbose = TRUE,
  save.mod = FALSE,
  outdir = NULL,
  ...
)

Arguments

x

Vector / Matrix / Data Frame: Training set Predictors

x.test

Vector / Matrix / Data Frame: Testing set Predictors

x.valid

Vector / Matrix / Data Frame: Validation set Predictors

ip

Character: IP address of H2O server. Default = "localhost"

port

Integer: Port number for server. Default = 54321
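For instance, to use an H2O cluster already running on another machine (the address below is a placeholder):

d_H2OAE(x, ip = "192.168.1.10", port = 54321)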

n.hidden.nodes

Integer vector: Number of nodes in each hidden layer; its length sets the number of hidden layers. Default = c(ncol(x), 3, ncol(x))

extract.layer

Integer: Which layer to extract. For regular autoencoder, this is the middle layer. Default = ceiling(length(n.hidden.nodes)/2)
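As a sketch, a deeper symmetric architecture with a 2-node bottleneck, where the default extract.layer = ceiling(5/2) = 3 returns the bottleneck features (layer sizes are illustrative):

d_H2OAE(x, n.hidden.nodes = c(ncol(x), 16, 2, 16, ncol(x)), extract.layer = 3)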

epochs

Integer: How many times to iterate through the dataset. Default = 5000

activation

Character: Activation function to use: "Tanh" (Default), "TanhWithDropout", "Rectifier", "RectifierWithDropout", "Maxout", "MaxoutWithDropout"

loss

Character: "Automatic" (Default), "CrossEntropy", "Quadratic", "Huber", "Absolute"

input.dropout.ratio

Float (0, 1): Dropout ratio for inputs

hidden.dropout.ratios

Vector, Float (0, 1): Dropout ratios for hidden layers, one per hidden layer
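Dropout ratios only take effect with a "...WithDropout" activation; a hedged example with one ratio per hidden layer (values are illustrative, not recommendations):

d_H2OAE(x,
  activation = "TanhWithDropout",
  input.dropout.ratio = 0.1,
  hidden.dropout.ratios = c(0.2, 0.2, 0.2)
)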

learning.rate

Float: Learning rate. Default = .005

learning.rate.annealing

Float: Learning rate annealing. Default = 1e-06

l1

Float (0, 1): L1 regularization (introduces sparseness; i.e. sets many weights to 0; reduces variance, increases generalizability)

l2

Float (0, 1): L2 regularization (prevents very large absolute weights; reduces variance, increases generalizability)
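For example, adding a small amount of regularization (values are illustrative):

d_H2OAE(x, l1 = 1e-4, l2 = 1e-4)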

stopping.rounds

Integer: Stop if simple moving average of length stopping.rounds of the stopping.metric does not improve. Set to 0 to disable. Default = 50

stopping.metric

Character: Stopping metric to use: "AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "lift_top_group", "misclassification", "mean_per_class_error". Default = "AUTO" ("logloss" for Classification, "deviance" for Regression)
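For example (illustrative values):

d_H2OAE(x, stopping.rounds = 0)                            # disable early stopping
d_H2OAE(x, stopping.rounds = 20, stopping.metric = "MSE")  # stop when MSE stalls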

scale

Logical: If TRUE, scale input before training autoencoder. Default = TRUE

center

Logical: If TRUE, center input before training autoencoder. Default = TRUE

n.cores

Integer: Number of cores to use

verbose

Logical: If TRUE, print summary to screen.

save.mod

Logical: If TRUE, save all output to an RDS file in outdir. save.mod is TRUE by default if an outdir is defined. If set to TRUE and no outdir is defined, outdir defaults to paste0("./s.", mod.name)

outdir

Character: Path to output directory. If defined, saves the Predicted vs. True plot, if available, as well as the full model output if save.mod is TRUE
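For example (the path is a placeholder):

d_H2OAE(x, outdir = "./d_H2OAE_out")  # save.mod defaults to TRUE when outdir is set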

...

Additional arguments to pass to h2o::h2o.deeplearning

Value

rtDecom object
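To inspect the returned object's components (slot names are package-specific, so str() is the safest starting point):

ae <- d_H2OAE(iris[, 1:4])
str(ae, max.level = 1)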

Author(s)

E.D. Gennatas

See Also

decom

Other Decomposition: d_H2OGLRM(), d_ICA(), d_Isomap(), d_KPCA(), d_LLE(), d_MDS(), d_NMF(), d_PCA(), d_SPCA(), d_SVD(), d_TSNE(), d_UMAP()

Other Deep Learning: s_H2ODL(), s_TFN()
