d_H2OAE    R Documentation

Train an Autoencoder using h2o::h2o.deeplearning

Description

Train an autoencoder using h2o::h2o.deeplearning. You can inspect training in the H2O Flow at [ip]:[port]; the default IP:port is "localhost:54321", so if running on localhost, point your web browser to localhost:54321.

Usage
d_H2OAE(
  x,
  x.test = NULL,
  x.valid = NULL,
  ip = "localhost",
  port = 54321,
  n.hidden.nodes = c(ncol(x), 3, ncol(x)),
  extract.layer = ceiling(length(n.hidden.nodes)/2),
  epochs = 5000,
  activation = "Tanh",
  loss = "Automatic",
  input.dropout.ratio = 0,
  hidden.dropout.ratios = rep(0, length(n.hidden.nodes)),
  learning.rate = 0.005,
  learning.rate.annealing = 1e-06,
  l1 = 0,
  l2 = 0,
  stopping.rounds = 50,
  stopping.metric = "AUTO",
  scale = TRUE,
  center = TRUE,
  n.cores = rtCores,
  verbose = TRUE,
  save.mod = FALSE,
  outdir = NULL,
  ...
)
Arguments

x
    Vector / Matrix / Data Frame: Training set predictors

x.test
    Vector / Matrix / Data Frame: Testing set predictors

x.valid
    Vector / Matrix / Data Frame: Validation set predictors

ip
    Character: IP address of H2O server. Default = "localhost"

port
    Integer: Port number for server. Default = 54321
n.hidden.nodes
    Integer vector of length equal to the number of hidden layers you wish to create
extract.layer
    Integer: Which layer to extract. For a regular autoencoder, this is the middle (bottleneck) layer. Default = ceiling(length(n.hidden.nodes)/2)
epochs
    Integer: How many times to iterate through the dataset. Default = 5000

activation
    Character: Activation function to use: "Tanh" (Default), "TanhWithDropout", "Rectifier", "RectifierWithDropout", "Maxout", "MaxoutWithDropout"

loss
    Character: "Automatic" (Default), "CrossEntropy", "Quadratic", "Huber", "Absolute"

input.dropout.ratio
    Float (0, 1): Dropout ratio for inputs
hidden.dropout.ratios
    Vector, Float (0, 1): Dropout ratios for hidden layers, one per hidden layer
learning.rate
    Float: Learning rate. Default = 0.005

learning.rate.annealing
    Float: Learning rate annealing. Default = 1e-06

l1
    Float (0, 1): L1 regularization (introduces sparseness, i.e. sets many weights to 0; reduces variance, increases generalizability)

l2
    Float (0, 1): L2 regularization (prevents very large absolute weights; reduces variance, increases generalizability)
stopping.rounds
    Integer: Stop if the simple moving average of this many rounds of the stopping.metric does not improve. Default = 50
stopping.metric
    Character: Stopping metric to use: "AUTO", "deviance", "logloss", "MSE", "RMSE", "MAE", "RMSLE", "AUC", "lift_top_group", "misclassification", "mean_per_class_error". Default = "AUTO" ("logloss" for Classification, "deviance" for Regression)
scale
    Logical: If TRUE, scale input before training autoencoder. Default = TRUE

center
    Logical: If TRUE, center input before training autoencoder. Default = TRUE

n.cores
    Integer: Number of cores to use. Default = rtCores

verbose
    Logical: If TRUE, print summary to screen. Default = TRUE
save.mod
    Logical: If TRUE, save all output to an RDS file in outdir. Default = FALSE
outdir
    Path to output directory. If defined, will save the Predicted vs. True plot, if available, as well as full model output, if save.mod = TRUE
...
    Additional arguments to pass to h2o::h2o.deeplearning
Value

rtDecom object
Author(s)

E.D. Gennatas
See Also

decom

Other Decomposition: d_H2OGLRM(), d_ICA(), d_Isomap(), d_KPCA(), d_LLE(), d_MDS(), d_NMF(), d_PCA(), d_SPCA(), d_SVD(), d_TSNE(), d_UMAP()

Other Deep Learning: s_H2ODL(), s_TFN()
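Examples

A minimal usage sketch. This is illustrative only: it requires the h2o package and the ability to start (or connect to) an H2O server, and the example data (iris) and bottleneck size are arbitrary choices, not package defaults.

```r
## Not run:
library(rtemis)

# Numeric predictors only; an autoencoder reconstructs its own input
x <- iris[, 1:4]

# Train a 4-2-4 autoencoder; extract.layer = 2 selects the 2-node
# bottleneck layer, giving a 2-dimensional representation of x
iris_ae <- d_H2OAE(
  x,
  n.hidden.nodes = c(4, 2, 4),
  extract.layer = 2,
  epochs = 1000
)

iris_ae  # rtDecom object; prints a summary
## End(Not run)
```

Since this launches an H2O cluster on localhost:54321 by default, you can watch training progress in the H2O Flow by pointing a browser at that address.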