View source: R/2-5-block-dnn.R
dnnet_block
Fit a Blocked Feedforward Deep Neural Network Model for Regression or Classification
Usage

dnnet_block(
  train,
  validate = NULL,
  load.param = FALSE,
  initial.param = NULL,
  norm.x = TRUE,
  norm.y = ifelse(is.factor(train@y), FALSE, TRUE),
  activate = "relu",
  n.hidden = list(dim(train@x)[2], 10, 10),
  learning.rate = ifelse(learning.rate.adaptive %in% c("adam"), 0.001, 0.01),
  l1.reg = 0,
  l2.reg = 0,
  n.batch = 100,
  n.epoch = 100,
  early.stop = ifelse(is.null(validate), FALSE, TRUE),
  early.stop.det = 1000,
  plot = FALSE,
  accel = c("rcpp", "none")[1],
  learning.rate.adaptive = c("constant", "adadelta", "adagrad", "momentum", "adam")[5],
  rho = c(0.9, 0.95, 0.99, 0.999)[ifelse(learning.rate.adaptive == "momentum", 1, 3)],
  epsilon = c(10^-10, 10^-8, 10^-6, 10^-4)[2],
  beta1 = 0.9,
  beta2 = 0.999,
  loss.f = ifelse(is.factor(train@y), "logit", "mse")
)
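A minimal call might look like the sketch below. Building the dnnetInput training object via new() is an assumption here (its required slots are not documented on this page, and the package may provide a dedicated importer); the data are simulated purely for illustration.

library(methods)
set.seed(1)
x <- matrix(rnorm(200 * 10), 200, 10)   # 200 samples, 10 features
y <- rnorm(200)                         # continuous response, so regression

train <- new("dnnetInput", x = x, y = y)  # assumed constructor

# n.hidden mirrors the default: a list of numeric vectors (one per block),
# starting from the input layer.
fit <- dnnet_block(
  train,
  n.hidden = list(dim(train@x)[2], 10, 10),
  activate = "relu",
  learning.rate.adaptive = "adam",
  n.epoch = 50
)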
Arguments

train: A dnnetInput object as the training set.
validate: A dnnetInput object as the validation set; optional (NULL by default).
load.param: A boolean variable indicating whether initial parameters should be loaded into the model.
initial.param: The initial parameters to be loaded.
norm.x: A boolean variable indicating whether to normalize the input matrix.
norm.y: A boolean variable indicating whether to normalize the response (if continuous).
activate: Activation function; one of "sigmoid", "tanh", "relu", "prelu", "elu", or "celu".
n.hidden: A list of numeric vectors specifying the blocked hidden-layer structure, starting from the input layer.
learning.rate: Initial learning rate; 0.001 by default when "adam" is chosen as the adaptive learning-rate method, 0.01 otherwise.
l1.reg: Weight for L1 regularization; optional.
l2.reg: Weight for L2 regularization; optional.
n.batch: Batch size for mini-batch gradient descent.
n.epoch: Maximum number of epochs.
early.stop: Whether to use early stopping; available only when a validation set is provided.
early.stop.det: Number of consecutive epochs of increasing validation loss that triggers early stopping (the rule is sketched below, after this argument list).
plot: Whether to plot the loss during training.
accel: "rcpp" (default) to use the Rcpp implementation of back-propagation, or "none" to use the pure R implementation.
learning.rate.adaptive: Adaptive learning-rate method; one of "constant", "adadelta", "adagrad", "momentum", or "adam".
rho: A decay parameter used by the "momentum" and "adadelta" methods.
epsilon: A small constant for numerical stability used in "adagrad" and "adam".
beta1: Exponential decay rate for the first-moment estimate in "adam" (see the Adam update sketch below).
beta2: Exponential decay rate for the second-moment estimate in "adam".
loss.f: Loss function; "logit" by default for a factor response (classification) and "mse" for a continuous response (regression).
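For intuition, the early-stopping rule referenced under early.stop.det can be written out as below. This is an illustrative sketch of the documented rule, not the package's internal code; validation_loss() is a hypothetical placeholder for the loss computed on the validation set after each epoch.

n.epoch <- 100
early.stop.det <- 1000
best.loss <- Inf
n.worse <- 0
for (epoch in seq_len(n.epoch)) {
  loss <- validation_loss(epoch)          # hypothetical placeholder
  if (loss < best.loss) {
    best.loss <- loss
    n.worse <- 0                          # reset whenever the loss improves
  } else {
    n.worse <- n.worse + 1                # count epochs without improvement
    if (n.worse >= early.stop.det) break  # stop after early.stop.det such epochs
  }
}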
Value

Returns a DnnModelObj object.
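The arguments learning.rate, beta1, beta2, and epsilon play their standard roles in the Adam update. The sketch below shows the textbook Adam step for a single weight; it illustrates the general algorithm, not this package's internal implementation.

adam_step <- function(w, g, m, v, t,
                      learning.rate = 0.001,
                      beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8) {
  m <- beta1 * m + (1 - beta1) * g      # first-moment estimate, decayed by beta1
  v <- beta2 * v + (1 - beta2) * g^2    # second-moment estimate, decayed by beta2
  m.hat <- m / (1 - beta1^t)            # bias corrections for step t
  v.hat <- v / (1 - beta2^t)
  w <- w - learning.rate * m.hat / (sqrt(v.hat) + epsilon)  # epsilon avoids division by zero
  list(w = w, m = m, v = v)             # return updated weight and moments
}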
See Also

dnnet-class, dnnetInput-class, actF