s_TFN

Description

Train a Feedforward Neural Network using keras and tensorflow.

Usage
    s_TFN(
      x,
      y = NULL,
      x.test = NULL,
      y.test = NULL,
      class.weights = NULL,
      ifw = TRUE,
      ifw.type = 2,
      upsample = FALSE,
      downsample = FALSE,
      resample.seed = NULL,
      net = NULL,
      n.hidden.nodes = NULL,
      initializer = c("glorot_uniform", "glorot_normal", "he_uniform", "he_normal",
        "lecun_uniform", "lecun_normal", "random_uniform", "random_normal",
        "variance_scaling", "truncated_normal", "orthogonal", "zeros", "ones", "constant"),
      initializer.seed = NULL,
      dropout = 0,
      activation = c("relu", "selu", "elu", "sigmoid", "hard_sigmoid", "tanh", "exponential",
        "linear", "softmax", "softplus", "softsign"),
      kernel_l1 = 0.1,
      kernel_l2 = 0,
      activation_l1 = 0,
      activation_l2 = 0,
      batch.normalization = TRUE,
      output = NULL,
      loss = NULL,
      optimizer = c("rmsprop", "adadelta", "adagrad", "adam", "adamax", "nadam", "sgd"),
      learning.rate = NULL,
      metric = NULL,
      epochs = 100,
      batch.size = NULL,
      validation.split = 0.2,
      callback = keras::callback_early_stopping(patience = 150),
      scale = TRUE,
      x.name = NULL,
      y.name = NULL,
      print.plot = FALSE,
      plot.fitted = NULL,
      plot.predicted = NULL,
      plot.theme = rtTheme,
      question = NULL,
      verbose = TRUE,
      outdir = NULL,
      save.mod = ifelse(!is.null(outdir), TRUE, FALSE),
      ...
    )

Arguments
x: Numeric vector or matrix / data frame of features, i.e. independent variables.

y: Numeric vector of outcome, i.e. dependent variable.

x.test: Numeric vector or matrix / data frame of testing set features. Columns must correspond to columns in x.

y.test: Numeric vector of testing set outcome.
class.weights: Numeric vector: Class weights for training.
ifw: Logical: If TRUE, apply inverse frequency weighting (for Classification only). Note: If class.weights is provided, ifw is not used.
ifw.type: Integer {0, 1, 2}: 1: class weights as in 0 (plain inverse frequency weights), divided by min(class.weights); 2: class weights as in 0, divided by max(class.weights). See the sketch after this list for the arithmetic.
upsample: Logical: If TRUE, upsample cases to balance outcome classes (for Classification only). Note: upsampling will randomly sample with replacement if the length of the majority class is more than double the length of the class you are upsampling, thereby introducing randomness.

downsample: Logical: If TRUE, downsample majority class to match size of minority class.

resample.seed: Integer: If provided, will be used to set the seed during upsampling. Default = NULL (random seed).
net: Pre-defined keras network to be trained (optional). See the example at the end of this page.

n.hidden.nodes: Integer vector: Length must be equal to the number of hidden layers you wish to create. Can be zero, in which case you get a linear model. Default = N of features, i.e. NCOL(x).
initializer: Character: Initializer to use for each layer: "glorot_uniform", "glorot_normal", "he_uniform", "he_normal", "lecun_uniform", "lecun_normal", "random_uniform", "random_normal", "variance_scaling", "truncated_normal", "orthogonal", "zeros", "ones", "constant". Glorot is also known as Xavier initialization.

initializer.seed: Integer: Seed to use for each initializer for reproducibility.
dropout: Float, vector, (0, 1): Probability of dropping nodes. Can be a vector of length equal to N of layers, otherwise will be recycled. Default = 0

activation: String vector: Activation type to use: "relu", "selu", "elu", "sigmoid", "hard_sigmoid", "tanh", "exponential", "linear", "softmax", "softplus", "softsign". Defaults to "relu" for Classification and "tanh" for Regression.
kernel_l1: Float: l1 penalty on weights.

kernel_l2: Float: l2 penalty on weights.

activation_l1: Float: l1 penalty on layer output.

activation_l2: Float: l2 penalty on layer output.

batch.normalization: Logical: If TRUE, batch normalize after each hidden layer.
output: Character: Activation to use for the output layer. Can be any as in activation.

loss: Character: Loss to use. Default = "mean_squared_error" for Regression, "binary_crossentropy" for binary Classification, "sparse_categorical_crossentropy" for multiclass.
optimizer: Character: Optimization to use: "rmsprop", "adadelta", "adagrad", "adam", "adamax", "nadam", "sgd". Default = "rmsprop"

learning.rate: Float: Learning rate. Defaults depend on the optimizer used.
metric: Character: Metric used for evaluation during training. Default = "mse" for Regression, "accuracy" for Classification.

epochs: Integer: Number of epochs. Default = 100

batch.size: Integer: Batch size. Default = N of cases

validation.split: Float (0, 1): Proportion of training data to use for validation. Default = .2
callback: Function to be called by keras during fitting. Default = keras::callback_early_stopping(patience = 150)

scale: Logical: If TRUE, scale features before training. Column means and standard deviations are saved in the model object so that new data can be scaled the same way before prediction.
x.name: Character: Name for feature set.

y.name: Character: Name for outcome.

print.plot: Logical: If TRUE, produce a plot of the model's fitted and/or predicted values.

plot.fitted: Logical: If TRUE, plot True (y) vs Fitted.

plot.predicted: Logical: If TRUE, plot True (y.test) vs Predicted. Requires x.test and y.test.

plot.theme: Character: "zero", "dark", "box", "darkbox"
question: Character: The question you are attempting to answer with this model, in plain language.

verbose: Logical: If TRUE, print summary to screen.

outdir: Path to output directory. If defined, will save Predicted vs. True plot, if available, as well as full model output, if save.mod is TRUE.

save.mod: Logical: If TRUE, save all output to an RDS file in outdir.

...: Additional parameters.
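The inverse frequency weighting controlled by ifw and ifw.type reduces to simple arithmetic on class frequencies. A minimal sketch of that arithmetic, for illustration only (the variable names below are hypothetical, not rtemis internals):

    # Toy class counts for a binary Classification problem
    class_counts <- c(A = 90, B = 10)
    # Inverse frequency weights
    w <- 1 / (class_counts / sum(class_counts))
    w / min(w)  # ifw.type = 1: majority class weight becomes 1
    w / max(w)  # ifw.type = 2: minority class weight becomes 1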
Details

For more information on arguments and hyperparameters, see https://keras.rstudio.com/ and https://keras.io/. It is important to define the network structure and adjust hyperparameters based on your problem; you cannot expect the defaults to work on any given dataset.
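As a concrete starting point, here is a minimal sketch of a call that sets the network structure and early stopping explicitly. It assumes rtemis and keras are installed with a working TensorFlow backend, and uses synthetic data:

    library(rtemis)
    set.seed(2020)
    x <- rnorm(500)
    y <- x^2 + rnorm(500, sd = 0.5)
    # Two hidden layers; shorter early-stopping patience than the default
    mod <- s_TFN(
      x, y,
      n.hidden.nodes = c(8, 4),
      activation = "tanh",
      epochs = 200,
      callback = keras::callback_early_stopping(patience = 20)
    )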
Author(s)

E.D. Gennatas
See Also

train_cv for external cross-validation
Other Supervised Learning: s_AdaBoost(), s_AddTree(), s_BART(), s_BRUTO(), s_BayesGLM(), s_C50(), s_CART(), s_CTree(), s_EVTree(), s_GAM(), s_GBM(), s_GLM(), s_GLMNET(), s_GLMTree(), s_GLS(), s_H2ODL(), s_H2OGBM(), s_H2ORF(), s_HAL(), s_KNN(), s_LDA(), s_LM(), s_LMTree(), s_LightCART(), s_LightGBM(), s_MARS(), s_MLRF(), s_NBayes(), s_NLA(), s_NLS(), s_NW(), s_PPR(), s_PolyMARS(), s_QDA(), s_QRNN(), s_RF(), s_RFSRC(), s_Ranger(), s_SDA(), s_SGD(), s_SPLS(), s_SVM(), s_XGBoost(), s_XRF()
Other Deep Learning: d_H2OAE(), s_H2ODL()
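Examples

The net argument accepts a pre-defined keras model in place of the layers s_TFN would otherwise build. A minimal sketch, assuming the keras R package and reusing the synthetic x and y from the Details example; how s_TFN's own loss and optimizer arguments interact with a pre-built net is not documented here, so treat this as illustrative:

    library(keras)
    # A small single-input regression network (hypothetical architecture)
    net <- keras_model_sequential() %>%
      layer_dense(units = 8, activation = "relu", input_shape = 1) %>%
      layer_dense(units = 1, activation = "linear")
    mod <- s_TFN(x, y, net = net)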