Deep Learning Regression with Automated Parameter Tuning
Arguments

x: training feature matrix.

y: target matrix.

option: either "local" or "google" for hyper-parameter tuning.

num_layer: a vector of integers giving the numbers of hidden layers to test. Defaults to seq(1, 5, 1).

max_units: the maximum number of hidden units in a layer. Defaults to an optimized value based on the data.

start_unit: the minimum number of hidden units in a layer. Defaults to 5.

max_dropout: a number between 0 and 1 giving the maximum dropout rate in a layer. Defaults to 0.2.

min_dropout: a number between 0 and 1 giving the minimum dropout rate in a layer. Defaults to 0.

max_lr: maximum learning rate in a run. Defaults to 0.2.

min_lr: minimum learning rate in a run. Defaults to 0.001.

iteration_per_layer: number of parameter randomizations for a given number of hidden layers; more iterations explore a larger parameter space.

validation_split: fraction of the data used for validation. Defaults to 20 percent.

num_epoch: number of epochs to run during training.

num_patience: patience for the early-stopping criterion, i.e. the number of epochs without improvement before training stops.

machine_type: type of server to use; one of standard, standard_gpu, or standard_p100. For more, visit https://cloud.google.com/ml-engine/docs/training-overview#machine_type_table
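The page does not show how the "parameter randomizations" are drawn. Purely as an illustration (an assumption, not the package's actual code), one random draw of per-layer hyper-parameters within the documented bounds could look like this in R; `max_units <- 64` is a placeholder, since the real default is computed from the data:

```r
# Assumed illustration of one random hyper-parameter draw within the
# documented bounds; the package's actual sampling scheme may differ.
start_unit  <- 5                       # documented default
max_units   <- 64                      # placeholder; real default is data-based
min_dropout <- 0;     max_dropout <- 0.2
min_lr      <- 0.001; max_lr      <- 0.2

units   <- sample(start_unit:max_units, 1)      # hidden units in a layer
dropout <- runif(1, min_dropout, max_dropout)   # dropout rate for the layer
lr      <- runif(1, min_lr, max_lr)             # learning rate for the run
```

With iteration_per_layer such draws per layer count, the search tries iteration_per_layer * length(num_layer) configurations in total.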
Value

Returns a list with two components:

train_performance: a table of tuning parameters and model performance metrics.

best_model: a keras_model object with the optimal structure.
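The Usage block did not survive extraction, so the call below is only a sketch: the function name `tune_dl_regression` is a placeholder (the real name is not recoverable from this page), but the arguments and the returned components are the ones documented above.

```r
# NOTE: `tune_dl_regression` is a placeholder name -- the actual function
# name is missing from this page. Arguments follow the documentation above.
fit <- tune_dl_regression(
  x = x_train,                 # training feature matrix
  y = y_train,                 # target matrix
  option = "local",            # tune locally rather than on Google ML Engine
  num_layer = seq(1, 3, 1),    # test 1 to 3 hidden layers
  start_unit = 5,
  min_dropout = 0, max_dropout = 0.2,
  min_lr = 0.001, max_lr = 0.2,
  iteration_per_layer = 10,    # 10 random draws per layer count
  validation_split = 0.2,      # hold out 20 percent for validation
  num_epoch = 100,
  num_patience = 10            # early stopping after 10 stagnant epochs
)

fit$train_performance          # table of parameters and performance metrics
best <- fit$best_model         # keras_model object with the optimal structure
```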