deep_lm: Deep Learning Regression with Automated Parameter Tuning

Description Usage Arguments Value

Description

Deep Learning Regression with Automated Parameter Tuning

Usage

deep_lm(x, y, option, num_layer = seq(1, 5, 1), max_units = NULL,
  start_unit = 5, max_dropout = 0.2, min_dropout = 0, max_lr = 0.2,
  min_lr = 0.001, iteration_per_layer = 5, validation_split = 0.2,
  num_epoch = 5, num_patience = 3, machine_type = "standard")
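
For illustration only (not taken from the package documentation), a minimal call might look like the sketch below; the synthetic data, the feature count, and the assumption of a working Keras/TensorFlow backend behind easyAI are placeholders.

library(easyAI)                                        # assumes the package is installed

set.seed(42)
x <- matrix(rnorm(200 * 10), nrow = 200)               # 200 samples, 10 features
y <- matrix(x %*% rnorm(10) + rnorm(200), ncol = 1)    # noisy linear target

fit <- deep_lm(x, y, option = "local")                 # tune hyper-parameters locally with defaults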

Arguments

x

training feature matrix

y

target matrix

option

either "local" or "google", selecting whether hyper-parameter tuning runs locally or on Google Cloud ML Engine

num_layer

A vector of integers giving the numbers of hidden layers to test. Defaults to seq(1, 5, 1)

max_units

The maximum number of hidden units per layer. Defaults to a value derived from the data

start_unit

The minimum number of hidden units per layer. Defaults to 5

max_dropout

A number between 0 and 1 indicating the maximum dropout rate in a layer. Defaults to 0.2

min_dropout

A number between 0 and 1 indicating the minimum dropout rate in a layer. Defaults to 0

max_lr

Maximum learning rate to sample in a run. Defaults to 0.2

min_lr

Minimum learning rate to sample in a run. Defaults to 0.001

iteration_per_layer

Number of random parameter draws for each candidate number of hidden layers. More iterations explore a larger parameter space. Defaults to 5

validation_split

Fraction of the data held out for validation. Defaults to 0.2 (20 percent)

num_epoch

Number of epochs to train for. Defaults to 5

num_patience

Patience (number of epochs without improvement) used by the early-stopping criterion. Defaults to 3

machine_type

Type of server to use: "standard", "standard_gpu", or "standard_p100". For the full list of machine types, see https://cloud.google.com/ml-engine/docs/training-overview#machine_type_table

Value

Returns a list object with two values.
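
As a sketch only (the specific values below are illustrative, not recommended settings), the tuning ranges described under Arguments can be narrowed to shrink the search space, and the returned list can be inspected with str():

fit <- deep_lm(
  x, y, option = "local",
  num_layer = seq(1, 3, 1),        # try 1 to 3 hidden layers
  max_dropout = 0.1, min_dropout = 0,
  max_lr = 0.05, min_lr = 0.001,
  iteration_per_layer = 10,        # more random draws per layer count
  num_epoch = 20, num_patience = 5
)
str(fit)                           # inspect the two returned values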

