dnnet.backprop.r: Back Propagation

View source: R/2-2-dnnet.R


Back Propagation

Description

Back Propagation

Usage

dnnet.backprop.r(
  n.hidden,
  w.ini,
  load.param,
  initial.param,
  x,
  y,
  w,
  valid,
  x.valid,
  y.valid,
  w.valid,
  activate,
  activate_,
  n.epoch,
  n.batch,
  model.type,
  learning.rate,
  l1.reg,
  l2.reg,
  early.stop,
  early.stop.det,
  learning.rate.adaptive,
  rho,
  epsilon,
  beta1,
  beta2,
  loss.f
)
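Below is a minimal, illustrative sketch of a direct call. This routine is normally invoked internally by dnnet, so every value here is an assumption chosen only to show the roles of the arguments (in particular model.type = "regression", loss.f = "mse", and passing activate/activate_ as R functions are unverified guesses).

    ## Illustrative sketch only; all values are assumptions, not verified defaults.
    ## dnnet.backprop.r may be internal; if so, use deepTL:::dnnet.backprop.r.
    library(deepTL)

    set.seed(1)
    x <- matrix(rnorm(200), nrow = 50, ncol = 4)   # training predictors
    y <- rnorm(50)                                 # continuous response
    w <- rep(1, 50)                                # equal observation weights

    sigmoid  <- function(z) 1 / (1 + exp(-z))              # activation
    sigmoid_ <- function(z) sigmoid(z) * (1 - sigmoid(z))  # its first derivative

    fit <- dnnet.backprop.r(
      n.hidden = c(10, 5),              # two hidden layers with 10 and 5 nodes
      w.ini = 0.1,                      # initial weight parameter
      load.param = FALSE, initial.param = NULL,
      x = x, y = y, w = w,
      valid = FALSE, x.valid = NULL, y.valid = NULL, w.valid = NULL,
      activate = sigmoid, activate_ = sigmoid_,
      n.epoch = 100, n.batch = 32,
      model.type = "regression",        # assumed label for a continuous outcome
      learning.rate = 0.001, l1.reg = 0, l2.reg = 0,
      early.stop = FALSE, early.stop.det = 10,
      learning.rate.adaptive = "adam",
      rho = 0.9, epsilon = 1e-8, beta1 = 0.9, beta2 = 0.999,
      loss.f = "mse"                    # assumed loss name
    )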

Arguments

n.hidden

A numeric vector giving the number of nodes in each hidden layer.

w.ini

Initial weight parameter.

load.param

Whether initial parameters are loaded into the model.

initial.param

The initial parameters to be loaded.

x

The matrix of predictors (training set).

y

The response (training set).

w

A vector of observation weights (training set).

valid

Whether a validation set is provided.

x.valid

The matrix of predictors for the validation set.

y.valid

The response for the validation set.

w.valid

Observation weights for the validation set.

activate

The activation function.

activate_

The first derivative of the activation function.

n.epoch

Maximum number of epochs.

n.batch

The batch size for mini-batch gradient descent.

model.type

Type of model.

learning.rate

The initial learning rate, 0.001 by default; if "adam" is chosen as the adaptive learning-rate method, 0.1 by default.

l1.reg

Weight for L1 regularization (optional).

l2.reg

Weight for L2 regularization (optional).

early.stop

Indicates whether early stopping is used (only when a validation set exists).

early.stop.det

The number of consecutive epochs of increasing validation loss that triggers early stopping (see the sketch at the end of this page).

learning.rate.adaptive

The adaptive learning-rate adjustment method, one of "constant", "adadelta", "adagrad", "momentum", or "adam" (see the Adam sketch after this argument list).

rho

A parameter used in momentum.

epsilon

A parameter used in Adagrad and Adam.

beta1

A parameter used in Adam.

beta2

A parameter used in Adam.

loss.f

Loss function of choice.
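The adaptive learning-rate parameters above can be read against the textbook Adam update sketched below. This is a generic formulation of Adam, not necessarily the package's exact implementation; the default learning.rate of 0.1 mirrors the documented default when "adam" is chosen.

    ## Generic (textbook) Adam update for one weight matrix W with gradient g,
    ## shown only to illustrate how learning.rate, beta1, beta2, epsilon interact.
    adam_step <- function(W, g, m, v, t,
                          learning.rate = 0.1, beta1 = 0.9, beta2 = 0.999,
                          epsilon = 1e-8) {
      m <- beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
      v <- beta2 * v + (1 - beta2) * g^2      # second-moment estimate
      m.hat <- m / (1 - beta1^t)              # bias corrections for step t
      v.hat <- v / (1 - beta2^t)
      W <- W - learning.rate * m.hat / (sqrt(v.hat) + epsilon)
      list(W = W, m = m, v = v)
    }

    ## One update of a 2 x 2 weight matrix:
    upd <- adam_step(W = matrix(0, 2, 2), g = matrix(1, 2, 2),
                     m = matrix(0, 2, 2), v = matrix(0, 2, 2), t = 1)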

Value

Returns a list of results to dnnet.
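For early.stop and early.stop.det, the usual convention is sketched below with toy per-epoch validation losses. This is a generic illustration of the idea, not the package's exact code.

    ## Generic early-stopping sketch: training stops once the validation loss
    ## has increased for early.stop.det consecutive epochs (toy losses here).
    early.stop.det <- 3
    valid.loss <- c(1.00, 0.80, 0.70, 0.72, 0.75, 0.78, 0.80)  # toy losses
    n.increase <- 0
    for (epoch in seq_along(valid.loss)) {
      if (epoch > 1 && valid.loss[epoch] > valid.loss[epoch - 1]) {
        n.increase <- n.increase + 1
      } else {
        n.increase <- 0
      }
      if (n.increase >= early.stop.det) {
        message("Early stop at epoch ", epoch)
        break
      }
    }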

