SquaredLoss: Squared loss


Description

Each loss module has a forward method that takes in a batch of predictions Ypred (from the previous layer) and labels Y and returns a scalar loss value.

The NLL module, by contrast, has a backward method that returns dLdZ, the gradient of the loss with respect to the preactivation to SoftMax (note: not the activation!), since the SoftMax activation is always paired with the NLL loss.
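
As a rough illustration of this contract, here is a minimal sketch of a squared-loss module written with R6 (which the package's ClassModule hierarchy appears to be built on). It assumes the common conventions loss = sum((Ypred - Y)^2) and dLdZ = 2 * (Ypred - Y); the package's actual scaling may differ.

library(R6)

# Minimal sketch (not the package's actual implementation):
# assumes loss = sum((Ypred - Y)^2) and dLdZ = 2 * (Ypred - Y).
SquaredLossSketch <- R6Class("SquaredLossSketch",
  public = list(
    Ypred = NULL,  # cached predictions, needed by backward()
    Y = NULL,      # cached labels
    forward = function(Ypred, Y) {
      self$Ypred <- Ypred
      self$Y <- Y
      sum((Ypred - Y)^2)          # scalar loss over the whole batch
    },
    backward = function() {
      2 * (self$Ypred - self$Y)   # gradient w.r.t. the predictions
    }
  )
)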

Super class

neuralnetr::ClassModule -> SquaredLoss

Public fields

Ypred

the predicted Y vector

Y

the actual Y vector

Methods

Public methods

SquaredLoss$forward()
SquaredLoss$backward()
SquaredLoss$clone()

Method forward()

Calculate the loss.

Usage
SquaredLoss$forward(Ypred, Y)
Arguments
Ypred

the predicted Y vector.

Y

the actual Y vector.

Returns

loss as a scalar
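
For example, assuming the standard R6 constructor and a sum-of-squares convention (values shown are hypothetical):

loss <- SquaredLoss$new()
Ypred <- matrix(c(0.9, 0.2), nrow = 2)  # one column = one batch element
Y     <- matrix(c(1.0, 0.0), nrow = 2)
loss$forward(Ypred, Y)                  # 0.05 under a sum-of-squares convention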


Method backward()

Calculate the gradient of the loss with respect to the predictions.

Usage
SquaredLoss$backward()
Returns

dLdZ, the gradient of the loss with respect to the predictions, as a matrix of the same dimensions as Ypred (? x b)
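
Continuing the example above, and again assuming the convention dLdZ = 2 * (Ypred - Y):

loss$backward()                         # matrix(c(-0.2, 0.4), nrow = 2)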


Method clone()

The objects of this class are cloneable with this method.

Usage
SquaredLoss$clone(deep = FALSE)
Arguments
deep

Whether to make a deep clone.

See Also

Other loss: NLL

