loss: Loss functions

Description

Loss functions are minimized during model training. A loss object contains a loss function as well as a grad function that specifies its gradient. Loss classes like the negative binomial can also store parameters that can be updated during training.
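
As a rough illustration (a sketch only, not mistnet's actual class definitions), a loss object can be thought of as a loss function paired with its gradient with respect to the predictions:

# Hypothetical sketch: a loss "object" as a plain list pairing a loss
# function with its gradient (not the package's internal representation).
squared_loss_sketch <- list(
  loss = function(y, yhat) (y - yhat)^2,       # squared error
  grad = function(y, yhat) -2 * (y - yhat)     # d(loss) / d(yhat)
)

y    <- c(1.2, 0.4, -0.7)
yhat <- c(1.0, 0.5, -0.5)
squared_loss_sketch$loss(y, yhat)  # per-observation losses
squared_loss_sketch$grad(y, yhat)  # per-observation gradients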

bernoulliLoss: cross-entropy for 0-1 data. Equal to -(y * log(yhat) + (1 - y) * log(1 - yhat))
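
This is simply the negative Bernoulli log-likelihood, which can be checked against R's dbinom with size = 1:

y    <- c(0, 1, 1, 0)
yhat <- c(0.2, 0.9, 0.6, 0.4)
-(y * log(yhat) + (1 - y) * log(1 - yhat))     # formula above
-dbinom(y, size = 1, prob = yhat, log = TRUE)  # same values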

bernoulliRegLoss: cross-entropy loss, regularized by a beta-distributed prior. Note that a and b are not updated during training.

poissonLoss: loss based on the Poisson likelihood. See dpois.

nbLoss: loss based on the negative binomial likelihood. See dnbinom.

nbRegLoss: loss based on the negative binomial likelihood with a lognormal prior on mu. See dnbinom.
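
Assuming these are the corresponding negative log-likelihoods (plus a prior penalty term in the regularized version, not shown here), they can be written directly in terms of dpois and dnbinom:

y    <- c(0, 3, 7)
mu   <- c(0.5, 2.5, 6.0)   # predicted means
size <- 1.5                # negative binomial size (overdispersion) parameter

-dpois(y, lambda = mu, log = TRUE)             # cf. poissonLoss
-dnbinom(y, mu = mu, size = size, log = TRUE)  # cf. nbLoss (mean parameterization)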

squaredLoss: squared error, for linear models.

binomialLoss: loss for binomial responses.

Arguments

a: the a shape parameter in dbeta

b: the b shape parameter in dbeta

n: the number of Bernoulli trials (size in dbinom)
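
For reference, a and b correspond to shape1 and shape2 in dbeta, and n to size in dbinom; assuming binomialLoss is the binomial negative log-likelihood, the mapping looks like this:

a <- 2; b <- 5
dbeta(0.3, shape1 = a, shape2 = b, log = TRUE)  # beta prior log-density at a fitted probability

n    <- 10     # number of Bernoulli trials
y    <- 4      # observed successes
yhat <- 0.35   # predicted success probability
-dbinom(y, size = n, prob = yhat, log = TRUE)   # binomial negative log-likelihood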

