Loss functions are minimized during model training. Loss objects contain a loss function as well as a grad function specifying its gradient. Loss classes like the negative binomial can also store parameters that can be updated during training.
bernoulliLoss: cross-entropy loss for 0-1 data, equal to
-(y * log(yhat) + (1 - y) * log(1 - yhat))
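As a sketch of how such a loss object might look, the Bernoulli case could pair the cross-entropy above with its gradient in yhat. The list structure and names below are illustrative, not the package's actual API:

```r
# Illustrative loss object: a list holding the cross-entropy loss and
# its gradient with respect to yhat (not the package's real constructor).
bernoulli_loss_sketch <- list(
  loss = function(y, yhat) {
    -(y * log(yhat) + (1 - y) * log(1 - yhat))
  },
  grad = function(y, yhat) {
    # derivative of the cross-entropy above with respect to yhat
    -(y / yhat - (1 - y) / (1 - yhat))
  }
)
```

For y = 1 and yhat = 0.9 the loss reduces to -log(0.9).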
bernoulliRegLoss: cross-entropy loss, regularized by a beta-distributed prior. Note that a and b are not updated during training.
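A minimal sketch of what the beta-regularized version could look like, assuming the regularizer is the negative log-density of a Beta(a, b) prior on yhat with additive constants dropped (the function name and exact parameterization are assumptions, not the package's definition):

```r
# Illustrative: cross-entropy plus the (unnormalized) negative
# log-density of a Beta(a, b) prior on yhat.
bernoulli_reg_loss <- function(y, yhat, a, b) {
  ce    <- -(y * log(yhat) + (1 - y) * log(1 - yhat))
  prior <- -((a - 1) * log(yhat) + (b - 1) * log(1 - yhat))
  ce + prior
}
```

With a = b = 1 the prior term vanishes and the loss reduces to plain cross-entropy.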
poissonLoss: loss based on the Poisson likelihood; see dpois.
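A hedged sketch of a loss based on the Poisson likelihood, built directly on dpois (the function name and mean parameter mu are illustrative):

```r
# Illustrative: negative Poisson log-likelihood as a loss.
poisson_loss <- function(y, mu) {
  -dpois(y, lambda = mu, log = TRUE)
}
```

This equals mu - y * log(mu) + lfactorial(y), the usual Poisson negative log-likelihood.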
nbLoss: loss based on the negative binomial likelihood; see dnbinom.
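A sketch of the negative binomial case via dnbinom, assuming the mean/dispersion (mu, size) parameterization; the function name is illustrative:

```r
# Illustrative: negative binomial negative log-likelihood,
# parameterized by mean mu and dispersion size.
nb_loss <- function(y, mu, size) {
  -dnbinom(y, size = size, mu = mu, log = TRUE)
}
```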
nbRegLoss: loss based on the negative binomial likelihood with a lognormal prior on mu; see dnbinom.
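One way to sketch the regularized variant is to add the negative log-density of a lognormal prior on mu to the negative binomial loss; the function name and the meanlog/sdlog defaults below are assumptions, not the package's definition:

```r
# Illustrative: negative binomial loss plus a lognormal penalty on mu.
nb_reg_loss <- function(y, mu, size, meanlog = 0, sdlog = 1) {
  -dnbinom(y, size = size, mu = mu, log = TRUE) -
    dlnorm(mu, meanlog = meanlog, sdlog = sdlog, log = TRUE)
}
```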
squaredLoss: squared error, for linear models.
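The squared-error case, with its gradient, can be sketched as follows (names are illustrative):

```r
# Illustrative: squared-error loss and its gradient in yhat.
squared_loss <- function(y, yhat) (y - yhat)^2
squared_grad <- function(y, yhat) -2 * (y - yhat)
```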
binomialLoss: loss for binomial responses.
Arguments:

a: the first shape parameter of the beta prior
b: the second shape parameter of the beta prior
n: the number of Bernoulli trials (for binomialLoss)
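A sketch of the binomial case via dbinom, assuming n is the number of trials and yhat the success probability (names are illustrative); with n = 1 it reduces to the Bernoulli cross-entropy:

```r
# Illustrative: negative binomial-likelihood loss for y successes
# out of n trials with success probability yhat.
binomial_loss <- function(y, yhat, n) {
  -dbinom(y, size = n, prob = yhat, log = TRUE)
}
```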