wmultinomial: wmultinomial function

View source: R/cost_functions.R


wmultinomial function

Description

A function to evaluate the weighted multinomial loss function and its derivative, for use when training a neural network. This is equivalent to a multinomial cost function with a Dirichlet prior on the probabilities. Its effect is to regularise the estimation: if we a priori expect more of one particular category than another, this expectation can be built into the objective.

Usage

wmultinomial(w, batchsize)

Arguments

w

a vector of weights whose length is equal to the output length of the net

batchsize

the size of the batch used in inference. WARNING: ensure this matches the actual batch size used!

Value

a list object with elements that are functions, evaluating the loss and the derivative
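The returned list can be sketched in use as follows. This is a minimal, hedged example: the prior-weight values are illustrative, and the element names of the returned list (e.g. cost and gradient functions) are assumptions based on this page and on the analogous cost-function objects listed under See Also (such as Qloss), not a definitive deepNN API reference.

```r
library(deepNN)

# Suppose a 3-class problem where, a priori, we expect class 1 roughly
# twice as often as classes 2 and 3 (illustrative weights)
w <- c(2, 1, 1)

# batchsize must match the batch size actually used in training (see WARNING)
loss <- wmultinomial(w, batchsize = 32)

# Per the Value section, loss is a list whose elements are functions
# evaluating the weighted multinomial loss and its derivative; these
# can then be supplied to train() as the cost function.
```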

References

  1. Ian Goodfellow, Yoshua Bengio, Aaron Courville, Francis Bach. Deep Learning. (2016)

  2. Terrence J. Sejnowski. The Deep Learning Revolution (The MIT Press). (2018)

  3. Neural Networks YouTube playlist by 3Blue1Brown: https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

  4. http://neuralnetworksanddeeplearning.com/

See Also

network, train, backprop_evaluate, MLP_net, backpropagation_MLP, Qloss, no_regularisation, L1_regularisation, L2_regularisation


deepNN documentation built on Aug. 25, 2023, 5:14 p.m.