# textmodel_nb: Naive Bayes classifier for texts

In quanteda: Quantitative Analysis of Textual Data

## Description

Fit a multinomial or Bernoulli Naive Bayes model, given a dfm and some training labels.
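For orientation, the standard multinomial Naive Bayes formulation behind this model can be written as follows, with the additive smoothing constant α corresponding to the `smooth` argument (the notation here is ours, not taken from the package documentation):

$$
\hat{P}(c \mid d) \;\propto\; P(c) \prod_{j=1}^{J} \hat{P}(w_j \mid c)^{\,n_{jd}},
\qquad
\hat{P}(w_j \mid c) = \frac{n_{jc} + \alpha}{\sum_{j'=1}^{J} \left( n_{j'c} + \alpha \right)}
$$

where \(n_{jd}\) is the count of feature \(j\) in document \(d\), \(n_{jc}\) is its total count across the training documents of class \(c\), and \(\alpha\) is `smooth`. The Bernoulli variant instead models the presence or absence of each feature in a document.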

## Usage

```r
textmodel_nb(x, y, smooth = 1, prior = c("uniform", "docfreq", "termfreq"),
             distribution = c("multinomial", "Bernoulli"))
```

## Arguments

- `x`: the dfm on which the model will be fit. Does not need to contain only the training documents.
- `y`: vector of training labels associated with each document in `x`. (These will be converted to factors if not already factors.)
- `smooth`: smoothing parameter for feature counts by class.
- `prior`: prior distribution on texts; one of `"uniform"`, `"docfreq"`, or `"termfreq"`. See Prior Distributions below.
- `distribution`: count model for text features, can be `"multinomial"` or `"Bernoulli"`. To fit a "binary multinomial" model, first convert the dfm to a binary matrix using `dfm_weight(x, scheme = "boolean")`.
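A minimal fitting sketch, using the small worked example from Manning, Raghavan, and Schütze (2008); the texts and labels below are chosen to illustrate the arguments, and an `NA` label marks a document that remains in the dfm but is not used for training:

```r
library(quanteda)

# Toy data: the "Chinese" example from Manning, Raghavan, and Schütze (2008);
# the fifth document is the one to be classified
txt <- c(d1 = "Chinese Beijing Chinese",
         d2 = "Chinese Chinese Shanghai",
         d3 = "Chinese Macao",
         d4 = "Tokyo Japan Chinese",
         d5 = "Chinese Chinese Chinese Tokyo Japan")
x <- dfm(tokens(txt))

# NA marks the held-out document: it stays in x but does not train the model
y <- factor(c("Y", "Y", "Y", "N", NA))

nb <- textmodel_nb(x, y, smooth = 1, prior = "docfreq")
nb$Pc        # class priors: with "docfreq", 3/4 for "Y" and 1/4 for "N"
predict(nb)  # predictions for all documents, including d5
```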

## Value

`textmodel_nb()` returns a list consisting of the following (where *I* is the total number of documents, *J* is the total number of features, and *k* is the total number of training classes):

- `call`: the original function call
- `PwGc`: the *k* × *J* matrix of probabilities of each word given the class (the empirical likelihood)
- `Pc`: the *k*-length named numeric vector of class prior probabilities
- `PcGw`: the *k* × *J* matrix of posterior class probabilities given each word
- `Pw`: the *J* × 1 vector of baseline word probabilities
- `x`: the *I* × *J* training dfm `x`
- `y`: the *I*-length training class vector `y`
- `distribution`: the `distribution` argument
- `prior`: the `prior` argument
- `smooth`: the value of the smoothing parameter
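These components fit together by Bayes' rule: `PcGw` is `PwGc` weighted by the class priors `Pc` and normalized over classes for each word. A hedged sketch of that check, continuing from the fitted `nb` object above (this is our reconstruction of the relationship, not code from the package):

```r
# P(c|w) is proportional to P(w|c) * P(c), normalized over classes within
# each word (column). nb$Pc has length k, so R's column-major recycling
# multiplies row i of the k x J matrix nb$PwGc by nb$Pc[i].
num <- nb$PwGc * nb$Pc
PcGw_check <- sweep(num, 2, colSums(num), "/")
all.equal(as.numeric(PcGw_check), as.numeric(nb$PcGw))  # expected TRUE
```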

## Prior distributions

Prior distributions refer to the prior probabilities assigned to the training classes, and the choice of prior distribution affects the calculation of the fitted probabilities. The default is uniform priors, which set the unconditional probability of observing any one class to be the same as that of observing any other class.

"Document frequency" means that the class priors will be taken from the relative proportions of the class documents used in the training set. This approach is so common that it is assumed in many examples, such as the worked example from Manning, Raghavan, and Schütze (2008) below. It is not the default in quanteda, however, since there may be nothing informative in the relative numbers of documents used to train a classifier other than the relative availability of the documents. When training classes are balanced in their number of documents (usually advisable), however, then the empirically computed "docfreq" would be equivalent to "uniform" priors.

Setting prior to "termfreq" makes the priors equal to the proportions of total feature counts found in the grouped documents in each training class, so that the classes with the largest number of features are assigned the largest priors. If the total count of features in each training class were the same, then "termfreq" and "uniform" priors would be equivalent.
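Continuing the toy example above, the three priors can be compared directly; the values in the comments are what we would expect given three "Y" and one "N" training documents, and should be treated as illustrative:

```r
textmodel_nb(x, y, prior = "uniform")$Pc   # each class gets 1/2
textmodel_nb(x, y, prior = "docfreq")$Pc   # document proportions: "N" = 1/4, "Y" = 3/4
textmodel_nb(x, y, prior = "termfreq")$Pc  # token proportions: "N" = 3/11, "Y" = 8/11
```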

## Author

Kenneth Benoit

## References

Manning, C. D., Raghavan, P., & Schütze, H. (2008). *Introduction to Information Retrieval*. Cambridge University Press. https://nlp.stanford.edu/IR-book/pdf/irbookonlinereading.pdf

Jurafsky, D., & Martin, J. H. (2018). "Naive Bayes and Sentiment Classification." Chapter 4 in *Speech and Language Processing*, 3rd edition draft of September 23, 2018. https://web.stanford.edu/~jurafsky/slp3/4.pdf