| Function / dataset | Description |
|---|---|
| bind_tf_idf | Bind the term frequency and inverse document frequency of a... |
| cast_sparse | Create a sparse matrix from row names, column names, and... |
| corpus_tidiers | Tidiers for a corpus object from the quanteda package |
| dictionary_tidiers | Tidy dictionary objects from the quanteda package |
| document_term_casters | Casting a data frame to a DocumentTermMatrix,... |
| get_sentiments | Get a tidy data frame of a single sentiment lexicon |
| get_stopwords | Get a tidy data frame of a single stopword lexicon |
| lda_tidiers | Tidiers for LDA and CTM objects from the topicmodels package |
| mallet_tidiers | Tidiers for Latent Dirichlet Allocation models from the... |
| nma_words | English negators, modals, and adverbs |
| parts_of_speech | Parts of speech for English words from the Moby Project |
| reexports | Objects exported from other packages |
| reorder_within | Reorder an x or y axis within facets |
| sentiments | Sentiment lexicon from Bing Liu and collaborators |
| stm_tidiers | Tidiers for Structural Topic Models from the stm package |
| stop_words | Various lexicons for English stop words |
| tdm_tidiers | Tidy DocumentTermMatrix, TermDocumentMatrix, and related... |
| tidy.Corpus | Tidy a Corpus object from the tm package |
| tidytext-package | tidytext: Text Mining using 'dplyr', 'ggplot2', and Other... |
| tidy_triplet | Utility function to tidy a simple triplet matrix |
| unnest_character | Wrapper around unnest_tokens for characters and character... |
| unnest_ngrams | Wrapper around unnest_tokens for n-grams |
| unnest_ptb | Wrapper around unnest_tokens for Penn Treebank Tokenizer |
| unnest_regex | Wrapper around unnest_tokens for regular expressions |
| unnest_sentences | Wrapper around unnest_tokens for sentences, lines, and... |
| unnest_tokens | Split a column into tokens |
| unnest_tweets | Wrapper around unnest_tokens for tweets |
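As a quick orientation, `unnest_tokens` (and its `unnest_*` wrappers) is the usual entry point: it splits a text column into one token per row. A minimal sketch, using a made-up two-line data frame for illustration:

```r
library(dplyr)
library(tidytext)

# A small illustrative data frame; the column names are arbitrary
text_df <- tibble(
  line = 1:2,
  text = c("Because I could not stop for Death",
           "He kindly stopped for me")
)

# Split the `text` column into one lowercase word per row;
# the new token column is named `word`
text_df %>%
  unnest_tokens(word, text)
```

The wrappers listed above (`unnest_ngrams`, `unnest_sentences`, `unnest_regex`, and so on) call `unnest_tokens` with a preset `token` argument.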
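`get_stopwords` and `get_sentiments` both return tidy data frames, so they combine with tokenized text via ordinary dplyr joins. A sketch with a toy token table (the "bing" lexicon ships with tidytext):

```r
library(dplyr)
library(tidytext)

# Toy tokens standing in for the output of unnest_tokens()
tokens <- tibble(word = c("happy", "terrible", "the", "sunshine"))

# Drop stop words, then attach a sentiment label per word
tokens %>%
  anti_join(get_stopwords(), by = "word") %>%
  inner_join(get_sentiments("bing"), by = "word")
```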
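`bind_tf_idf` takes per-document word counts and appends `tf`, `idf`, and `tf_idf` columns. A minimal sketch, assuming counts that would normally come from `unnest_tokens()` followed by `count()`:

```r
library(dplyr)
library(tidytext)

# Hypothetical word counts for two documents "a" and "b"
word_counts <- tibble(
  document = c("a", "a", "b", "b"),
  word     = c("apple", "pie", "apple", "car"),
  n        = c(10, 3, 5, 8)
)

# Arguments are the term column, the document column, and the count column
word_counts %>%
  bind_tf_idf(word, document, n)
```

Terms that appear in every document (here "apple") get an idf of zero, so their tf-idf is zero as well.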
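The casters and tidiers are inverses of each other: `cast_dtm` (one of the document_term_casters) builds a tm DocumentTermMatrix from a tidy data frame, and `tidy()` (from the tdm_tidiers) turns such a matrix back into one row per document-term pair. A sketch, assuming the tm package is installed:

```r
library(dplyr)
library(tidytext)

word_counts <- tibble(
  document = c("a", "a", "b"),
  word     = c("apple", "pie", "car"),
  n        = c(10, 3, 8)
)

# Tidy data frame -> DocumentTermMatrix
dtm <- word_counts %>%
  cast_dtm(document, word, n)

# DocumentTermMatrix -> tidy data frame again
tidy(dtm)
```

The same pattern applies to `cast_sparse` (a sparse Matrix) and to the tidiers for quanteda, topicmodels, stm, and mallet objects.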
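`reorder_within` exists because `reorder()` alone cannot order bars independently inside each ggplot2 facet. A sketch with invented counts, pairing it with its companion scale:

```r
library(dplyr)
library(ggplot2)
library(tidytext)

top_words <- tibble(
  facet = c("a", "a", "b", "b"),
  word  = c("apple", "pie", "pie", "car"),
  n     = c(10, 3, 9, 8)
)

# reorder_within() orders words separately within each facet;
# scale_x_reordered() strips the internal suffix it adds to labels
ggplot(top_words, aes(reorder_within(word, n, facet), n)) +
  geom_col() +
  scale_x_reordered() +
  coord_flip() +
  facet_wrap(~ facet, scales = "free_y")
```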