knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)

Package overview

In natural language processing, tokenization is the process of breaking human-readable text into machine-readable components. The most obvious way to tokenize a text is to split it into words, but there are many other ways to tokenize a text, the most useful of which are provided by this package.

The tokenizers in this package have a consistent interface. They all take either a character vector of any length or a list where each element is a character vector of length one. The idea is that each element comprises one text. Each function then returns a list of the same length as the input, where each element of the list contains the tokens generated from the corresponding text. If the input character vector or list is named, the names are preserved, so that they can serve as document identifiers.
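For example, a short named vector shows the shape of the input and output (the two sample texts here are purely illustrative):

```r
library(tokenizers)

# Each element of the named vector is one text.
texts <- c(doc1 = "One fish, two fish.", doc2 = "Red fish, blue fish.")

tokens <- tokenize_words(texts)
length(tokens)  # one list element per input text
names(tokens)   # "doc1" "doc2" -- the names carry over as identifiers
```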

Using the following sample text, the rest of this vignette demonstrates the different kinds of tokenizers in this package.

library(tokenizers)
options(max.print = 25)

james <- paste0(
  "The question thus becomes a verbal one\n",
  "again; and our knowledge of all these early stages of thought and feeling\n",
  "is in any case so conjectural and imperfect that farther discussion would\n",
  "not be worth while.\n",
  "\n",
  "Religion, therefore, as I now ask you arbitrarily to take it, shall mean\n",
  "for us _the feelings, acts, and experiences of individual men in their\n",
  "solitude, so far as they apprehend themselves to stand in relation to\n",
  "whatever they may consider the divine_. Since the relation may be either\n",
  "moral, physical, or ritual, it is evident that out of religion in the\n",
  "sense in which we take it, theologies, philosophies, and ecclesiastical\n",
  "organizations may secondarily grow.\n"
)

Character and character-shingle tokenizers

The character tokenizer splits texts into individual characters.

tokenize_characters(james)[[1]] 

You can also tokenize into character-based shingles.

tokenize_character_shingles(james, n = 3, n_min = 3, 
                            strip_non_alphanum = FALSE)[[1]][1:20]

Word and word-stem tokenizers

The word tokenizer splits texts into words.

tokenize_words(james)

Word stemming is provided by the SnowballC package.

tokenize_word_stems(james)

You can also provide a vector of stopwords which will be omitted. The stopwords package, which contains stopwords for many languages from several sources, is recommended. This argument also works with the n-gram and skip n-gram tokenizers.

library(stopwords)
tokenize_words(james, stopwords = stopwords::stopwords("en"))

An alternative word tokenizer often used in NLP, which preserves punctuation as separate tokens and splits common English contractions, is the Penn Treebank tokenizer.

tokenize_ptb(james)

N-gram and skip n-gram tokenizers

An n-gram is a contiguous sequence of words containing at least n_min words and at most n words. This function will generate all such combinations of n-grams, omitting stopwords if desired.

tokenize_ngrams(james, n = 5, n_min = 2,
                stopwords = stopwords::stopwords("en"))

A skip n-gram is like an n-gram in that it takes the n and n_min parameters. But rather than returning only contiguous sequences of words, it also returns sequences that skip over words, with gaps of between 0 and k words. This function generates all such sequences, again omitting stopwords if desired. Note that the number of tokens returned can be very large.

tokenize_skip_ngrams(james, n = 5, n_min = 2, k = 2,
                     stopwords = stopwords::stopwords("en"))

Sentence and paragraph tokenizers

Sometimes it is desirable to split texts into sentences or paragraphs prior to tokenizing into other forms.

tokenize_sentences(james) 
tokenize_paragraphs(james)

Text chunking

When one has a very long document, it is sometimes desirable to split the document into smaller chunks of roughly equal length. This function chunks a document and gives each of the chunks an ID to indicate its order. These chunks can then be further tokenized.
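The chunking and counting examples below use the full text of Moby Dick stored in a character vector named mobydick, which is not created in this excerpt. As a minimal sketch, assuming a plain-text copy of the novel saved as mobydick.txt (the file name and location are assumptions, not something supplied by the package), it could be loaded like this:

```r
# Assumes a plain-text copy of the novel at "mobydick.txt";
# the path is illustrative only.
mobydick <- paste(readLines("mobydick.txt"), collapse = "\n")
```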

chunks <- chunk_text(mobydick, chunk_size = 100, doc_id = "mobydick")
length(chunks)
chunks[5:6]
tokenize_words(chunks[5:6])

Counting words, characters, sentences

The package also offers functions for counting words, characters, and sentences, in a format that works nicely with the rest of the functions in the package.

count_words(mobydick)
count_characters(mobydick)
count_sentences(mobydick)


lmullen/tokenizers documentation built on March 28, 2024, 11:12 a.m.