lmullen/tokenizers: Fast, Consistent Tokenization of Natural Language Text

Convert natural language text into tokens. Includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank, regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers have a consistent interface, and the package is built on the 'stringi' and 'Rcpp' packages for fast yet correct tokenization in 'UTF-8'.

Getting started
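
To get a feel for the consistent interface, the sketch below assumes the package is already installed and calls a few of its exported functions (tokenize_words(), tokenize_ngrams(), tokenize_sentences(), count_words(), and chunk_text()); each tokenizer takes a character vector and returns a list with one element per input document.

library(tokenizers)

text <- "Convert natural language text into tokens. The tokenizers have a consistent interface."

# Word tokens: lowercased, punctuation stripped by default
tokenize_words(text)

# Shingled n-grams of two to three words
tokenize_ngrams(text, n = 3, n_min = 2)

# Sentence tokens
tokenize_sentences(text)

# Counting helpers return one count per input document
count_words(text)

# Split a longer text into chunks with roughly the same number of words
chunk_text(text, chunk_size = 10)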

Package details

Maintainer: Lincoln Mullen
License: MIT + file LICENSE
Version: 0.2.1
URL: https://lincolnmullen.com/software/tokenizers/, https://github.com/ropensci/tokenizers
Package repository: GitHub
Installation

Install the latest version of this package by entering the following in R:
install.packages("remotes")
remotes::install_github("lmullen/tokenizers")