tokenizers: Fast, Consistent Tokenization of Natural Language Text
Version 0.2.1

Convert natural language text into tokens. Includes tokenizers for shingled n-grams, skip n-grams, words, word stems, sentences, paragraphs, characters, shingled characters, lines, tweets, Penn Treebank tokens, and regular expressions, as well as functions for counting characters, words, and sentences, and a function for splitting longer texts into separate documents, each with the same number of words. The tokenizers have a consistent interface, and the package is built on the 'stringi' and 'Rcpp' packages for fast yet correct tokenization in 'UTF-8'.
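Each tokenizer follows the same pattern: it takes a character vector (or a list of documents) and returns a list of character vectors of tokens, one element per input document. A minimal sketch of that interface, using a few of the package's exported functions (the sample text is made up for illustration):

library(tokenizers)

text <- "The tokenizers all share one interface. Pass in a character vector and get back a list of tokens."

tokenize_words(text)                     # words, lowercased with punctuation stripped by default
tokenize_ngrams(text, n = 3, n_min = 2)  # shingled n-grams of two to three words
tokenize_sentences(text)                 # sentences
count_words(text)                        # count words without keeping the tokens
chunk_text(text, chunk_size = 10)        # split a longer text into 10-word documents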

Package details

Author: Lincoln Mullen [aut, cre] (<https://orcid.org/0000-0001-5103-6917>), Os Keyes [ctb] (<https://orcid.org/0000-0001-5196-609X>), Dmitriy Selivanov [ctb], Jeffrey Arnold [ctb] (<https://orcid.org/0000-0001-9953-3904>), Kenneth Benoit [ctb] (<https://orcid.org/0000-0002-0797-564X>)
Date of publication: 2018-03-29 20:07:40 UTC
Maintainer: Lincoln Mullen <[email protected]>
License: MIT + file LICENSE
Version: 0.2.1
URL: https://lincolnmullen.com/software/tokenizers/
Package repository: CRAN
Installation

Install the latest version of this package by entering the following in R:

install.packages("tokenizers")

