A simple n-gram tokenizer (an n-gram is a contiguous sequence of n items from a given sequence of text) for use with the 'tm' package, with no 'rJava'/'RWeka' dependency.
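To illustrate the idea, here is a minimal word-level bigram/n-gram tokenizer in base R, the kind of function 'tm' accepts as a custom tokenizer. The function name `ngram_tokenize` is illustrative only and is not this package's actual API.

```r
# Sketch of a word-level n-gram tokenizer in base R.
# Splits the input on whitespace, then slides a window of n words
# over the result, joining each window back into a single token.
ngram_tokenize <- function(x, n = 2) {
  words <- unlist(strsplit(x, "\\s+"))
  if (length(words) < n) return(character(0))
  vapply(seq_len(length(words) - n + 1),
         function(i) paste(words[i:(i + n - 1)], collapse = " "),
         character(1))
}

ngram_tokenize("to be or not to be", n = 2)
# returns "to be" "be or" "or not" "not to" "to be"
```

A function like this can be passed to 'tm' (for example via the `tokenize` entry of the control list when building a term-document matrix), which is the use case the package targets without requiring 'rJava'/'RWeka'.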
| Author | Chung-hong Chan <[email protected]> |
| --- | --- |
| Date of publication | 2016-03-10 23:44:11 |
| Maintainer | Chung-hong Chan <[email protected]> |
| Package repository | CRAN |