unnest_tokens: Split a column into tokens

View source: R/unnest_tokens.R

Description

Split a column into tokens, flattening the table into one token per row. This function supports non-standard evaluation through the tidyeval framework.

Usage

unnest_tokens(
  tbl,
  output,
  input,
  token = "words",
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...
)

Arguments

tbl

A data frame

output

Output column to be created, as a string or symbol.

input

Input column that gets split, as a string or symbol.

The output/input arguments are passed by expression and support quasiquotation; you can unquote strings and symbols.
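
For example, a minimal sketch of unquoting column names held in variables (the data frame and column names below are illustrative):

library(dplyr)
library(tidytext)
library(rlang)

df <- tibble(txt = "an example line of text")
out_col <- "word"  # output column name held as a string
in_col <- "txt"    # input column name held as a string

# unquote the symbols so unnest_tokens() sees them as column names
df %>%
  unnest_tokens(!!sym(out_col), !!sym(in_col))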

token

Unit for tokenizing, or a custom tokenizing function. Built-in options are "words" (default), "characters", "character_shingles", "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", "regex", and "ptb" (Penn Treebank). If a function, should take a character vector and return a list of character vectors of the same length.
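
As a minimal sketch, a custom tokenizer satisfying this contract (the function and data below are illustrative):

library(dplyr)
library(tidytext)

# takes a character vector and returns a list of character vectors
# of the same length: one element per input string
split_on_commas <- function(x) strsplit(x, ",\\s*")

tibble(txt = c("a, b, c", "d, e")) %>%
  unnest_tokens(item, txt, token = split_on_commas)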

format

Either "text", "man", "latex", "html", or "xml". When the format is "text", this function uses the tokenizers package. If not "text", this uses the hunspell tokenizer, and can tokenize only by "word".

to_lower

Whether to convert tokens to lowercase.

drop

Whether the original input column should be dropped. Ignored if the original input and new output column have the same name.
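
For example, a sketch of keeping the original text alongside the tokens (illustrative data):

library(dplyr)
library(tidytext)

tibble(txt = "keep the source text") %>%
  unnest_tokens(word, txt, drop = FALSE)  # the txt column is retained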

collapse

A character vector of variables to collapse text across, or NULL.

For tokens like n-grams or sentences, text can be collapsed across rows within variables specified by collapse before tokenization. In tidytext 0.2.7, the default behavior for collapse = NULL changed to be more consistent: text is no longer collapsed when collapse = NULL.

Grouping data specifies variables to collapse across in the same way as the collapse argument, but you cannot use both the collapse argument and grouped data at once. Collapsing applies mostly to token options of "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", or "regex"; an example follows.
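
For example, a minimal sketch of collapsing within documents so that no bigram spans a document boundary (the doc column below is illustrative):

library(dplyr)
library(tidytext)

df <- tibble(doc = c(1, 1, 2),
             txt = c("first line", "second line", "another document"))

# text is pasted together within each value of `doc` before bigrams
# are computed, so no bigram crosses a document boundary
df %>%
  unnest_tokens(bigram, txt, token = "ngrams", n = 2, collapse = "doc")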

...

Extra arguments passed on to tokenizers, such as strip_punct for "words", n and k for "ngrams" and "skip_ngrams", and pattern for "regex".
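
For instance, a minimal sketch of forwarding strip_punct to the word tokenizer (illustrative data):

library(dplyr)
library(tidytext)

# strip_punct is passed on to tokenizers::tokenize_words()
tibble(txt = "Hello, world!") %>%
  unnest_tokens(word, txt, strip_punct = FALSE)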

Details

If format is anything other than "text", this uses the hunspell::hunspell_parse() tokenizer instead of the tokenizers package. This does not yet have support for tokenizing by any unit other than words.

Support for token = "tweets" was removed in tidytext 0.4.0 because of changes in upstream dependencies.

Examples

library(dplyr)
library(janeaustenr)

d <- tibble(txt = prideprejudice)
d

d %>%
  unnest_tokens(word, txt)

d %>%
  unnest_tokens(sentence, txt, token = "sentences")

d %>%
  unnest_tokens(ngram, txt, token = "ngrams", n = 2)

d %>%
  unnest_tokens(chapter, txt, token = "regex", pattern = "Chapter [\\d]")

d %>%
  unnest_tokens(shingle, txt, token = "character_shingles", n = 4)

# custom function
d %>%
  unnest_tokens(word, txt, token = stringr::str_split, pattern = " ")

# tokenize HTML
h <- tibble(row = 1:2,
            text = c("<h1>Text <b>is</b>", "<a href='example.com'>here</a>"))

h %>%
  unnest_tokens(word, text, format = "html")

