unnest_tokens: Split a column into tokens

View source: R/unnest_tokens.R

unnest_tokens.subtitles {subtools}	R Documentation

Split a column into tokens

Description

This method extends unnest_tokens to subtitles objects. The main difference from the data.frame method is that it can remap subtitle timecodes to reflect the split of the input column.

This wrapper turns tidytext::unnest_tokens into an S3 generic. The default method (unnest_tokens.default) simply delegates to the original tidytext::unnest_tokens implementation. See ?unnest_tokens.subtitles for the subtools-specific documentation.

Usage

## S3 method for class 'subtitles'
unnest_tokens(
  tbl,
  output,
  input,
  token = "words",
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...,
  time.remapping = TRUE
)

unnest_tokens(
  tbl,
  output,
  input,
  token = "words",
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...
)

## Default S3 method:
unnest_tokens(
  tbl,
  output,
  input,
  token = "words",
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...
)

Arguments

tbl

A data frame

output

Output column to be created as string or symbol.

input

Input column that gets split as string or symbol.

The output/input arguments are passed by expression and support quasiquotation; you can unquote strings and symbols.

token

Unit for tokenizing, or a custom tokenizing function. Built-in options are "words" (default), "characters", "character_shingles", "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", "regex", and "ptb" (Penn Treebank). If a function, should take a character vector and return a list of character vectors of the same length.
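
As a sketch, a custom tokenizing function can be supplied in place of a built-in option. The helper name below is illustrative, and s is assumed to be a subtitles object with a Text_content column, as in the Examples section:

```r
## A custom tokenizer must take a character vector and return
## a list of character vectors of the same length.
split_on_whitespace <- function(x) strsplit(x, "\\s+")

unnest_tokens(s, Word, Text_content, token = split_on_whitespace)
```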

format

Either "text", "man", "latex", "html", or "xml". When the format is "text", this function uses the tokenizers package. If not "text", this uses the hunspell tokenizer, and can tokenize only by "word".

to_lower

Whether to convert tokens to lowercase.

drop

Whether the original input column should be dropped. Ignored if the original input and new output columns have the same name.

collapse

A character vector of variables to collapse text across, or NULL.

For tokens like n-grams or sentences, text can be collapsed across rows within variables specified by collapse before tokenization. At tidytext 0.2.7, the default behavior for collapse = NULL changed to be more consistent. The new behavior is that text is not collapsed for NULL.

Grouped data can specify variables to collapse across in the same way as collapse, but you cannot use both the collapse argument and grouped data at once. Collapsing applies mostly to the token options "ngrams", "skip_ngrams", "sentences", "lines", "paragraphs", and "regex".

...

Extra arguments passed on to the tokenizers, such as strip_punct for "words" and "sentences", n and k for "ngrams" and "skip_ngrams", and pattern for "regex".
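
For instance, assuming a subtitles object s as in the Examples section, tokenizer options can be forwarded through ... like this:

```r
## Keep punctuation when tokenizing by words
## (strip_punct is forwarded to tokenizers::tokenize_words).
unnest_tokens(s, Word, Text_content, strip_punct = FALSE)

## Request bigrams (n is forwarded to tokenizers::tokenize_ngrams).
unnest_tokens(s, Bigram, Text_content, token = "ngrams", n = 2)
```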

time.remapping

A logical. If TRUE (the default), subtitle timecodes are recalculated to take into account the split of the input column.
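
A minimal sketch of the difference, assuming a subtitles object s as in the Examples section (the exact interpolation of timecodes is handled internally by the method):

```r
## With remapping (default), each token row receives timecodes
## adjusted to its position within the original subtitle line.
unnest_tokens(s, Word, Text_content, time.remapping = TRUE)

## Without remapping, every token keeps the unchanged timecodes
## of the subtitle line it came from.
unnest_tokens(s, Word, Text_content, time.remapping = FALSE)
```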

Value

A tibble.

Examples

f <- system.file("extdata", "ex_webvtt.vtt", package = "subtools")
s <- read_subtitles(f, metadata = data.frame(test = "Test"))

#require(tidytext)
unnest_tokens(s, Word, Text_content)
unnest_tokens(s, Word, Text_content, drop = FALSE)
unnest_tokens(s, Word, Text_content, token = "lines")


subtools documentation built on March 24, 2026, 5:07 p.m.