unnest_regex: R Documentation
Description

This function is a wrapper around unnest_tokens(token = "regex").
Usage

unnest_regex(
  tbl,
  output,
  input,
  pattern = "\\s+",
  format = c("text", "man", "latex", "html", "xml"),
  to_lower = TRUE,
  drop = TRUE,
  collapse = NULL,
  ...
)
Arguments

tbl
    A data frame.

output
    Output column to be created as string or symbol.

input
    Input column that gets split as string or symbol. The output/input arguments are passed by expression and support quasiquotation; you can unquote strings and symbols.

pattern
    A regular expression that defines the split.

format
    Either "text", "man", "latex", "html", or "xml". When the format is "text", this function uses the tokenizers package. If not "text", this uses the hunspell tokenizer, and can tokenize only by "word".

to_lower
    Whether to convert tokens to lowercase.

drop
    Whether the original input column should be dropped. Ignored if the original input and new output column have the same name.
collapse
    A character vector of variables to collapse text across, or NULL. For tokens like n-grams or sentences, text can be collapsed across rows within the variables specified here. Grouping the data specifies variables to collapse across in the same way as this argument.
...
    Extra arguments passed on to tokenizers.
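To make pattern, to_lower, and drop concrete, here is a minimal sketch using a made-up one-row data frame (the data and column names below are illustrative and not part of the package documentation):

library(tidytext)
library(dplyr)

d <- tibble(id = 1, txt = "Alpha;Beta;Gamma")

# Split txt on ";" while preserving case (to_lower = FALSE) and keeping
# the original txt column next to the new piece column (drop = FALSE).
d %>%
  unnest_regex(piece, txt, pattern = ";", to_lower = FALSE, drop = FALSE)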
See Also

unnest_tokens()
Examples

library(tidytext)
library(dplyr)
library(janeaustenr)

d <- tibble(txt = prideprejudice)

# Split the novel into pieces wherever a chapter heading ("Chapter 1", ...) occurs
d %>%
  unnest_regex(word, txt, pattern = "Chapter [\\d]")
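Because unnest_regex() is a wrapper around unnest_tokens(token = "regex"), the call above should give the same result as the more explicit form below; a minimal sketch reusing the d defined in the example:

d %>%
  unnest_tokens(word, txt, token = "regex", pattern = "Chapter [\\d]")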