Filter Text Based on a List of Words or Regexes
This package was written to filter profanity and personally identifiable data (PID) from free-text responses in customer satisfaction surveys, for display in Shiny dashboards.
The package uses continuous integration to enforce a consistent style and catch syntax errors. Documentation is generated automatically with roxygen2.
Install the development version from GitHub:

devtools::install_github("MarkMc1089/tidyfilter")
Use filter_text() in dplyr pipelines. It is fast and vectorised, and fully configurable: it takes a list of filter words or regular expressions and a replacement character. Each match is replaced with as many replacement characters as there are characters in the match.
data <- data.frame(
  w = c("My phone number is 07421 345 678", "Call me on 01234567890"),
  x = c("This is some text...", "...containing words."),
  y = c("This is more text...", "...containing something."),
  z = c("This is some more text...", "...containing more words.")
)
data %>%
  filter_text(
    c("some", "words", "and", "regex", "[0-9]{3,}[\\s0-9]*[0-9]"),
    "#",
    w, x, y
  )
# w x y z
# 1 My phone number is ############# This is #### text... This is more text... This is some more text...
# 2 Call me on ########### ...containing #####. ...containing ####thing. ...containing more words.
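The length-preserving replacement shown above can be sketched in base R. This is an illustrative sketch, not the package internals; `mask_matches` is a hypothetical helper name.

```r
# Sketch (assumption: not how tidyfilter is implemented internally) of
# length-preserving masking: each regex match is overwritten with the
# same number of replacement characters as the match itself.
mask_matches <- function(text, patterns, replacement = "#") {
  # Combine all words/regexes into a single alternation pattern
  pattern <- paste(patterns, collapse = "|")
  vapply(text, function(s) {
    m <- gregexpr(pattern, s, perl = TRUE)[[1]]
    if (m[1] == -1) return(s)  # no matches: leave the string unchanged
    for (i in seq_along(m)) {
      len <- attr(m, "match.length")[i]
      # Overwrite the matched span in place, preserving its length
      substr(s, m[i], m[i] + len - 1) <- strrep(replacement, len)
    }
    s
  }, character(1), USE.NAMES = FALSE)
}

mask_matches("Call me on 01234567890", "[0-9]{3,}[\\s0-9]*[0-9]", "#")
# "Call me on ###########"
```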
Two word lists for profanity and one list of regexes are included:
pid_regex <- readLines(system.file("extdata", "pid_regex.txt", package = "tidyfilter"))
profanity <- readLines(system.file("extdata", "profane_words_basic.txt", package = "tidyfilter"))
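The bundled lists can be concatenated and passed straight to filter_text(). A minimal sketch, assuming a survey data frame with a `comment` column (the column name and sample data are illustrative, not from the package):

```r
library(dplyr)
library(tidyfilter)

# Load the bundled PID regexes and basic profanity word list
pid_regex <- readLines(system.file("extdata", "pid_regex.txt", package = "tidyfilter"))
profanity <- readLines(system.file("extdata", "profane_words_basic.txt", package = "tidyfilter"))

# Hypothetical survey data for illustration
surveys <- data.frame(comment = c("Great service!", "Ring me on 01234567890"))

surveys %>%
  filter_text(c(profanity, pid_regex), "#", comment)
```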