```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  message = FALSE,
  warning = FALSE
)

library(LimpiaR)
```
Quick example of how to run the function - take a dataset with a text variable and:

```r
limpiar_examples %>% limpiar_spam_grams(mention_content, n_gram = 3, min_freq = 2)
```
Now read the docs!
Extracting useful information from large, unstructured internet datasets is a difficult task even when the datasets are clean. It is an impossible one with datasets riddled with bot-network content, spammers, duplicates, and near-duplicates.

Sadly our datasets do not come ready-cleaned, and we often have to remove hundreds or thousands of almost identical documents. Without automated, or semi-automated, assistance, this process is extremely time-consuming and quickly becomes intractable as the size of our dataset grows.

To make a bad situation worse, there is no one-size-fits-all definition of 'near duplicate'. Let's look at two pairs of documents:
Pair 1

i. Arsenal are my favourite team
ii. Liverpool are my favourite team

Pair 2

i. @jiryan_2 wants to make you rich! Click here for amazing crypto opportunity www.definitelynotascam.com/get_rich_quick
ii. @jiryan_2 wants to make you wealthy! Click here for amazing crypto opportunity www.definitelynotascam.com/get_rich_quick
It should be quite clear that one of these pairs is more problematic than the other, and yet in each pair the documents differ by only a single word. So even in principle, we wouldn't want to write code that simply says 'check whether another document differs by only one word, and remove both documents if so' - we need something a bit more nuanced.
## limpiar_spam_grams()
We developed an in-house solution which looks at recurrent n-grams ^[elsewhere called shingles] within a dataset, where grams are words: unigrams are n = 1, bigrams are n = 2, trigrams are n = 3, and so on. For example, the trigrams of 'Arsenal are my favourite team' are 'Arsenal are my', 'are my favourite', and 'my favourite team'. The algorithm is not language-specific, so it can be applied to any language with clear delimiters between words.
The algorithm works as follows:

1. Split each document into n-grams, with the length set by the `n_gram` parameter.
2. Count how many documents each n-gram appears in.
3. Flag every n-gram which appears in at least `min_freq` documents as a spam gram.
4. Remove every document which contains a flagged n-gram.
5. Return a list containing the spam grams, the remaining data, and the deleted data, for the user to inspect.
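To make the steps concrete, here is a minimal, illustrative re-implementation of the heuristic with dplyr and tidytext - a sketch of the idea under our own assumptions, not LimpiaR's actual internals (the helper name `spam_grams_sketch` is ours):

```r
library(dplyr)
library(tidytext)

# Illustrative sketch only - not the package's real implementation
spam_grams_sketch <- function(data, text_var, n_gram = 7, min_freq = 10) {
  data <- data %>% mutate(.doc_id = row_number())

  # Steps 1 & 2: split documents into n-grams, counting each once per document
  doc_grams <- data %>%
    unnest_tokens(gram, {{ text_var }}, token = "ngrams", n = n_gram) %>%
    filter(!is.na(gram)) %>%   # documents shorter than n_gram yield NA
    distinct(.doc_id, gram)

  # Step 3: flag n-grams seen in at least `min_freq` documents
  spam_grams <- doc_grams %>%
    count(gram, name = "doc_freq") %>%
    filter(doc_freq >= min_freq)

  # Step 4: find documents containing any flagged n-gram
  flagged <- doc_grams %>%
    semi_join(spam_grams, by = "gram") %>%
    distinct(.doc_id)

  # Step 5: return the three elements for inspection
  list(
    spam_grams = spam_grams,
    data    = anti_join(data, flagged, by = ".doc_id") %>% select(-.doc_id),
    deleted = semi_join(data, flagged, by = ".doc_id") %>% select(-.doc_id)
  )
}
```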
`limpiar_spam_grams()` is a heuristic approach to cleaning: simple but effective. The key insight for understanding why it works is to think about perplexity in the Information Theory/language-modelling sense. Sequences of natural language are high in perplexity - for most ideas that we want to communicate, there are many words we could choose from to communicate the idea. This means that two people describing the same idea are unlikely to use the same words. So when ideas are communicated with the exact same words, it is likely that they came from the same source, i.e. the process that generated them was not independent. This is how we can recognise unsophisticated spammers and bot networks.
Sentence 1: The unexpected shower forced all the beachgoers to quickly gather their belongings and seek shelter in nearby buildings.
Sentence 2: The sudden downpour prompted the seaside visitors to hastily collect their items and dash into adjacent structures.
The idea communicated is the same, but the words are different.
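To illustrate why shared n-grams are such a strong signal, here is a quick sketch (assuming the tidytext package is available) showing that these two paraphrases share no trigrams at all, so a trigram-based filter would never mistake them for duplicates:

```r
library(dplyr)
library(tidytext)

paraphrases <- tibble::tibble(
  id = 1:2,
  text = c(
    "The unexpected shower forced all the beachgoers to quickly gather their belongings and seek shelter in nearby buildings.",
    "The sudden downpour prompted the seaside visitors to hastily collect their items and dash into adjacent structures."
  )
)

paraphrases %>%
  unnest_tokens(trigram, text, token = "ngrams", n = 3) %>%
  count(trigram) %>%
  filter(n > 1)  # zero rows: no trigram occurs in both sentences
```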
Let's run through some code to see how the function is applied. We'll save the output of `limpiar_spam_grams()` to a variable - this variable will contain our list of `$spam_grams`, `$data`, and `$deleted`. We'll use the `limpiar_examples` dataset, which is a small, toy dataset of only 10 rows (documents). We can read every document in this dataset and quickly verify the outputs, so it is handy for learning; in reality we will be working with much larger datasets, where verification takes up more of our time.
We run the code with low values for `n_gram` and `min_freq` because our dataset is small and does not contain many near duplicates. We ask the function to find any 3-word substring which appears in at least 2 documents across the entire dataset.

```r
spam_grams_output <- limpiar_examples %>%
  limpiar_spam_grams(mention_content, n_gram = 3, min_freq = 2)
```
We can check what's in the list simply with `names()`:

```r
names(spam_grams_output)
```
We confirm that we have the three elements of the list that we expect. Now we can take a look at `$spam_grams` to see which n-grams the function is suggesting we delete:

```r
spam_grams_output$spam_grams
```
If we look at the n-grams that have been picked out to do the filtering, it's clear that they are not the most robust. We could imagine many documents containing at least one of these n-grams which we would not, in fact, want to remove from a dataset. It's vital that the user inspects these n-grams and considers whether documents containing them are likely to be spam-like.
If we move on to inspecting the actual documents that were deleted, we see that the n-grams in `$spam_grams` came from two pairs of documents. One pair is a tweet plus a retweet, by the authors Don Quijote & Sancho Panza; given the way retweets are formulated, it makes sense for them to be treated as near duplicates. The other is a pair of exact duplicates by EL Sordo and Commander Miranda - we would expect exact duplicates to be flagged as near duplicates. So in this case the function is working as intended.

```r
spam_grams_output$deleted
```
However, given what we noticed in the `$spam_grams` output, we would not want to use these exact parameters on a larger dataset, because too many of the flagged n-grams could appear in documents that are not spam-like.

This raises a question: what should we set as starting parameters when working with `limpiar_spam_grams()` on larger datasets?
From experience and early research, we suggest setting `n_gram = 7` as a starting point, and scaling `min_freq` with the number of documents in the dataset. Good starting points are the log or the square root of the size of the dataset.
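As a rough sketch of what that scaling might look like in practice (assuming `df` is your dataset and it has a `mention_content` text column; the variable names are ours):

```r
# Scale min_freq with corpus size - log grows slowly, sqrt much faster
n_docs <- nrow(df)

min_freq_log  <- ceiling(log(n_docs))   # ~12 for 150,000 documents
min_freq_sqrt <- ceiling(sqrt(n_docs))  # ~388 for 150,000 documents

df %>% limpiar_spam_grams(mention_content, n_gram = 7, min_freq = min_freq_log)
```

Note the direction of the trade-off: the smaller, log-based `min_freq` flags more n-grams and so deletes more aggressively, while the square root demands far more repetition before flagging anything.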
Why the log or the square root?

Ultimately this decision is empirical - and you should be generating data as you use the function. For example, if you set `min_freq = 10`, grab the `$deleted` tibble from the function's output, sample documents from it, then count and record how many you think are truly spam-like and should be removed. Keep doing this until you find the parameter value for which ~90% of the deleted documents are spam-like. Record the outputs of your experiments.
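A minimal sketch of that checking loop (the sample size and the hand-labelling workflow here are illustrative, not prescribed by the package):

```r
# Sample deleted documents for manual inspection
set.seed(42)
deleted_sample <- spam_grams_output$deleted %>%
  dplyr::slice_sample(n = 50)

# After reading each sampled document, record a TRUE/FALSE judgement:
# is_spam <- c(TRUE, TRUE, FALSE, ...)  # one entry per sampled row
# mean(is_spam)                         # estimated precision; aim for ~0.9
```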
Setting the parameters results in a trade-off between precision & recall. If you set `n_gram = 1` and `min_freq = 1`, you will remove every document in your dataset that contains at least one word - every unigram appears in at least one document, so every document gets flagged. This gives 100% recall of spam documents, because you remove every document, but precision would be very low.
```r
limpiar_examples %>% limpiar_spam_grams(mention_content, 1, 1)
```
We have some results from early research which show this precision/recall trade-off:

[Figure: precision/recall trade-off from early research]
Sometimes long substrings do not in fact indicate documents that we should want to remove, because some long substrings occur naturally in many documents. For example, when researching web browsers, "I set Chrome to my default browser" is a 7-gram which may occur many times without indicating spam-like documents.
Likewise with quotes - you have probably seen this one many times before:
'Life is like riding a bicycle. To keep your balance, you must keep moving.'
It's often used in blogs, articles, explainers etc. as a hook or as supporting evidence. It is much longer than the value we would usually give the `n_gram` parameter, meaning that if its n-grams are seen across our documents more than `min_freq` times, all of those documents would be deleted. Although in this particular case that is quite unlikely to be a problem, the general principle is a limitation of the approach.
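As a quick illustrative check of how many n-grams such a quote contributes: the quote has 14 words, so with `n_gram = 7` it yields 14 - 7 + 1 = 8 overlapping 7-grams, any one of which could trip the filter.

```r
library(tidytext)

quote <- "Life is like riding a bicycle. To keep your balance, you must keep moving."

tibble::tibble(text = quote) %>%
  unnest_tokens(gram, text, token = "ngrams", n = 7) %>%
  nrow()
#> [1] 8
```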
Another limitation is that the function scales somewhat poorly with the size of documents - if we have many long documents in a large corpus of text, we will need a lot of time and memory to use `limpiar_spam_grams()` as it currently works. Since we often need to iterate over the parameters, this can be a barrier to usage.
## Coming soon

Approximate Nearest Neighbours, Locality Sensitive Hashing, and Jaccard Similarity.