TfIdfVectorizer | R Documentation
Creates a tf-idf matrix
Given a list of texts, it creates a sparse matrix of tf-idf scores for the tokens in the text.
superml::CountVectorizer
-> TfIdfVectorizer
sentences
a list containing sentences
max_df
When building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold. The value lies between 0 and 1.
min_df
When building the vocabulary, ignore terms that have a document frequency strictly lower than the given threshold. The value lies between 0 and 1.
max_features
use only the top features, sorted by count, in the bag-of-words matrix.
ngram_range
The lower and upper boundary of the range of n-values for different word n-grams or char n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used. For example, an ngram_range of c(1, 1) means only unigrams, c(1, 2) means unigrams and bigrams, and c(2, 2) means only bigrams.
split
splitting criterion for strings, default: " "
lowercase
convert all characters to lowercase before tokenizing
regex
regular expression to use for text cleaning.
remove_stopwords
a list of stopwords to use; by default, it uses its built-in list of standard English stopwords
smooth_idf
logical; to prevent division by zero, adds one to document frequencies, as if an extra document containing every term in the collection exactly once had been seen
norm
logical; if TRUE, each output row will have unit 'l2' norm: the sum of squares of vector elements is 1. If FALSE, returns non-normalized vectors. Default: TRUE. (Both smoothing and L2 normalization are illustrated in the sketch after this argument list.)
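The smoothing and normalization behaviour described for smooth_idf and norm can be illustrated with a small hand-rolled sketch. This is not superml's internal implementation; in particular, the exact idf formula used below (log((1 + n) / (1 + df)) + 1, in the style of other tf-idf implementations) is an assumption.

# Illustration only: what 'smooth_idf' and 'norm' describe, not superml's code.
docs <- list(c("alone", "in", "dark"), c("mother", "lot"))
vocab <- unique(unlist(docs))
n_docs <- length(docs)

# document frequency: in how many documents each token occurs
df <- sapply(vocab, function(tok) sum(sapply(docs, function(d) tok %in% d)))

# smooth_idf: add one to document frequencies (and to the document count),
# as if one extra document contained every term exactly once
idf <- log((1 + n_docs) / (1 + df)) + 1   # assumed formula, for illustration

# raw term frequency of each token in the first document, weighted by idf
tf <- sapply(vocab, function(tok) sum(docs[[1]] == tok))
tfidf <- tf * idf

# norm: scale the row so the sum of squared elements is 1 (unit L2 norm)
tfidf_l2 <- tfidf / sqrt(sum(tfidf^2))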
new()
TfIdfVectorizer$new( min_df, max_df, max_features, ngram_range, regex, remove_stopwords, split, lowercase, smooth_idf, norm )
min_df
numeric; when building the vocabulary, ignore terms that have a document frequency strictly lower than the given threshold. The value lies between 0 and 1.
max_df
numeric; when building the vocabulary, ignore terms that have a document frequency strictly higher than the given threshold. The value lies between 0 and 1.
max_features
integer; build a vocabulary that only considers the top max_features terms ordered by term frequency across the corpus.
ngram_range
vector; the lower and upper boundary of the range of n-values for different word n-grams or char n-grams to be extracted. All values of n such that min_n <= n <= max_n will be used. For example, an ngram_range of c(1, 1) means only unigrams, c(1, 2) means unigrams and bigrams, and c(2, 2) means only bigrams.
regex
character; regular expression to use for text cleaning.
remove_stopwords
list; a list of stopwords to use. By default, it uses its built-in list of standard English stopwords.
split
character; splitting criterion for strings, default: " "
lowercase
logical, convert all characters to lowercase before tokenizing, default: TRUE
smooth_idf
logical; to prevent division by zero, adds one to document frequencies, as if an extra document containing every term in the collection exactly once had been seen
norm
logical; if TRUE, each output row will have unit 'l2' norm: the sum of squares of vector elements is 1. If FALSE, returns non-normalized vectors. Default: TRUE.
parallel
logical; speeds up n-gram computation using n-1 cores, default: TRUE
Create a new 'TfIdfVectorizer' object.
A 'TfIdfVectorizer' object.
TfIdfVectorizer$new()
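For illustration, the constructor can also be called with several of the arguments documented above; the specific values here are arbitrary and only show the syntax.

library(superml)

# illustrative constructor call; all values are arbitrary examples
tfv <- TfIdfVectorizer$new(
  min_df = 0.05,          # drop tokens that appear in very few documents
  max_df = 0.95,          # drop tokens that appear in almost every document
  max_features = 1000,    # keep only the 1000 most frequent tokens
  ngram_range = c(1, 2),  # extract unigrams and bigrams
  smooth_idf = TRUE,
  norm = TRUE
)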
fit()
TfIdfVectorizer$fit(sentences)
sentences
a list of text sentences
Fits the TfIdfVectorizer model on sentences
NULL
sents = c('i am alone in dark.','mother_mary a lot',
          'alone in the dark?', 'many mothers in the lot....')
tf = TfIdfVectorizer$new(smooth_idf = TRUE, min_df = 0.3)
tf$fit(sents)
fit_transform()
TfIdfVectorizer$fit_transform(sentences)
sentences
a list of text sentences
Fits the TfIdfVectorizer model and returns a sparse matrix of tf-idf scores for the tokens
a sparse matrix containing tf-idf score for tokens in each given sentence
\dontrun{
sents <- c('i am alone in dark.','mother_mary a lot',
           'alone in the dark?', 'many mothers in the lot....')
tf <- TfIdfVectorizer$new(smooth_idf = TRUE, min_df = 0.1)
tf_matrix <- tf$fit_transform(sents)
}
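The returned matrix can be inspected like any matrix; continuing the example above, the following checks its shape and vocabulary. Whether tokens appear as column names is not stated above and is an assumption that may depend on the superml version.

\dontrun{
dim(tf_matrix)        # one row per sentence, one column per retained token
colnames(tf_matrix)   # assumed (not documented here) to be the retained tokens
}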
transform()
TfIdfVectorizer$transform(sentences)
sentences
a list of new text sentences
Returns a matrix of tf-idf scores for tokens
a sparse matrix containing tf-idf score for tokens in each given sentence
\dontrun{
sents = c('i am alone in dark.','mother_mary a lot',
          'alone in the dark?', 'many mothers in the lot....')
new_sents <- c("dark at night",'mothers day')
tf = TfIdfVectorizer$new(min_df=0.1)
tf$fit(sents)
tf_matrix <- tf$transform(new_sents)
}
clone()
The objects of this class are cloneable with this method.
TfIdfVectorizer$clone(deep = FALSE)
deep
Whether to make a deep clone.
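For example, a configured vectorizer can be copied so that later changes to one object do not affect the other (standard R6 behaviour):

tf <- TfIdfVectorizer$new(min_df = 0.1)
tf_copy <- tf$clone(deep = TRUE)  # independent copy of the R6 object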
## ------------------------------------------------
## Method `TfIdfVectorizer$new`
## ------------------------------------------------
TfIdfVectorizer$new()
## ------------------------------------------------
## Method `TfIdfVectorizer$fit`
## ------------------------------------------------
sents = c('i am alone in dark.','mother_mary a lot',
'alone in the dark?', 'many mothers in the lot....')
tf = TfIdfVectorizer$new(smooth_idf = TRUE, min_df = 0.3)
tf$fit(sents)
## ------------------------------------------------
## Method `TfIdfVectorizer$fit_transform`
## ------------------------------------------------
## Not run:
sents <- c('i am alone in dark.','mother_mary a lot',
'alone in the dark?', 'many mothers in the lot....')
tf <- TfIdfVectorizer$new(smooth_idf = TRUE, min_df = 0.1)
tf_matrix <- tf$fit_transform(sents)
## End(Not run)
## ------------------------------------------------
## Method `TfIdfVectorizer$transform`
## ------------------------------------------------
## Not run:
sents = c('i am alone in dark.','mother_mary a lot',
'alone in the dark?', 'many mothers in the lot....')
new_sents <- c("dark at night",'mothers day')
tf = TfIdfVectorizer$new(min_df=0.1)
tf$fit(sents)
tf_matrix <- tf$transform(new_sents)
## End(Not run)