View source: R/stop_words_cleaner.R
nlp_stop_words_cleaner — R Documentation
Spark ML transformer that takes a sequence of strings (e.g. the output of a Tokenizer, Normalizer, Lemmatizer, or Stemmer) and drops all the stop words from the input sequences. See https://nlp.johnsnowlabs.com/docs/en/annotators#stopwordscleaner
nlp_stop_words_cleaner( x, input_cols, output_col, case_sensitive = NULL, locale = NULL, stop_words = NULL, uid = random_string("stop_words_cleaner_") )
x: A spark_connection, ml_pipeline, or tbl_spark.

input_cols: Input columns. String array.

output_col: Output column. String.

case_sensitive: Whether to do a case-sensitive comparison over the stop words.

locale: Locale of the input for case-insensitive matching. Ignored when case_sensitive is true.

stop_words: The words to be filtered out.

uid: A character string used to uniquely identify the ML estimator.
The object returned depends on the class of x.

spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the NLP estimator appended to the pipeline.

tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.
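A minimal usage sketch, assuming a local Spark installation with Spark NLP available and the companion annotators nlp_document_assembler and nlp_tokenizer from this package; the stop-word list and column names are illustrative:

```r
library(sparklyr)
library(sparknlp)

# Requires a running Spark installation; version is illustrative.
sc <- spark_connect(master = "local")

# Compose a pipeline: raw text -> document -> tokens -> tokens with
# stop words removed. Passing a spark_connection as the first argument
# to each nlp_* function appends a stage to the pipeline.
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_tokenizer(input_cols = c("document"), output_col = "token") %>%
  nlp_stop_words_cleaner(
    input_cols = c("token"),
    output_col = "clean_tokens",
    case_sensitive = FALSE,
    stop_words = c("the", "a", "an")  # custom stop-word list (example)
  )
```

Calling nlp_stop_words_cleaner on a tbl_spark instead would fit the estimator immediately and return an NLP model rather than a pipeline stage.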