nlp_stop_words_cleaner: Spark NLP StopWordsCleaner

View source: R/stop_words_cleaner.R


Spark NLP StopWordsCleaner

Description

Spark ML transformer that takes a sequence of strings (e.g. the output of a Tokenizer, Normalizer, Lemmatizer, or Stemmer) and drops all stop words from the input sequences. See https://nlp.johnsnowlabs.com/docs/en/annotators#stopwordscleaner

Usage

nlp_stop_words_cleaner(
  x,
  input_cols,
  output_col,
  case_sensitive = NULL,
  locale = NULL,
  stop_words = NULL,
  uid = random_string("stop_words_cleaner_")
)

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_cols

Input columns. String array.

output_col

Output column. String.

case_sensitive

Whether to do a case-sensitive comparison over the stop words.

locale

Locale of the input for case-insensitive matching. Ignored when case_sensitive is TRUE.

stop_words

The words to be filtered out.

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an instance of an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed and then immediately fit with the input tbl_spark, returning an NLP model.
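
Examples

A minimal sketch of composing the cleaner into a pipeline. The companion annotators shown here (nlp_document_assembler, nlp_tokenizer) and the column names are assumptions about the surrounding sparknlp API rather than details taken from this page.

## Not run: 
library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Assumed upstream stages producing a "token" annotation column
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_tokenizer(input_cols = "document", output_col = "token") %>%
  nlp_stop_words_cleaner(
    input_cols = "token",
    output_col = "clean_tokens",
    case_sensitive = FALSE,
    stop_words = c("a", "an", "the")
  )
## End(Not run)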

