nlp_ngram_generator: Spark NLP NGramGenerator

View source: R/ngram-generator.R


Spark NLP NGramGenerator

Description

Spark ML transformer that takes as input a sequence of strings (e.g. the output of a Tokenizer, Normalizer, Stemmer, Lemmatizer, or StopWordsCleaner). The parameter n determines the number of terms in each n-gram. The output consists of a sequence of n-grams, where each n-gram is represented by a space-delimited string of n consecutive words with annotatorType CHUNK (the same as the Chunker annotator). See https://nlp.johnsnowlabs.com/docs/en/annotators#ngramgenerator

Usage

nlp_ngram_generator(
  x,
  input_cols,
  output_col,
  n = NULL,
  enable_cumulative = NULL,
  delimiter = NULL,
  uid = random_string("ngram_generator_")
)
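
A minimal construction sketch, assuming a working sparklyr connection and that the package is attached as library(sparknlp); the column names and parameter values below are illustrative only.

library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Build the annotator against the connection using only the arguments
# documented above (illustrative values).
ngram <- nlp_ngram_generator(
  sc,
  input_cols = c("token"),
  output_col = "ngrams",
  n = 2,
  enable_cumulative = FALSE,
  delimiter = " "
)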

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_cols

Input columns. String array.

output_col

Output column. String.

n

Number of elements per n-gram (>= 1).

enable_cumulative

Whether to calculate just the n-grams of length n or all n-grams from 1 through n (see the illustration after this argument list).

delimiter

Glue character used to join the tokens within each n-gram.

uid

A character string used to uniquely identify the ML estimator.
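
For illustration (expected behaviour as described by the arguments above, not output reproduced from a run): given the token sequence "the", "quick", "brown" with n = 2 and delimiter " ":

  # enable_cumulative = FALSE -> "the quick", "quick brown"
  # enable_cumulative = TRUE  -> additionally includes the length-1 grams
  #                              "the", "quick", "brown" (all n-grams from 1 through n)
  # delimiter = "_"           -> "the_quick", "quick_brown"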

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an instance of an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.
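
As a sketch of the ml_pipeline and tbl_spark paths, assuming the package's nlp_document_assembler() and nlp_tokenizer() wrappers (not documented on this page) to supply the token annotations, and a hypothetical tbl_spark named text_tbl with a "text" column:

pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_tokenizer(input_cols = c("document"), output_col = "token") %>%
  nlp_ngram_generator(input_cols = c("token"), output_col = "ngrams", n = 3)

# Fit the assembled pipeline and transform the input table
# (ml_fit() and ml_transform() are sparklyr functions).
fitted <- ml_fit(pipeline, text_tbl)
result <- ml_transform(fitted, text_tbl)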

