nlp_symmetric_delete: Spark NLP SymmetricDeleteApproach

View source: R/symmetric-delete.R

Spark NLP SymmetricDeleteApproach

Description

Spark ML estimator that is a spell checker inspired by the Symmetric Delete algorithm. It retrieves tokens and uses distance metrics to compute possible derived words. See https://nlp.johnsnowlabs.com/docs/en/annotators#symmetric-spellchecker

Usage

nlp_symmetric_delete(
  x,
  input_cols,
  output_col,
  dictionary_path = NULL,
  dictionary_token_pattern = "\\S+",
  dictionary_read_as = "LINE_BY_LINE",
  dictionary_options = list(format = "text"),
  max_edit_distance = NULL,
  dups_limit = NULL,
  deletes_threshold = NULL,
  frequency_threshold = NULL,
  longest_word_length = NULL,
  max_frequency = NULL,
  min_frequency = NULL,
  uid = random_string("symmetric_delete_")
)

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_cols

Input columns. String array.

output_col

Output column. String.

dictionary_path

Path to a dictionary of properly written words.

dictionary_token_pattern

Token pattern used in the dictionary of properly written words.

dictionary_read_as

LINE_BY_LINE or SPARK_DATASET.

dictionary_options

Options to pass to the Spark reader.

max_edit_distance

Maximum edit distance to calculate possible derived words. Defaults to 3.

dups_limit

Maximum number of duplicate characters in a word to consider.

deletes_threshold

Minimum frequency of corrections a word needs to have to be considered during training.

frequency_threshold

Minimum frequency of words to be considered during training.

longest_word_length

Length of the longest word in the corpus.

max_frequency

Maximum frequency of a word in the corpus.

min_frequency

Minimum frequency of a word in the corpus.

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed and immediately fit to the input tbl_spark, returning an NLP model.
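
Examples

A minimal sketch of constructing the estimator against a Spark connection. It assumes an active sparklyr connection `sc` and an upstream pipeline (document assembler plus tokenizer) producing a "token" annotation column; the column names here are illustrative, not fixed by the API. Running it requires a local Spark installation.

```r
## Not run:
library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Build the spell-checker estimator; "token" and "spell" are
# illustrative column names chosen for this sketch.
spellchecker <- nlp_symmetric_delete(
  sc,
  input_cols = c("token"),
  output_col = "spell",
  max_edit_distance = 3
)

# The estimator can then be appended to an ml_pipeline alongside
# a document assembler and tokenizer, and fit on a tbl_spark.
## End(Not run)
```

Passing a tbl_spark instead of the connection would fit the estimator immediately and return the fitted NLP model, as described under Value.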


r-spark/sparknlp documentation built on Oct. 15, 2022, 10:50 a.m.