nlp_text_matcher: Spark NLP TextMatcher phrase matching

View source: R/text-matcher.R

nlp_text_matcher    R Documentation

Spark NLP TextMatcher phrase matching

Description

Spark ML transformer to match entire phrases (by token) provided in a file against a Document. See https://nlp.johnsnowlabs.com/docs/en/annotators#textmatcher

Usage

nlp_text_matcher(
  x,
  input_cols,
  output_col,
  path,
  read_as = "TEXT",
  options = NULL,
  build_from_tokens = TRUE,
  uid = random_string("text_matcher_")
)

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_cols

Input columns. String array.

output_col

Output column. String.

path

a path to a file that contains the entities in the specified format.

read_as

how the file specified by path should be read. Defaults to "TEXT".

options

a named list containing additional parameters. Defaults to "format": "text".

build_from_tokens

Whether the TextMatcher should build the output CHUNK from TOKEN annotations or not. TRUE or FALSE.

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.

Note that when x is a tbl_spark, the data frame passed in must already contain the columns specified in input_cols.
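
Examples

A minimal sketch of using nlp_text_matcher in a pipeline. The Spark connection sc, the entities file, the column names, and the nlp_document_assembler and nlp_tokenizer calls (with their argument names) are assumptions for illustration; the entities file is assumed to contain one phrase per line.

## Not run: 
library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Write a small entities file with one phrase per line for the matcher to load.
entities_path <- file.path(tempdir(), "entities.txt")
writeLines(c("John Snow Labs", "Spark NLP"), entities_path)

# Build a pipeline: raw text -> DOCUMENT -> TOKEN -> matched CHUNK annotations.
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_tokenizer(input_cols = c("document"), output_col = "token") %>%
  nlp_text_matcher(
    input_cols = c("document", "token"),
    output_col = "entities",
    path = entities_path,
    read_as = "TEXT"
  )

# Fit and apply to a Spark data frame that has a "text" column.
df <- copy_to(sc, data.frame(text = "Spark NLP is developed by John Snow Labs"))
model <- ml_fit(pipeline, df)
result <- ml_transform(model, df)

## End(Not run)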

