nlp_document_assembler: Spark NLP DocumentAssembler

View source: R/document-assembler.R

nlp_document_assembler    R Documentation

Spark NLP DocumentAssembler

Description

A Spark ML transformer that creates the first annotation of type Document. It can read a column containing either a String or an Array[String]. See https://nlp.johnsnowlabs.com/docs/en/annotators#documentassembler-getting-data-in

Usage

nlp_document_assembler(
  x,
  input_col,
  output_col,
  id_col = NULL,
  metadata_col = NULL,
  cleanup_mode = NULL,
  uid = random_string("document_assembler_")
)

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_col

Input column. String.

output_col

Output column. String.

id_col

String type column with id information. Optional.

metadata_col

Map type column with metadata information. Optional.

cleanup_mode

Cleaning up options. Optional. Default is "disabled". Possible values:

  • disabled: source kept as original
  • inplace: removes new lines and tabs
  • inplace_full: removes new lines and tabs, but also those which were converted to strings
  • shrink: removes new lines and tabs, plus merges multiple spaces and blank lines into a single space
  • shrink_full: removes new lines and tabs, including stringified values, plus shrinks spaces and blank lines

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an instance of an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed and immediately fit to the input tbl_spark, returning an NLP model.
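The three invocation patterns above can be sketched as follows. This is a minimal, untested sketch that assumes a local Spark connection via sparklyr with the sparknlp package installed; the column names (text, document) and the example data are illustrative, not part of the API.

```r
library(sparklyr)
library(sparknlp)

# Assumes a local Spark installation is available
sc <- spark_connect(master = "local")

# A small illustrative table with one string column
text_tbl <- copy_to(sc, data.frame(text = c("Hello world.", "Second document.")),
                    name = "texts", overwrite = TRUE)

# 1. x is a spark_connection: returns a stage that can be composed into pipelines
assembler <- nlp_document_assembler(
  sc,
  input_col = "text",
  output_col = "document",
  cleanup_mode = "shrink"
)

# 2. x is an ml_pipeline: the stage is appended to the pipeline
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document")

# 3. x is a tbl_spark: the stage is constructed and applied to the table,
#    yielding annotated data with the new "document" column
annotated_tbl <- nlp_document_assembler(
  text_tbl,
  input_col = "text",
  output_col = "document"
)
```

Since DocumentAssembler produces the initial Document annotation, pattern 2 is the usual starting point: downstream Spark NLP annotators in the same pipeline consume the output column it creates.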


r-spark/sparknlp documentation built on Oct. 15, 2022, 10:50 a.m.