nlp_dependency_parser: Spark NLP DependencyParserApproach

View source: R/dependency-parser.R


Spark NLP DependencyParserApproach

Description

Spark ML estimator for an unlabeled dependency parser that finds grammatical relations between two words in a sentence. Its training input is a directory of dependency treebank files. See https://nlp.johnsnowlabs.com/docs/en/annotators#dependency-parser

Usage

nlp_dependency_parser(
  x,
  input_cols,
  output_col,
  n_iterations = NULL,
  tree_bank_path = NULL,
  tree_bank_read_as = "TEXT",
  tree_bank_options = list(format = "text"),
  conll_u_path = NULL,
  conll_u_read_as = "TEXT",
  conll_u_options = list(format = "text"),
  uid = random_string("dependency_parser_")
)

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_cols

Input columns. String array.

output_col

Output column. String.

n_iterations

Number of training iterations; more iterations generally converge to better accuracy

tree_bank_path

Dependency treebank folder with files in Penn Treebank format

tree_bank_read_as

Whether the treebank source is read as TEXT or SPARK_DATASET

tree_bank_options

Options to pass to the Spark reader

conll_u_path

Path to a file in CoNLL-U format

conll_u_read_as

Whether the CoNLL-U source is read as TEXT or SPARK_DATASET

conll_u_options

Options to pass to the Spark reader

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed and then immediately fit to the input tbl_spark, returning an NLP model.
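
Examples

A minimal sketch of constructing the parser inside a pipeline. The upstream annotators shown here (nlp_document_assembler, nlp_sentence_detector, nlp_tokenizer, nlp_perceptron) and the treebank path are illustrative assumptions, not taken from this page; adjust column names and paths to your data.

library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Build a pipeline whose upstream stages produce the document, sentence,
# token, and part-of-speech annotations the dependency parser expects.
# In practice the POS column usually comes from a pretrained tagger; the
# perceptron estimator below only illustrates the column wiring.
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_sentence_detector(input_cols = c("document"), output_col = "sentence") %>%
  nlp_tokenizer(input_cols = c("sentence"), output_col = "token") %>%
  nlp_perceptron(input_cols = c("sentence", "token"), output_col = "pos") %>%
  nlp_dependency_parser(
    input_cols = c("sentence", "pos", "token"),
    output_col = "dependency",
    n_iterations = 10,
    tree_bank_path = "path/to/treebank"  # hypothetical folder of Penn Treebank files
    # alternatively, train from CoNLL-U: conll_u_path = "path/to/file.conllu"
  )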

