nlp_sentence_detector_dl: Spark NLP SentenceDetectorDLApproach

View source: R/sentence_detector_dl.R

nlp_sentence_detector_dl    R Documentation

Spark NLP SentenceDetectorDLApproach

Description

Spark ML estimator that trains a deep-learning model for detecting sentence boundaries in text. See https://nlp.johnsnowlabs.com/docs/en/annotators for details.

Usage

nlp_sentence_detector_dl(
  x,
  input_cols,
  output_col,
  epochs_number = NULL,
  impossible_penultimates = NULL,
  model = NULL,
  output_logs_path = NULL,
  validation_split = NULL,
  explode_sentences = NULL,
  uid = random_string("sentence_detector_dl_")
)
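
As a minimal sketch, the estimator can be constructed directly on a Spark connection. The connection object sc, the column names, and the parameter values below are illustrative assumptions, not defaults:

sd_dl <- nlp_sentence_detector_dl(
  sc,                            # an active spark_connection (assumed)
  input_cols = c("document"),    # assumed DOCUMENT annotation column
  output_col = "sentences",
  epochs_number = 5,             # illustrative value
  validation_split = 0.1         # illustrative value
)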

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_cols

Input columns. String array.

output_col

Output column. String.

epochs_number

Maximum number of epochs to train.

impossible_penultimates

Impossible penultimates: a list of strings that a sentence cannot end with.

model

Model architecture (e.g. "cnn").

output_logs_path

Path to the folder where training logs are written.

validation_split

Proportion of the training dataset to be validated against the model on each epoch.

explode_sentences

A flag indicating whether to split sentences into different Dataset rows.

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.
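
Examples

The following is a minimal sketch of training the annotator inside a pipeline. It assumes an active Spark connection sc, a training tbl_spark named train_tbl prepared in the format expected by Spark NLP's SentenceDetectorDLApproach, and that nlp_document_assembler() from this package produces the DOCUMENT annotations; the column names and parameter values are illustrative assumptions.

library(sparklyr)
library(sparknlp)

# Build a pipeline: raw text -> DOCUMENT annotations -> sentence detector stage.
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_sentence_detector_dl(
    input_cols = c("document"),
    output_col = "sentences",
    epochs_number = 5,        # illustrative value
    validation_split = 0.1    # illustrative value
  )

# Fit the pipeline; the sentence detector is trained during ml_fit().
model <- ml_fit(pipeline, train_tbl)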

