nlp_assertion_logreg: Spark NLP AssertionLogRegApproach

View source: R/assertion_logreg.R


Spark NLP AssertionLogRegApproach

Description

Spark ML estimator that trains an assertion status model using logistic regression. See https://nlp.johnsnowlabs.com/docs/en/licensed_annotators#assertionlogreg for more details.

Usage

nlp_assertion_logreg(
  x,
  input_cols,
  output_col,
  label_column = NULL,
  max_iter = NULL,
  reg = NULL,
  enet = NULL,
  before = NULL,
  after = NULL,
  start_col = NULL,
  end_col = NULL,
  lazy_annotator = NULL,
  uid = random_string("assertion_logreg_")
)

Arguments

x

A spark_connection, ml_pipeline, or a tbl_spark.

input_cols

Input columns. String array.

output_col

Output column. String.

label_column

Column with one label per document

max_iter

Maximum number of iterations for the algorithm

reg

Regularization parameter

enet

Elastic net parameter

before

Number of context tokens to take before the target

after

Number of context tokens to take after the target

start_col

Column that contains the token number for the start of the target

end_col

Column that contains the token number for the end of the target

lazy_annotator

A param that allows an annotator to stand idle in the pipeline and do nothing; a lazy annotator can still be called by other annotators in a RecursivePipeline.

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.
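Examples

A minimal sketch of constructing the estimator against a Spark connection and composing it into a pipeline. The column names, hyperparameter values, upstream annotator columns (document, chunk, embeddings), and the training_tbl table are illustrative assumptions, not values prescribed by the package.

library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Construct the estimator against the Spark connection. Input columns are
# assumed to come from upstream annotators producing document, chunk, and
# word-embedding annotations; adjust to match your pipeline.
assertion_logreg <- nlp_assertion_logreg(
  sc,
  input_cols = c("document", "chunk", "embeddings"),
  output_col = "assertion",
  label_column = "label",
  max_iter = 25,
  reg = 0.01,
  enet = 0.9,
  before = 10,
  after = 10,
  start_col = "start",
  end_col = "end"
)

# The same call can append the estimator to an existing ml_pipeline, or be
# fit directly against a tbl_spark of annotated training data:
# pipeline <- ml_pipeline(sc) %>%
#   nlp_assertion_logreg(
#     input_cols = c("document", "chunk", "embeddings"),
#     output_col = "assertion",
#     label_column = "label"
#   )
# model <- ml_fit(pipeline, training_tbl)  # training_tbl is a hypothetical tbl_spark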

