nlp_xlm_roberta_token_classification_pretrained: Spark NLP XlmRoBertaForTokenClassification

View source: R/xlm_roberta-for-token-classification.R


Spark NLP XlmRoBertaForTokenClassification

Description

XlmRoBertaForTokenClassification can load XLM-RoBERTa models with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity Recognition (NER) tasks. See https://nlp.johnsnowlabs.com/docs/en/transformers#xlmrobertafortokenclassification

Usage

nlp_xlm_roberta_token_classification_pretrained(
  sc,
  input_cols,
  output_col,
  batch_size = NULL,
  case_sensitive = NULL,
  max_sentence_length = NULL,
  name = NULL,
  lang = NULL,
  remote_loc = NULL
)

Arguments

sc

A spark_connection.

input_cols

Input columns. String array.

output_col

Output column. String.

batch_size

Size of every batch (Default depends on model).

case_sensitive

Whether the model treats tokens as case sensitive in index lookups (Default depends on model)

max_sentence_length

Max sentence length to process (Default: 128)

name

The name of the pretrained model to load. String.

lang

The language of the pretrained model. String.

remote_loc

Optional remote location of the model. Defaults to the Spark NLP public models repository.

x

A spark_connection, ml_pipeline, or a tbl_spark.

uid

A character string used to uniquely identify the ML estimator.

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, an estimator is constructed and then immediately fit to the input tbl_spark, returning an NLP model.
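
Examples

A minimal usage sketch, assuming a local Spark connection and the package's standard pre-processing annotators (nlp_document_assembler and nlp_tokenizer); the column names and parameter values are illustrative, not required:

```r
library(sparklyr)
library(sparknlp)

# Connect to a local Spark cluster (adjust master/config as needed)
sc <- spark_connect(master = "local")

# Typical Spark NLP pre-processing: raw text -> document -> tokens
document_assembler <- nlp_document_assembler(
  sc, input_col = "text", output_col = "document"
)
tokenizer <- nlp_tokenizer(
  sc, input_cols = c("document"), output_col = "token"
)

# Load a pretrained XLM-RoBERTa token classifier.
# Omitting name/lang/remote_loc falls back to the default pretrained model.
token_classifier <- nlp_xlm_roberta_token_classification_pretrained(
  sc,
  input_cols = c("document", "token"),
  output_col = "ner",
  case_sensitive = TRUE,
  max_sentence_length = 128
)

# Compose into a Spark ML pipeline; fit and transform a tbl_spark as usual
pipeline <- ml_pipeline(document_assembler, tokenizer, token_classifier)
```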


r-spark/sparknlp documentation built on Oct. 15, 2022, 10:50 a.m.