View source: R/sentence_detector_dl.R
| nlp_sentence_detector_dl | R Documentation |
Spark ML estimator that trains a deep-learning-based sentence boundary detector. See https://nlp.johnsnowlabs.com/docs/en/annotators
nlp_sentence_detector_dl(
x,
input_cols,
output_col,
epochs_number = NULL,
impossible_penultimates = NULL,
model = NULL,
output_logs_path = NULL,
validation_split = NULL,
explode_sentences = NULL,
uid = random_string("sentence_detector_dl_")
)
x |
A spark_connection, ml_pipeline, or tbl_spark. |
input_cols |
Input columns. String array. |
output_col |
Output column. String. |
epochs_number |
maximum number of epochs to train |
impossible_penultimates |
impossible penultimates, i.e. words that should not be considered as occurring immediately before a sentence boundary |
model |
model architecture |
output_logs_path |
path to folder to output logs |
validation_split |
the proportion of the training dataset to validate against the model on each epoch |
explode_sentences |
a flag indicating whether to split sentences into different Dataset rows. |
uid |
A character string used to uniquely identify the ML estimator. |
The object returned depends on the class of x.
spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to
a Spark Estimator object and can be used to compose
Pipeline objects.
ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with
the NLP estimator appended to the pipeline.
tbl_spark: When x is a tbl_spark, an estimator is constructed then
immediately fit with the input tbl_spark, returning an NLP model.
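Because the function is an estimator, a typical workflow composes it into a pipeline after a document assembler. The following is a minimal sketch, assuming a local Spark connection and training data with a string column named "text"; the column names, the "cnn" model architecture, and the parameter values are illustrative, not prescriptive.

```r
library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Compose the estimator into a pipeline: raw text is first wrapped into
# document annotations, which the sentence detector consumes as input.
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_sentence_detector_dl(
    input_cols = c("document"),
    output_col = "sentence",
    epochs_number = 5,          # maximum training epochs
    model = "cnn",              # assumed model architecture
    validation_split = 0.1,     # validate on 10% of training data each epoch
    explode_sentences = FALSE   # keep all sentences in one Dataset row
  )

# Fitting against a tbl_spark of training text returns an NLP model:
# fitted <- ml_fit(pipeline, training_tbl)
```

Passing a `tbl_spark` directly as `x` collapses the construct-then-fit step into one call, as described above.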