nlp_entity_ruler    R Documentation

Description

Spark ML estimator that matches exact strings or regex patterns provided in an external resource against documents and assigns them a named entity. See https://nlp.johnsnowlabs.com/docs/en/annotators#entityruler for details.

Usage
nlp_entity_ruler(
x,
input_cols,
output_col,
case_sensitive = NULL,
enable_pattern_regex = NULL,
patterns_resource_path = NULL,
patterns_resource_read_as = NULL,
patterns_resource_options = NULL,
storage_path = NULL,
storage_ref = NULL,
use_storage = NULL,
uid = random_string("entity_ruler_")
)
Arguments

x: A spark_connection, ml_pipeline, or tbl_spark.
input_cols: Input columns. String array.
output_col: Output column. String.
case_sensitive: Whether to ignore case in index lookups (Default depends on model).
enable_pattern_regex: Enables regex pattern matching (Default: false).
patterns_resource_path: Resource in JSON or CSV format that maps entities to patterns (Default: null). A sketch of the JSON layout follows this list.
patterns_resource_read_as: TEXT or SPARK_DATASET.
patterns_resource_options: Options passed to the reader (Default: list("format" = "JSON")).
storage_path: Path to the external resource.
storage_ref: Unique identifier for storage (Default: this.uid).
use_storage: Whether to use RocksDB storage to serialize patterns (Default: true).
uid: A character string used to uniquely identify the ML estimator.
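As a rough illustration of the patterns resource, the sketch below writes a small JSON file from R. The field names ("id", "label", "patterns") follow the Spark NLP EntityRuler documentation, but the entries and the file name patterns.json are hypothetical.

# Hedged sketch: write a hypothetical patterns resource in JSON form.
writeLines('[
  { "id": "locations", "label": "LOCATION", "patterns": ["Winterfell"] },
  { "id": "names",     "label": "PERSON",   "patterns": ["Jon", "John Snow"] }
]', "patterns.json")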
Value

The object returned depends on the class of x.

spark_connection: When x is a spark_connection, the function returns an instance of an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.

tbl_spark: When x is a tbl_spark, an estimator is constructed and then immediately fit with the input tbl_spark, returning an NLP model.
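Examples

The following is a minimal sketch rather than a definitive recipe: it assumes a local Spark connection, the companion constructors nlp_document_assembler() and nlp_tokenizer() from the same package (their argument names are assumptions here), and the hypothetical patterns.json file written in the sketch above.

library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Copy a toy data frame with a "text" column to Spark.
df <- data.frame(text = "Jon Snow lives in Winterfell.")
text_tbl <- sdf_copy_to(sc, df, overwrite = TRUE)

# Compose a pipeline: raw text -> document -> tokens -> ruled entities.
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_tokenizer(input_cols = c("document"), output_col = "token") %>%
  nlp_entity_ruler(
    input_cols = c("document", "token"),
    output_col = "entities",
    patterns_resource_path = "patterns.json",
    patterns_resource_read_as = "TEXT",
    patterns_resource_options = list("format" = "JSON")
  )

# Fit the pipeline and apply it to the same table.
model <- ml_fit(pipeline, text_tbl)
result <- ml_transform(model, text_tbl)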