| nlp_normalizer | R Documentation |
Spark ML estimator that removes all dirty characters from text following a regex pattern and transforms words based on a provided dictionary. See https://nlp.johnsnowlabs.com/docs/en/annotators#normalizer
nlp_normalizer(
x,
input_cols,
output_col,
cleanup_patterns = NULL,
lowercase = NULL,
dictionary_path = NULL,
dictionary_delimiter = NULL,
dictionary_read_as = "LINE_BY_LINE",
dictionary_options = list(format = "text"),
uid = random_string("normalizer_")
)
x | A spark_connection, ml_pipeline, or tbl_spark.
input_cols | Input columns. String array.
output_col | Output column. String.
cleanup_patterns | Regular expressions list for normalization; defaults to [^A-Za-z].
lowercase | Whether to lowercase tokens; defaults to TRUE.
dictionary_path | Path to a delimited text file of words to be transformed into replacements (see the sketch after this table).
dictionary_delimiter | Delimiter used in the dictionary text file.
dictionary_read_as | How to read the dictionary file: LINE_BY_LINE or SPARK_DATASET.
dictionary_options | Options to pass to the Spark reader.
uid | A character string used to uniquely identify the ML estimator.
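The dictionary arguments expect a plain text file in which each line pairs a word with its replacement, split on dictionary_delimiter. Below is a minimal sketch of building such a file from R; the field order on each line (word to replace first, replacement second) is an assumption, so check the Spark NLP Normalizer documentation linked above.

# Hypothetical dictionary file: one mapping per line, fields separated by
# the value later passed as dictionary_delimiter. The field order is an
# assumption, not taken from this page.
dict_path <- tempfile(fileext = ".txt")
writeLines(c("gr8,great", "pls,please"), dict_path)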
The object returned depends on the class of x.
spark_connection: When x is a spark_connection, the function returns an ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.
ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the NLP estimator appended to the pipeline.
tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.
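A minimal usage sketch, assuming a local sparklyr connection and the companion nlp_document_assembler() and nlp_tokenizer() constructors from this package; those constructor names, their argument names, and the column names used here are assumptions rather than details documented on this page.

library(sparklyr)
library(sparknlp)

sc <- spark_connect(master = "local")

# Compose a pipeline; because x is an ml_pipeline, nlp_normalizer() appends
# the estimator to the pipeline rather than fitting anything yet.
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_tokenizer(input_cols = "document", output_col = "token") %>%
  nlp_normalizer(
    input_cols = "token",
    output_col = "normalized",
    lowercase = TRUE,
    cleanup_patterns = list("[^A-Za-z]"),
    dictionary_path = dict_path,       # file sketched above (assumption)
    dictionary_delimiter = ","
  )

# Fitting the pipeline on a tbl_spark of text yields a pipeline model whose
# normalizer stage is the fitted NLP model described under Value, e.g.:
# model <- ml_fit(pipeline, text_tbl)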