nlp_text_matcher | R Documentation
Spark ML transformer to match entire phrases (by token) provided in a file against a Document. See https://nlp.johnsnowlabs.com/docs/en/annotators#textmatcher
nlp_text_matcher(
  x,
  input_cols,
  output_col,
  path,
  read_as = "TEXT",
  options = NULL,
  build_from_tokens = TRUE,
  uid = random_string("text_matcher_")
)
x: A spark_connection, ml_pipeline, or a tbl_spark.

input_cols: Input columns. String array.

output_col: Output column. String.

path: A path to a file that contains the entities in the specified format.

read_as: The format the entities file is read as. Defaults to "TEXT".

options: A named list containing additional parameters. Defaults to "format": "text".

build_from_tokens: Whether the TextMatcher should take the CHUNK from TOKEN or not (TRUE or FALSE).

uid: A character string used to uniquely identify the ML estimator.
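For read_as = "TEXT", the entities file is a plain-text list of phrases to match, one phrase per line. A minimal sketch (the file name and phrases below are invented for illustration):

```r
# Write a hypothetical entities file: each line is one phrase to match
writeLines(
  c("heart attack",
    "myocardial infarction"),
  "entities.txt"
)
```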
The object returned depends on the class of x.

spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the NLP estimator appended to the pipeline.

tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.

When x is a spark_connection, the function returns a TextMatcher transformer. When x is a ml_pipeline, the pipeline with the TextMatcher added. When x is a tbl_spark, a transformed tbl_spark (note that the Dataframe passed in must have the input_cols specified).
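As a usage sketch: assuming an existing spark_connection sc, a tbl_spark text_tbl with a "text" column, a local phrase file entities.txt, and the companion annotators nlp_document_assembler() and nlp_tokenizer() from the same package (these surrounding names are assumptions, not taken from this page), a TextMatcher could be composed into a pipeline like this:

```r
library(sparklyr)
library(sparknlp)  # assumed package providing the nlp_* annotators

# sc: an existing spark_connection; text_tbl: a tbl_spark with a "text" column
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_tokenizer(input_cols = c("document"), output_col = "token") %>%
  nlp_text_matcher(
    input_cols = c("document", "token"),  # TextMatcher consumes DOCUMENT and TOKEN
    output_col = "entities",
    path = "entities.txt",                # entity phrases, one per line
    read_as = "TEXT",
    build_from_tokens = TRUE
  )

# Fit the pipeline and apply it to the data
model <- ml_fit(pipeline, text_tbl)
matched_tbl <- ml_transform(model, text_tbl)
```

Passing text_tbl directly as x would instead construct and fit the estimator in one step, per the Value section above.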