nlp_doc2chunk | R Documentation |
Spark ML transformer that converts DOCUMENT type annotations into CHUNK type annotations using the contents of a chunk column. The chunk text must be contained within the input DOCUMENT, and may be either a string or an array of strings (controlled by the is_array parameter). Useful for annotators that require a CHUNK type input. See https://nlp.johnsnowlabs.com/docs/en/transformers#doc2chunk
nlp_doc2chunk(
  x,
  input_cols,
  output_col,
  is_array = NULL,
  chunk_col = NULL,
  start_col = NULL,
  start_col_by_token_index = NULL,
  fail_on_missing = NULL,
  lowercase = NULL,
  uid = random_string("doc2chunk_")
)
x |
A spark_connection, ml_pipeline, or tbl_spark. |
input_cols |
Input columns. String array. |
output_col |
Output column. String. |
is_array |
Whether the target chunk_col is of type ArrayType<StringType>. |
chunk_col |
String or string-array column containing the chunks that belong to the input column target. |
start_col |
Target INT column pointing to the token index (split by whitespace). |
start_col_by_token_index |
Whether start_col holds a token index (split by whitespace) or a character index. |
fail_on_missing |
Whether to fail when a chunk is not found within the input column. |
lowercase |
Whether to lowercase everything before matching, to increase the match rate. |
uid |
A character string used to uniquely identify the ML estimator. |
The object returned depends on the class of x.

spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Estimator object and can be used to compose Pipeline objects.

ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the NLP estimator appended to the pipeline.

tbl_spark: When x is a tbl_spark, an estimator is constructed then immediately fit with the input tbl_spark, returning an NLP model.
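A minimal usage sketch, assuming the sparknlp and sparklyr packages and a local Spark installation with Spark NLP available. The data, column names ("text", "chunk", "chunk_annotation"), and parameter values below are illustrative, not prescribed by this page:

```r
library(sparklyr)
library(sparknlp)

# Connect to a local Spark cluster (assumes Spark and Spark NLP are installed)
sc <- spark_connect(master = "local")

# Illustrative data: each row has a document and a chunk known to occur in it
text_tbl <- sdf_copy_to(sc, data.frame(
  text  = "Spark NLP is an NLP library built on Apache Spark",
  chunk = "NLP library"
))

# Assemble raw text into DOCUMENT annotations, then convert the chunk
# column into CHUNK annotations contained within each document
pipeline <- ml_pipeline(sc) %>%
  nlp_document_assembler(input_col = "text", output_col = "document") %>%
  nlp_doc2chunk(
    input_cols = c("document"),
    output_col = "chunk_annotation",
    chunk_col = "chunk",
    is_array = FALSE,
    fail_on_missing = TRUE
  )

result <- ml_fit_and_transform(pipeline, text_tbl)
```

Passing the ml_pipeline as x here appends the transformer to the pipeline; passing the tbl_spark directly would instead fit and apply it immediately, per the Value section above.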