View source: R/tokenizer_set.R
Description

Tokenizer operations
Usage

tokenizer_set(conn, index, body, ...)
Arguments

conn     an Elasticsearch connection object, see connect()
index    (character) A character vector of index names
body     Query, either a list or JSON
...      Curl options passed on to crul::HttpClient
Author(s)

Scott Chamberlain myrmecocystus@gmail.com
References

https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-tokenizers.html
Examples

## Not run:
# connection setup
(x <- connect())
# set tokenizer
## NGram tokenizer
body <- '{
"settings" : {
"analysis" : {
"analyzer" : {
"my_ngram_analyzer" : {
"tokenizer" : "my_ngram_tokenizer"
}
},
"tokenizer" : {
"my_ngram_tokenizer" : {
"type" : "nGram",
"min_gram" : "2",
"max_gram" : "3",
"token_chars": [ "letter", "digit" ]
}
}
}
}
}'
if (index_exists(x, 'test1')) index_delete(x, 'test1')
tokenizer_set(x, index = "test1", body = body)
index_analyze(x, text = "hello world", index = "test1",
  analyzer = 'my_ngram_analyzer')
## End(Not run)
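The body argument also accepts an R list in place of a JSON string. A minimal sketch of the same NGram settings built as a nested list (the list shape simply mirrors the JSON above; the commented-out call assumes a running Elasticsearch cluster and a connection object `x` from connect()):

```r
# Same settings as the JSON string above, expressed as an R list;
# `body` accepts either form
body_list <- list(
  settings = list(
    analysis = list(
      analyzer = list(
        my_ngram_analyzer = list(tokenizer = "my_ngram_tokenizer")
      ),
      tokenizer = list(
        my_ngram_tokenizer = list(
          type = "nGram",
          min_gram = "2",
          max_gram = "3",
          token_chars = list("letter", "digit")
        )
      )
    )
  )
)

# requires a live cluster; `x` as created by connect() above
# tokenizer_set(x, index = "test1", body = body_list)
```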