indices: Index API operations


Description

Index API operations

Usage

index_get(
  conn,
  index = NULL,
  features = NULL,
  raw = FALSE,
  verbose = TRUE,
  ...
)

index_exists(conn, index, ...)

index_delete(conn, index, raw = FALSE, verbose = TRUE, ...)

index_create(conn, index = NULL, body = NULL, raw = FALSE, verbose = TRUE, ...)

index_recreate(
  conn,
  index = NULL,
  body = NULL,
  raw = FALSE,
  verbose = TRUE,
  ...
)

index_close(conn, index, ...)

index_open(conn, index, ...)

index_stats(
  conn,
  index = NULL,
  metric = NULL,
  completion_fields = NULL,
  fielddata_fields = NULL,
  fields = NULL,
  groups = NULL,
  level = "indices",
  ...
)

index_settings(conn, index = "_all", ...)

index_settings_update(conn, index = NULL, body, ...)

index_segments(conn, index = NULL, ...)

index_recovery(conn, index = NULL, detailed = FALSE, active_only = FALSE, ...)

index_optimize(
  conn,
  index = NULL,
  max_num_segments = NULL,
  only_expunge_deletes = FALSE,
  flush = TRUE,
  wait_for_merge = TRUE,
  ...
)

index_forcemerge(
  conn,
  index = NULL,
  max_num_segments = NULL,
  only_expunge_deletes = FALSE,
  flush = TRUE,
  ...
)

index_upgrade(conn, index = NULL, wait_for_completion = FALSE, ...)

index_analyze(
  conn,
  text = NULL,
  field = NULL,
  index = NULL,
  analyzer = NULL,
  tokenizer = NULL,
  filters = NULL,
  char_filters = NULL,
  body = list(),
  ...
)

index_flush(
  conn,
  index = NULL,
  force = FALSE,
  full = FALSE,
  wait_if_ongoing = FALSE,
  ...
)

index_clear_cache(
  conn,
  index = NULL,
  filter = FALSE,
  filter_keys = NULL,
  fielddata = FALSE,
  query_cache = FALSE,
  id_cache = FALSE,
  ...
)

index_shrink(conn, index, index_new, body = NULL, ...)

Arguments

conn

an Elasticsearch connection object, see connect()

index

(character) A character vector of index names

features

(character) A single feature. One of settings, mappings, or aliases

raw

(logical) If FALSE (default), data is parsed to a list. If TRUE, raw JSON is returned.

verbose

(logical) If TRUE (default), the URL of the request is printed to the console.

...

Curl args passed on to crul::HttpClient

body

Query body, either a list or JSON.

metric

(character) A character vector of metrics to display. Possible values: "_all", "completion", "docs", "fielddata", "filter_cache", "flush", "get", "id_cache", "indexing", "merge", "percolate", "refresh", "search", "segments", "store", "warmer".

completion_fields

(character) A character vector of fields for completion metric (supports wildcards)

fielddata_fields

(character) A character vector of fields for fielddata metric (supports wildcards)

fields

(character) A character vector of fields for the fielddata and completion metrics (supports wildcards)

groups

(character) A character vector of search groups for search statistics.

level

(character) Return stats aggregated on "cluster", "indices" (default) or "shards"

detailed

(logical) Whether to display detailed information about shard recovery. Default: FALSE

active_only

(logical) Display only those recoveries that are currently on-going. Default: FALSE

max_num_segments

(character) The number of segments the index should be merged into. Default: "dynamic"

only_expunge_deletes

(logical) Specify whether the operation should only expunge deleted documents

flush

(logical) Specify whether the index should be flushed after performing the operation. Default: TRUE

wait_for_merge

(logical) Specify whether the request should block until the merge process is finished. Default: TRUE

wait_for_completion

(logical) Should the request wait for the upgrade to complete. Default: FALSE

text

The text on which the analysis should be performed (when request body is not used)

field

Use the analyzer configured for this field (instead of passing the analyzer name)

analyzer

The name of the analyzer to use

tokenizer

The name of the tokenizer to use for the analysis

filters

A character vector of filters to use for the analysis

char_filters

A character vector of character filters to use for the analysis

force

(logical) Whether a flush should be forced even if it is not necessarily needed, i.e., if no changes will be committed to the index.

full

(logical) If set to TRUE, a new index writer is created, and settings that have changed related to the index writer will be refreshed.

wait_if_ongoing

(logical) If TRUE, the flush operation will block until it can be executed when another flush operation is already running. Default: FALSE, in which case an exception is thrown at the shard level if another flush operation is already running.

filter

(logical) Clear filter caches

filter_keys

(character) A vector of keys to clear when using the filter_cache parameter (default: all)

fielddata

(logical) Clear field data

query_cache

(logical) Clear query caches

id_cache

(logical) Clear ID caches for parent/child

index_new

(character) An index name; required. Only applies to index_shrink()

Details

index_analyze: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-analyze.html The Elasticsearch endpoint can accept the text in the request body, but this function passes it as a URL parameter in a GET request for simplicity.
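
For example, a minimal sketch (assuming a running local cluster and a connection made with connect(); the sample text is made up):

x <- connect()
# the text travels as a URL parameter, not a request body
index_analyze(x, text = "a quick brown fox", analyzer = "standard")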

index_flush: https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-flush.html From the ES website: The flush process of an index basically frees memory from the index by flushing data to the index storage and clearing the internal transaction log. By default, Elasticsearch uses memory heuristics in order to automatically trigger flush operations as required in order to clear memory.
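
A minimal sketch of an explicit flush (the index name is illustrative; the parsed response typically includes a _shards summary):

x <- connect()
res <- index_flush(x, index = "shakespeare")
res$`_shards`  # total/successful/failed shard counts for the flush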

index_status: The API endpoint for this function was deprecated in Elasticsearch v1.2.0, and will likely be removed soon. Use index_recovery() instead.

index_settings_update: There are a lot of options you can change with this function. See https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html for all the options.

index settings: See https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules.html for the static and dynamic settings you can set on indices.
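
As an illustration, a hedged sketch updating a dynamic setting; the index name "settings_demo" is made up for the example. Static settings such as number_of_shards can only be set at index creation time, so attempting to update them on an open index will error:

x <- connect()
if (index_exists(x, "settings_demo")) index_delete(x, "settings_demo")
index_create(x, "settings_demo")
# number_of_replicas is dynamic, so it can be changed on a live index
index_settings_update(x, "settings_demo",
  body = list(index = list(number_of_replicas = 2)))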

Mappings

The "keyword" type is not supported in Elasticsearch < v5. If you do use a mapping with "keyword" type in Elasticsearch < v5 index_create() should fail.

Author(s)

Scott Chamberlain myrmecocystus@gmail.com

References

https://www.elastic.co/guide/en/elasticsearch/reference/current/indices.html

Examples

## Not run: 
# connection setup
(x <- connect())

# get information on an index
index_get(x, index='shakespeare')
## this is the same as running index_settings(x, 'shakespeare')
index_get(x, index='shakespeare', features='settings')
index_get(x, index='shakespeare', features='mappings')
index_get(x, index='shakespeare', features='aliases')

# check for index existence
index_exists(x, index='shakespeare')
index_exists(x, index='plos')

# create an index
if (index_exists(x, 'twitter')) index_delete(x, 'twitter')
index_create(x, index='twitter')
if (index_exists(x, 'things')) index_delete(x, 'things')
index_create(x, index='things')
if (index_exists(x, 'plos')) index_delete(x, 'plos')
index_create(x, index='plos')

# re-create an index
index_recreate(x, "deer")
index_recreate(x, "deer", verbose = FALSE)

# delete an index
if (index_exists(x, 'plos')) index_delete(x, index='plos')

## with a body
body <- '{
 "settings" : {
  "index" : {
    "number_of_shards" : 3,
    "number_of_replicas" : 2
   }
 }
}'
if (index_exists(x, 'alsothat')) index_delete(x, 'alsothat')
index_create(x, index='alsothat', body = body)
## with read only
body <- '{
 "settings" : {
  "index" : {
    "blocks" : {
      "read_only" : true
    }
   }
 }
}'
# index_create(x, index='myindex', body = body)
# then this delete call should fail with something like:
## > Error: 403 - blocked by: [FORBIDDEN/5/index read-only (api)]
# index_delete(x, index='myindex')

## with mappings
body <- '{
 "mappings": {
   "properties": {
     "location" : {"type" : "geo_point"}
   }
 }
}'
if (!index_exists(x, 'gbifnewgeo')) index_create(x, index='gbifnewgeo', body=body)
gbifgeo <- system.file("examples", "gbif_geosmall.json", package = "elastic")
docs_bulk(x, gbifgeo)

# close an index
index_create(x, 'plos')
index_close(x, 'plos')

# open an index
index_open(x, 'plos')

# Get stats on an index
index_stats(x, 'plos')
index_stats(x, c('plos','gbif'))
index_stats(x, c('plos','gbif'), metric='refresh')
index_stats(x, metric = "indexing")
index_stats(x, 'shakespeare', metric='completion')
index_stats(x, 'shakespeare', metric='completion', completion_fields = "completion")
index_stats(x, 'shakespeare', metric='fielddata')
index_stats(x, 'shakespeare', metric='fielddata', fielddata_fields = "evictions")
index_stats(x, 'plos', level="indices")
index_stats(x, 'plos', level="cluster")
index_stats(x, 'plos', level="shards")

# Get segments information that a Lucene index (shard level) is built with
index_segments(x)
index_segments(x, 'plos')
index_segments(x, c('plos','gbif'))

# Get recovery information that provides insight into on-going index shard recoveries
index_recovery(x)
index_recovery(x, 'plos')
index_recovery(x, c('plos','gbif'))
index_recovery(x, "plos", detailed = TRUE)
index_recovery(x, "plos", active_only = TRUE)

# Optimize an index, or many indices
if (x$es_ver() < 500) {
  ### ES < v5 - use optimize
  index_optimize(x, 'plos')
  index_optimize(x, c('plos','gbif'))
} else {
  ### ES >= v5 - use forcemerge
  index_forcemerge(x, 'plos')
}

# Upgrade one or more indices to the latest format. The upgrade process converts any
# segments written with previous formats.
if (x$es_ver() < 500) {
  index_upgrade(x, 'plos')
  index_upgrade(x, c('plos','gbif'))
}

# Perform the analysis process on text and return the token breakdown
index_analyze(x, text = 'this is a test', analyzer='standard')
index_analyze(x, text = 'this is a test', analyzer='whitespace')
index_analyze(x, text = 'this is a test', analyzer='stop')
index_analyze(x, text = 'this is a test', tokenizer='keyword',
  filters='lowercase')
index_analyze(x, text = 'this is a test', tokenizer='keyword',
  filters='lowercase', char_filters='html_strip')
index_analyze(x, text = 'this is a test', index = 'plos',
  analyzer="standard")
index_analyze(x, text = 'this is a test', index = 'shakespeare',
  analyzer="standard")

## NGram tokenizer
body <- '{
        "settings" : {
             "analysis" : {
                 "analyzer" : {
                     "my_ngram_analyzer" : {
                         "tokenizer" : "my_ngram_tokenizer"
                     }
                 },
                 "tokenizer" : {
                     "my_ngram_tokenizer" : {
                         "type" : "nGram",
                         "min_gram" : "2",
                         "max_gram" : "3",
                         "token_chars": [ "letter", "digit" ]
                     }
                 }
             }
      }
}'
if (index_exists(x, "shakespeare2")) index_delete(x, "shakespeare2")
tokenizer_set(x, index = "shakespeare2", body=body)
index_analyze(x, text = "art thouh", index = "shakespeare2",
  analyzer='my_ngram_analyzer')

# Explicitly flush one or more indices.
index_flush(x, index = "plos")
index_flush(x, index = "shakespeare")
index_flush(x, index = c("plos","shakespeare"))
index_flush(x, index = "plos", wait_if_ongoing = TRUE)
index_flush(x, index = "plos", verbose = TRUE)

# Clear all caches, or specific caches associated with one or more indices
index_clear_cache(x)
index_clear_cache(x, index = "plos")
index_clear_cache(x, index = "shakespeare")
index_clear_cache(x, index = c("plos","shakespeare"))
index_clear_cache(x, filter = TRUE)

# Index settings
## get settings
index_settings(x)
index_settings(x, "_all")
index_settings(x, 'gbif')
index_settings(x, c('gbif','plos'))
index_settings(x, '*s')
## update settings
if (index_exists(x, 'foobar')) index_delete(x, 'foobar')
index_create(x, "foobar")
settings <- list(index = list(number_of_replicas = 4))
index_settings_update(x, "foobar", body = settings)
index_get(x, "foobar")$foobar$settings

# Shrink index - Can only shrink an index if it has >1 shard
## index must be read only, a copy of every shard in the index must
## reside on the same node, and the cluster health status must be green
### index_settings_update call to change these
settings <- list(
  index.routing.allocation.require._name = "shrink_node_name",
  index.blocks.write = "true"
)
if (index_exists(x, 'barbarbar')) index_delete(x, 'barbarbar')
index_create(x, "barbarbar")
index_settings_update(x, "barbarbar", body = settings)
cat_recovery(x, index='barbarbar')
# index_shrink(x, "barbarbar", "barfoobbar")

## End(Not run)
