knitr::opts_chunk$set( comment = "#>", collapse = TRUE, warning = FALSE, message = FALSE )
A general purpose R interface to Elasticsearch
This client is developed following the latest stable releases, currently v7.10.0. It is generally compatible with older versions of Elasticsearch. Unlike the Python client, we try to keep as much compatibility as possible within a single version of this client, as that's an easier setup in the R world.
Running ES locally on your machine is fine, but be careful about putting ES on a server with a public IP address - make sure to think about security.
Stable version from CRAN
install.packages("elastic")
Development version from GitHub
remotes::install_github("ropensci/elastic")
library('elastic')
w/ Docker
Pull the official elasticsearch image
# elasticsearch needs to have a version tag. We're pulling 7.10.1 here
docker pull elasticsearch:7.10.1
Then start up a container
docker run -d -p 9200:9200 elasticsearch:7.10.1
Then elasticsearch should be available on port 9200; try curl localhost:9200 and you should get the familiar message indicating ES is on.
If you're using boot2docker, you'll need to use the IP address in place of localhost. Get it by doing boot2docker ip.
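To double check from R, you can create a connection to the dockerized instance and ping it. A minimal sketch; the host and port assume the docker run command above:
library(elastic)
# connect to the container started above; use the boot2docker IP for host if needed
x <- connect(host = "localhost", port = 9200)
x$ping()  # should return cluster name/version info if ES is up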
on OSX
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-darwin-x86_64.tar.gz
tar -zxvf elasticsearch-7.10.0-darwin-x86_64.tar.gz
sudo mv elasticsearch-7.10.0 /usr/local
cd /usr/local
Delete the symlinked elasticsearch directory: rm -rf elasticsearch
Add a symlink: sudo ln -s elasticsearch-7.10.0 elasticsearch (replace the version with your version)
You can also install via Homebrew: brew install elasticsearch
Note: Elasticsearch 1.6 and greater wants Java 8 or greater. I downloaded Java 8 from http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html and it seemed to work great.
I am not totally clear on best practice here, but from what I understand, when you upgrade to a new version of Elasticsearch, place the old elasticsearch/data and elasticsearch/config directories into the new installation (the elasticsearch/ dir). The new Elasticsearch instance with the replaced data and config directories should automatically update the data to the new version and start working. Maybe if you use Homebrew on a Mac to upgrade it takes care of this for you - not sure.
Obviously, upgrading Elasticsearch while keeping it running is a different thing (some help here from Elastic).
cd /usr/local/elasticsearch
bin/elasticsearch
I created a little bash shortcut called es that does both of the above commands in one step (cd /usr/local/elasticsearch && bin/elasticsearch).
The function connect() is used before doing anything else to set the connection details to your remote or local Elasticsearch store. The details created by connect() are stored in a connection object, which you pass to elastic functions.
x <- connect(port = 9200)
If you're following along here with a local instance of Elasticsearch, you'll use x below to do more stuff.
For AWS-hosted Elasticsearch, make sure to specify path = "" and the correct port and transport_schema pair.
connect(host = <aws_es_endpoint>, path = "", port = 80, transport_schema = "http")
# or
connect(host = <aws_es_endpoint>, path = "", port = 443, transport_schema = "https")
If you are using Elastic Cloud or an installation with authentication (X-Pack), make sure to specify path = "", user = "", pwd = "", and the correct port and transport_schema pair.
connect(host = <ec_endpoint>, path = "", user="test", pwd = "1234", port = 9243, transport_schema = "https")
Elasticsearch has a bulk load API for loading data in quickly. The format is pretty weird though. It's sort of JSON, but would pass no JSON linter. I include a few data sets in elastic so it's easy to get up and running, and so when you run examples in this package they'll actually run the same way (hopefully).
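For reference, the bulk format is newline-delimited JSON: an action line, then the document itself, for every record. A minimal sketch (the mytest index and its fields are made up), written to a temporary file and loaded with docs_bulk() using the connection x from above:
# each document is preceded by an action line naming the target index and id
# (the "mytest" index and the "title" field are hypothetical)
bulk_lines <- c(
  '{"index": {"_index": "mytest", "_id": 1}}',
  '{"title": "first document"}',
  '{"index": {"_index": "mytest", "_id": 2}}',
  '{"title": "second document"}'
)
tmp <- tempfile(fileext = ".json")
writeLines(bulk_lines, tmp)
docs_bulk(x, tmp)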
I have prepared a non-exported function useful for preparing the weird format that Elasticsearch wants for bulk data loads. It is somewhat specific to PLOS data (see below), but you could modify it for your purposes. See make_bulk_plos() and make_bulk_gbif() here.
Elasticsearch provides some data on Shakespeare plays. I've provided a subset of this data in this package. Get the path for the file specific to your machine:
library(elastic)
x <- connect()

if (x$es_ver() < 600) {
  shakespeare <- system.file("examples", "shakespeare_data.json", package = "elastic")
} else {
  shakespeare <- system.file("examples", "shakespeare_data_.json", package = "elastic")
  shakespeare <- type_remover(shakespeare)
}
shakespeare <- system.file("examples", "shakespeare_data.json", package = "elastic")
# If you're on Elastic v6 or greater, use this one
shakespeare <- system.file("examples", "shakespeare_data_.json", package = "elastic")
shakespeare <- type_remover(shakespeare)
Then load the data into Elasticsearch:
make sure to create your connection object with connect()
# x <- connect() # do this now if you didn't do this above
invisible(docs_bulk(x, shakespeare))
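To check that the load worked, you can count documents (assuming the bulk file targets an index named shakespeare):
# number of documents in the shakespeare index (index name assumed from the bulk file)
count(x, index = "shakespeare")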
If you need some big data to play with, the shakespeare dataset is a good one to start with. You can get the whole thing and pop it into Elasticsearch (beware: this may take up to 10 minutes or so):
curl -XGET https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json > shakespeare.json
curl -XPUT localhost:9200/_bulk --data-binary @shakespeare.json
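If you prefer to stay in R, roughly the same thing can be done with download.file() and docs_bulk(). A sketch only; on Elasticsearch v6 or greater you may need to run type_remover() on the downloaded file first, as above:
# download the full shakespeare bulk file, then load it with docs_bulk()
shakespeare_full <- file.path(tempdir(), "shakespeare.json")
download.file(
  "https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json",
  shakespeare_full
)
invisible(docs_bulk(x, shakespeare_full))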
A dataset included in the elastic package is metadata for PLOS scholarly articles. Get the file path, then load:
if (index_exists(x, "plos")) index_delete(x, "plos") plosdat <- system.file("examples", "plos_data.json", package = "elastic") plosdat <- type_remover(plosdat) invisible(docs_bulk(x, plosdat))
A dataset included in the elastic package is data for GBIF species occurrence records. Get the file path, then load:
if (index_exists(x, "gbif")) index_delete(x, "gbif") gbifdat <- system.file("examples", "gbif_data.json", package = "elastic") gbifdat <- type_remover(gbifdat) invisible(docs_bulk(x, gbifdat))
GBIF geo data with a coordinates element to allow geo_shape queries:
if (index_exists(x, "gbifgeo")) index_delete(x, "gbifgeo") gbifgeo <- system.file("examples", "gbif_geo.json", package = "elastic") gbifgeo <- type_remover(gbifgeo) invisible(docs_bulk(x, gbifgeo))
There are more datasets formatted for bulk loading in the sckott/elastic_data GitHub repository. Find it at https://github.com/sckott/elastic_data
Search the plos index and only return 1 result:
Search(x, index = "plos", size = 1)$hits$hits
Search the plos index, query for antibody, and limit to 1 result:
Search(x, index = "plos", q = "antibody", size = 1)$hits$hits
Get document with id=4
docs_get(x, index = 'plos', id = 4)
Get certain fields
docs_get(x, index = 'plos', id = 4, fields = 'id')
Same index and different document ids
docs_mget(x, index = "plos", id = 1:2)
You can optionally get back raw JSON from Search(), docs_get(), and docs_mget() by setting the parameter raw = TRUE.
For example:
(out <- docs_mget(x, index = "plos", id = 1:2, raw = TRUE))
Then parse
jsonlite::fromJSON(out)
HEAD requests don't seem to work, not sure why.
If only GET requests are allowed, a number of functions that require POST requests obviously won't work. A big one is Search(), but you can use Search_uri() to get around this, which uses GET instead of POST; the downside is that you can't pass a more complicated query via the body.
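For example, against the plos index loaded above, a URI-based search looks like this (the query string is carried in the URI, so only a GET request is made):
Search_uri(x, index = "plos", q = "antibody", size = 1)$hits$hits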
A screencast introducing the package: vimeo.com/124659179
Get citation information for elastic in R doing citation(package = 'elastic')