Keys and fast binary search based subset

require(data.table)

This vignette is aimed at those who are already familiar with data.table syntax, its general form, how to subset rows in i, select and compute on columns, add/modify/delete columns by reference in j and group by using by. If you're not familiar with these concepts, please read the "Introduction to data.table" and "Reference semantics" vignettes first.


Data {#data}

We will use the same flights data as in the "Introduction to data.table" vignette.

options(width = 100L)
flights <- fread("flights14.csv")
head(flights)
dim(flights)

Introduction

In this vignette, we will first look at what keys are and how to set, get and use them to subset rows in i; then combine key based subsets with j and by; look at the additional arguments mult and nomatch; and finally compare binary search against vector scans to understand why keyed subsets are so fast.

1) Keys

a) What is a key?

In the "Introduction to data.table" vignette, we saw how to subset rows in i using logical expressions, row numbers and using order(). In this section, we will look at another way of subsetting incredibly fast - using keys.

But first, let's start by looking at data.frames. All data.frames have a row names attribute. Consider the data.frame DF below.

set.seed(1L)
DF = data.frame(ID1 = sample(letters[1:2], 10, TRUE),
                ID2 = sample(1:3, 10, TRUE),
                val = sample(10),
                stringsAsFactors = FALSE,
                row.names = sample(LETTERS[1:10]))
DF

rownames(DF)

We can subset a particular row using its row name as shown below:

DF["C", ]

i.e., row names are more or less an index to rows of a data.frame. However,

  1. Each row is limited to exactly one row name.

    But, a person (for example) has at least two names - a first and a second name. It is useful to organise a telephone directory by surname then first name.

  2. And row names should be unique.

    ```r
    rownames(DF) = sample(LETTERS[1:5], 10, TRUE)
    # Warning: non-unique values when setting 'row.names': 'C', 'D'
    # Error in `.rowNamesDF<-`(x, value = value): duplicate 'row.names' are not allowed
    ```

Now let's convert it to a data.table.

DT = as.data.table(DF)
DT

rownames(DT)

Instead, in data.tables we set and use keys. Think of a key as supercharged rownames.

Keys and their properties {#key-properties}

  1. We can set keys on multiple columns and the columns can be of different types -- integer, numeric, character, factor, integer64 etc.; list and complex types are not supported yet.

  2. Uniqueness is not enforced, i.e., duplicate key values are allowed. Since rows are sorted by key, any duplicates in the key columns will appear consecutively.

  3. Setting a key does two things:

    a. physically reorders the rows of the data.table by the column(s) provided, by reference, always in increasing order.

    b. marks those columns as key columns by setting an attribute called sorted to the data.table.

    Since the rows are reordered, a data.table can have at most one key because it cannot be sorted in more than one way.
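To illustrate the last point, here is a quick sketch using the DT created above: setting a new key simply replaces the existing one.

```r
setkey(DT, ID1)
key(DT)  # "ID1"

setkey(DT, ID2)
key(DT)  # "ID2" -- the previous key is replaced, not added to
```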

For the rest of the vignette, we will work with the flights data set.

b) Set, get and use keys on a data.table

-- How can we set the column origin as key in the data.table flights?

setkey(flights, origin)
head(flights)

## alternatively we can provide character vectors to the function 'setkeyv()'
# setkeyv(flights, "origin") # useful to program with

set* and :=:

In data.table, the := operator and all the set* functions (e.g., setkey(), setorder(), setnames(), etc.) are the only ones which modify the input object by reference.

Once you key a data.table by certain columns, you can subset by querying those key columns using the .() notation in i. Recall that .() is an alias to list().

-- Use the key column origin to subset all rows where the origin airport matches "JFK"

flights[.("JFK")]

## alternatively
# flights[J("JFK")] (or) 
# flights[list("JFK")]

-- How can we get the column(s) a data.table is keyed by?

Using the function key().

key(flights)

c) Keys and multiple columns

To refresh, keys are like supercharged row names. We can set key on multiple columns and they can be of multiple types.

-- How can I set keys on both origin and dest columns?

setkey(flights, origin, dest)
head(flights)

## or alternatively
# setkeyv(flights, c("origin", "dest")) # provide a character vector of column names

key(flights)

-- Subset all rows using key columns where first key column origin matches "JFK" and second key column dest matches "MIA"

flights[.("JFK", "MIA")]

How does the subset work here? {#multiple-key-point}

It can be helpful to think of it as two steps: the matching row indices for "JFK" are first obtained on the first key column, origin, and then, within just those rows, the matching indices for "MIA" are obtained on the second key column, dest. Since the rows are sorted by both columns, each step is a fast binary search.

-- Subset all rows where just the first key column origin matches "JFK"

key(flights)

flights[.("JFK")] ## or in this case simply flights["JFK"], for convenience

-- Subset all rows where just the second key column dest matches "MIA"

flights[.(unique(origin), "MIA")]

What's happening here? Since there is no way to tell data.table to skip a key column in .(), we provide all unique values of the first key column, origin, as the first element. For each of those origins, the rows where dest matches "MIA" are then returned.

2) Combining keys with j and by

All we have seen so far is the same concept -- obtaining row indices in i, but just using a different method -- using keys. It shouldn't be surprising that we can do exactly the same things in j and by as seen from the previous vignettes. We will highlight this with a few examples.

a) Select in j

-- Return arr_delay column as a data.table corresponding to origin = "LGA" and dest = "TPA".

key(flights)
flights[.("LGA", "TPA"), .(arr_delay)]

b) Chaining

-- On the result obtained above, use chaining to order the column in decreasing order.

flights[.("LGA", "TPA"), .(arr_delay)][order(-arr_delay)]

c) Compute or do in j

-- Find the maximum arrival delay corresponding to origin = "LGA" and dest = "TPA".

flights[.("LGA", "TPA"), max(arr_delay)]

d) sub-assign by reference using := in j

We have seen this example already in the Reference semantics vignette. Let's take a look at all the hours available in the flights data.table:

# get all 'hours' in flights
flights[, sort(unique(hour))]

We see that there are 25 unique values in total in the data. Both 0 and 24 hours seem to be present. Let's go ahead and replace 24 with 0, but this time using a key.

setkey(flights, hour)
key(flights)
flights[.(24), hour := 0L]
key(flights)

Now, there shouldn't be any 24 in the hour column.

flights[, sort(unique(hour))]

e) Aggregation using by

Let's set the key back to origin, dest first.

setkey(flights, origin, dest)
key(flights)

-- Get the maximum departure delay for each month corresponding to origin = "JFK". Order the result by month.

ans <- flights["JFK", max(dep_delay), keyby = month]
head(ans)
key(ans)

3) Additional arguments - mult and nomatch

a) The mult argument

We can choose, for each query, if "all" the matching rows should be returned, or just the "first" or "last" using the mult argument. The default value is "all" - what we've seen so far.

-- Subset only the first matching row from all rows where origin matches "JFK" and dest matches "MIA"

flights[.("JFK", "MIA"), mult = "first"]

-- Subset only the last matching row of all the rows where origin matches "LGA", "JFK", "EWR" and dest matches "XNA"

flights[.(c("LGA", "JFK", "EWR"), "XNA"), mult = "last"]

b) The nomatch argument

We can choose if queries that do not match should return NA or be skipped altogether using the nomatch argument.

-- From the previous example, subset all rows only if there's a match

flights[.(c("LGA", "JFK", "EWR"), "XNA"), mult = "last", nomatch = NULL]

4) Binary search vs vector scans

We have seen so far how we can set and use keys to subset. But what's the advantage? For example, instead of doing:

# key by origin,dest columns
flights[.("JFK", "MIA")]

we could have done:

flights[origin == "JFK" & dest == "MIA"]

One obvious advantage is shorter syntax. But even more than that, binary search based subsets are incredibly fast.

Over time, data.table has gained new optimisations, and the latter call is now automatically optimised to use binary search as well. To force the slow vector scan, the key needs to be removed first.

setkey(flights, NULL)
flights[origin == "JFK" & dest == "MIA"]

a) Performance of binary search approach

To illustrate, let's create a sample data.table with 20 million rows and three columns and key it by columns x and y.

set.seed(2L)
N = 2e7L
DT = data.table(x = sample(letters, N, TRUE),
                y = sample(1000L, N, TRUE),
                val = runif(N))
print(object.size(DT), units = "Mb")

DT is ~380MB. It is not really huge, but this will do to illustrate the point.

From what we have seen in the Introduction to data.table section, we can subset those rows where columns x = "g" and y = 877 as follows:

key(DT)
## (1) Usual way of subsetting - vector scan approach
t1 <- system.time(ans1 <- DT[x == "g" & y == 877L])
t1
head(ans1)
dim(ans1)

Now let's try to subset by using keys.

setkeyv(DT, c("x", "y"))
key(DT)
## (2) Subsetting using keys
t2 <- system.time(ans2 <- DT[.("g", 877L)])
t2
head(ans2)
dim(ans2)

identical(ans1$val, ans2$val)
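We can also put the two timings side by side; the exact speedup will vary by machine and data.table version, but the keyed subset should be dramatically faster:

```r
# ratio of elapsed times: vector scan vs. keyed binary search
t1["elapsed"] / t2["elapsed"]
```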

b) Why does keying a data.table result in blazing fast subsets?

To understand that, let's first look at what vector scan approach (method 1) does.

Vector scan approach

The expression x == "g" compares "g" against all 20 million values of column x and produces a logical vector of the same length; y == 877L does the same for column y. The two logical vectors are combined element-wise with &, and the rows where the result is TRUE are returned. This is what we call a vector scan approach, and it is quite inefficient, especially on larger tables and when one needs repeated subsetting, because it has to scan through all the rows each time.

Now let us look at binary search approach (method 2). Recall from Properties of key - setting keys reorders the data.table by key columns. Since the data is sorted, we don't have to scan through the entire length of the column! We can instead use binary search to search a value in O(log n) as opposed to O(n) in case of vector scan approach, where n is the number of rows in the data.table.

Binary search approach

Here's a very simple illustration. Let's consider the (sorted) numbers shown below:

1, 5, 10, 19, 22, 23, 30

Suppose we'd like to find the matching position of the value 1. Since we know the data is sorted, binary search proceeds as follows: start at the middle value, 19; since 1 < 19, discard the right half and search the left half (1, 5, 10); its middle value is 5, and since 1 < 5, move left again and arrive at 1.

A vector scan approach, on the other hand, would have to scan through all the values (here, 7).

It can be seen that with every step we reduce the number of values left to search by half. This is why binary search based subsets are incredibly fast. Since the rows of each column of a data.table are stored contiguously in memory, the operations are also performed in a very cache efficient manner (which further contributes to speed).
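The idea can be sketched as a plain R function. This is only a minimal illustration of the algorithm, not how data.table implements it internally -- data.table's binary search is written in C and handles multiple columns and types:

```r
# minimal binary search on a sorted vector:
# returns the position of 'value', or NA if not found
bsearch <- function(x, value) {
  lo <- 1L
  hi <- length(x)
  while (lo <= hi) {
    mid <- (lo + hi) %/% 2L            # middle position
    if (x[mid] == value) return(mid)
    if (x[mid] < value) lo <- mid + 1L else hi <- mid - 1L
  }
  NA_integer_                          # value not present
}

bsearch(c(1, 5, 10, 19, 22, 23, 30), 1)   # 1
bsearch(c(1, 5, 10, 19, 22, 23, 30), 23)  # 6
```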

In addition, since we obtain the matching row indices directly without having to create those huge logical vectors (equal to the number of rows in a data.table), it is quite memory efficient as well.
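To get a rough feel for that overhead, note that a single intermediate logical vector for the 20-million-row DT above is already sizeable, and the vector scan materialises one per condition:

```r
# one logical value per row; a vector scan creates one such vector
# for each condition (here: x == "g" and y == 877L)
print(object.size(logical(2e7L)), units = "Mb")  # ~76 Mb apiece
```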

Summary

In this vignette, we have learnt another method to subset rows in i by keying a data.table. Setting keys allows us to perform blazing fast subsets by using binary search. In particular, we have seen how to set keys with setkey() / setkeyv() and retrieve them with key(), subset using the .() notation in i on one or more key columns, combine key based subsets with j and by, and use the additional arguments mult and nomatch.

Key based subsets are incredibly fast and are particularly useful when the task involves repeated subsetting. But it may not be always desirable to set key and physically reorder the data.table. In the next vignette, we will address this using a new feature -- secondary indexes.




data.table documentation built on March 7, 2023, 6:16 p.m.