rquery

rquery is a query generator based on Codd's relational algebra (updated to reflect lessons learned from working with R, SQL, and dplyr at big data scale in production). One goal of this experiment is to see if SQL would be more fun if it had a sequential data-flow or pipe notation.

rquery is currently experimental, and not yet recommended for production use.

To install: devtools::install_github("WinVector/rquery").

Discussion

rquery can be an excellent advanced SQL training tool (it shows how to generate some very deep SQL by composing rquery operators). Currently rquery is biased towards the Spark and PostgreSQL SQL dialects.

There are many prior relational algebra inspired specialized query languages. Just a few include:

rquery is realized as a thin translation to an underlying SQL provider. We are trying to put the Codd relational operators front and center (using the original naming, and back-porting SQL progress such as window functions to the appropriate relational operator).

The primary relational operators include:

The primary non-relational (traditional SQL) operators are:

The primary missing relational operators are:

A great benefit of Codd's relational algebra is that it gives one the concepts needed to decompose complex data transformations into sequences of simpler transformations.

Some reasons SQL seems complicated include:

A lot of the grace of the Codd theory can be recovered through the usual trick of changing function composition notation from g(f(x)) to x . f() . g(). This experiment is asking (and not for the first time): "what if SQL were piped?" That is, what if composition were expressed as a left-to-right flow instead of a right-to-left nesting?
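
For example (a small illustration added here, not part of the original pipeline), an ordinary nested call and its piped equivalent, using the dot-pipe %.>% from the wrapr package (available once rquery is attached, as in the examples below):

# nested call: reads inside-out / right to left
exp(sin(5))
# piped form: reads left to right, same value
5 %.>% sin(.) %.>% exp(.)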

Let's work a non-trivial example: the dplyr pipeline from Let’s Have Some Sympathy For The Part-time R User.

library("rquery")
use_spark <- TRUE

if(use_spark) {
  my_db <- sparklyr::spark_connect(version='2.2.0', 
                                   master = "local")
} else {
  # driver <- RPostgreSQL::PostgreSQL()
  driver <- RPostgres::Postgres()
  my_db <- DBI::dbConnect(driver,
                          host = 'localhost',
                          port = 5432,
                          user = 'postgres',
                          password = 'pg')
}


# copy the example data over to the database (or Spark cluster)
d <- dbi_copy_to(my_db, 'd',
                 data.frame(
                   subjectID = c(1, 1, 2, 2),
                   surveyCategory = c(
                     'withdrawal behavior',
                     'positive re-framing',
                     'withdrawal behavior',
                     'positive re-framing'
                   ),
                   assessmentTotal = c(5, 2, 3, 4),
                   irrelevantCol1 = "irrel1",
                   irrelevantCol2 = "irrel2",
                   stringsAsFactors = FALSE),
                 temporary = TRUE,
                 overwrite = !use_spark)

First we show the Spark/database version of the original example data:

class(my_db)
print(d)

d %.>%
  rquery::to_sql(., my_db) %.>%
  DBI::dbGetQuery(my_db, .) %.>%
  knitr::kable(.)

Now we re-write the original calculation in terms of the rquery SQL-generating operators.

scale <- 0.237

dq <- d %.>%
  extend_nse(.,
             probability :=
               exp(assessmentTotal * scale)/
               sum(exp(assessmentTotal * scale)),
             count := count(1),
             partitionby = 'subjectID') %.>%
  extend_nse(.,
             rank := rank(),
             partitionby = 'subjectID',
             orderby = c('probability', 'surveyCategory'))  %.>%
  rename_columns(., 'diagnosis' := 'surveyCategory') %.>%
  select_rows_nse(., rank == count) %.>%
  select_columns(., c('subjectID', 
                      'diagnosis', 
                      'probability')) %.>%
  order_by(., 'subjectID')

We then generate our result:

dq %.>%
  to_sql(., my_db, source_limit = 1000) %.>%
  DBI::dbGetQuery(my_db, .) %.>%
  knitr::kable(.)

We see we have quickly reproduced the original result using the new database operators. This means such a calculation could easily be performed at "big data" scale (using a database or Spark; in that case we would not pull the results back to R, but instead use CREATE TABLE tname AS to build a remote materialized view of the results).
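
A hedged sketch of that remote materialization (the table name d_result is purely illustrative, and the exact CREATE TABLE syntax accepted can vary by backend):

# build the result as a remote table instead of pulling rows back to R
# (sketch only; 'd_result' is an illustrative name)
DBI::dbExecute(my_db,
               paste0("CREATE TABLE d_result AS ",
                      to_sql(dq, my_db)))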

The actual SQL query that produces the result is, in fact, quite involved:

cat(to_sql(dq, my_db, source_limit = 1000))

The query is large, but due to its regular structure it should be very amenable to query optimization.

A feature to notice is that the query was automatically restricted to just the columns actually needed from the source table to complete the calculation. This can decrease data volume and greatly speed up query performance. Our initial experiments show rquery's narrowed queries to be twice as fast as un-narrowed dplyr on a synthetic problem simulating large disk-based queries. We think that if we connected directly to Spark's relational operators (avoiding the SQL layer) we might be able to achieve even faster performance.
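
As a quick sanity check of this narrowing (an added check, not part of the original example), the unused columns should never show up in the generated SQL:

# if the query was narrowed as described, this should return FALSE
grepl("irrelevantCol1", to_sql(dq, my_db), fixed = TRUE)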

The above optimization is possible because the rquery representation is an intelligible tree of nodes, so we can interrogate the tree for facts about the query. For example:

column_names(dq)

tables_used(dq)

columns_used(dq)

Part of the plan is that the additional record-keeping in the operator nodes would let a potentially powerful query optimizer work over the flow before it gets translated to SQL (perhaps an extension of, or successor to, seplyr, which re-plans over dplyr::mutate() expressions). At the very least, restricting to columns that are later used and folding adjacent selects together would be achievable. One should have a good chance at optimization, as the representation is fairly high-level and many of the operators are relational (meaning there are known legal transforms a query optimizer can use). The flow itself is represented as follows:

cat(format(dq))

We can also stand rquery up on non-DBI data sources such as SparkR, and perhaps even data.table.

Conclusion

rquery is still in early development (and not yet ready for extensive use in production), but it is maturing fast. Our current intent is to bring in sponsors, partners, and R community voices to help develop and steer rquery.

if(use_spark) {
  sparklyr::spark_disconnect(my_db)
} else {
  DBI::dbDisconnect(my_db)
}
