A Teradata Backend for dplyr

knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "README-",
  eval = FALSE,
  message = FALSE
)

1. Overview

The package provides a Teradata backend for dplyr.

It lets you work with Teradata Database in the same way you manipulate data frames with dplyr.

library(dplyr.teradata)

# Establish a connection to Teradata
con <- dbConnect(odbc(), 
                 driver = "{Teradata Driver}", DBCName = "host_name_or_IP_address",
                 uid = "user_name", pwd = "*****")
my_table <- tbl(con, "my_table_name")

# Build a query
q <- my_table %>% 
  filter(between(date, "2017-01-01", "2017-01-03")) %>% 
  group_by(date) %>%
  summarise(n = n()) %>%
  arrange(date)

show_query(q)
#> <SQL>
#> SELECT "date", count(*) AS "n"
#> FROM "my_table_name"
#> WHERE ("date" BETWEEN '2017-01-01' AND '2017-01-03')
#> GROUP BY "date"
#> ORDER BY "date"

# Send the query and get its result on R
df <- q %>% collect()
df
#> # A tibble: 3 x 2
#>          date        n
#>        <date>    <int>
#>  1 2017-01-01   123456
#>  2 2017-01-02  7891011
#>  3 2017-01-03 12131415

2. Installation

You can install the dplyr.teradata package from CRAN.

install.packages("dplyr.teradata")

You can also install the development version of the package from GitHub.

install.packages("devtools") # if you have not installed "devtools" package
devtools::install_github("hoxo-m/dplyr.teradata")

The source code for the dplyr.teradata package is available on GitHub at https://github.com/hoxo-m/dplyr.teradata.

3. Motivation

The package provides a Teradata backend for dplyr. It makes it possible to build SQL for Teradata Database in the same way as you manipulate data frames with the dplyr package. It can also send the queries and receive their results in R.

Therefore, you can carry out an entire Teradata analysis without leaving R. This frees you from the error-prone switching between tools and mental contexts.

4. Usage

The package uses the odbc package to connect to the database and the dbplyr package to build SQL.

First, establish an ODBC connection to Teradata:

# Establish a connection to Teradata
con <- dbConnect(odbc(), 
                 driver = "{Teradata Driver}", DBCName = "host_name_or_IP_address",
                 uid = "user_name", pwd = "*****")
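Once the connection is established, you can sanity-check it with the DBI package before building any queries. A minimal sketch (both generics are standard DBI functions implemented by the odbc driver):

```r
# List the tables visible through this connection
DBI::dbListTables(con)

# Run a trivial query to confirm the connection works
DBI::dbGetQuery(con, "SELECT 1 AS ok")
```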

Second, specify a table to build SQL against. To do so, you can use tbl():

# Getting table
my_table <- tbl(con, "my_table_name")

# Getting table in schema
my_table <- tbl(con, in_schema("my_schema", "my_table_name"))

Third, build queries. You can do this in the same way as manipulating data frames with dplyr. For example:

# Build a query
q <- my_table %>% 
  filter(between(date, "2017-01-01", "2017-01-03")) %>% 
  group_by(date) %>%
  summarise(n = n()) %>%
  arrange(date)

n() is a dplyr function that returns the number of rows in the current group; here it is translated to the SQL function count(*).

If you want to show built queries, use show_query():

show_query(q)
#> <SQL>
#> SELECT "date", count(*) AS "n"
#> FROM "my_table_name"
#> WHERE ("date" BETWEEN '2017-01-01' AND '2017-01-03')
#> GROUP BY "date"
#> ORDER BY "date"

Finally, send the built query and retrieve its result in R using collect():

# Send the query and get its result on R
df <- q %>% collect()
df
#> # A tibble: 3 x 2
#>          date        n
#>        <date>    <int>
#>  1 2017-01-01   123456
#>  2 2017-01-02  7891011
#>  3 2017-01-03 12131415
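Besides collect(), dbplyr offers two related verbs that can be useful with large Teradata tables: collapse() wraps the current query as a subquery without executing it, and compute() materializes the result into a (temporary) table on the database. A sketch (the table name is hypothetical):

```r
# Force the query into a subquery, still without touching the database
q2 <- q %>% collapse()

# Materialize the result as a temporary table on Teradata
tmp <- q %>% compute(name = "my_temp_summary", temporary = TRUE)

# Pull the rows into R only when you actually need them
df <- tmp %>% collect()
```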

5. Translatable functions

The package mainly uses dbplyr to translate manipulations into queries.

Translatable functions are R functions that, when used in manipulations, can be translated into SQL functions.

For instance, n() is translated to count(*) in the above example.

To see which functions are translatable for Teradata, refer to the dbplyr documentation.

Here, we introduce the special translatable functions that become available with dplyr.teradata. The helper below is used to show the translated SQL:

library(dplyr.teradata)
trans <- function(x) {
  translate_sql(!!enquo(x), con = simulate_teradata())
}

5.1. Treat Boolean

Teradata does not have a boolean data type, so working with booleans requires writing verbose statements. The package has several functions to handle this concisely.

bool_to_int() transforms a boolean to an integer.

mutate(is_positive = bool_to_int(x > 0L))
trans(bool_to_int(x > 0L))

count_if() or n_if() counts the number of rows satisfying a condition.

summarize(n = count_if(x > 0L))
trans(count_if(x > 0L))
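For reference, count_if(x > 0L) is essentially a shorthand for summing a 0/1 indicator, which you could also write by hand using standard dplyr translations. A sketch, assuming my_table has a numeric column x:

```r
# Equivalent manual formulation: sum an indicator that dbplyr
# translates to a CASE WHEN expression inside SUM()
my_table %>%
  summarise(n = sum(ifelse(x > 0L, 1L, 0L), na.rm = TRUE))
```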

5.2. to_timestamp()

When a table has columns stored as UNIX time and you want to convert them to timestamps, you normally need to write complex SQL.

to_timestamp() is a translatable function that makes it easy.

mutate(ts = to_timestamp(unixtime_column))

The manipulation above is translated into SQL as follows:

trans(to_timestamp(unixtime_column))

5.3. cut()

cut() is a very useful base R function.

For example, suppose you want to cut the values of x into three ranges at the break points 2 and 4:

x <- 1:6
breaks <- c(0, 2, 4, 6)
cut(x, breaks)
#> [1] (0,2] (0,2] (2,4] (2,4] (4,6] (4,6]
#> Levels: (0,2] (2,4] (4,6]

dplyr.teradata has a translatable function similar to this:

breaks <- c(0, 2, 4, 6)
mutate(y = cut(x, breaks))

As a result, it is translated to a CASE WHEN statement:

trans(cut(x, c(0, 2, 4, 6)))

Arguments of base cut() are also available:

breaks <- c(0, 2, 4, 6)
mutate(y = cut(x, breaks, labels = "-", include.lowest = TRUE))
trans(cut(x, c(0, 2, 4, 6), labels = "-", include.lowest = TRUE))

6. Other useful functions

6.1. blob_to_string()

Blob objects returned from databases sometimes get in the way of dplyr manipulations.

You might want to convert them to strings. blob_to_string() makes this easy:

x <- blob::as_blob("Good morning")
x

# print raw data in blob
x[[1]]

blob_to_string(x)
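In practice, you would typically apply this inside a pipeline after collecting results that contain a blob column. A sketch (blob_column is a hypothetical column name):

```r
# Convert a blob column to character for further dplyr manipulation
df %>%
  mutate(s = blob_to_string(blob_column))
```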

7. Related work




dplyr.teradata documentation built on Nov. 12, 2020, 5:07 p.m.