knitr::opts_chunk$set(collapse = TRUE, comment = "", out.width = "600px", dpi = 70)
options(tibble.print_min = 4L, tibble.print_max = 4L, crayon.enabled = FALSE)

library(dlookr)
library(dplyr)
library(ggplot2)

Preface

After you have acquired the data, you should diagnose its quality, explore it to understand its distributions and relationships, and derive new variables or transform existing ones as needed. The dlookr package makes these steps fast and easy.

This document introduces the EDA (Exploratory Data Analysis) methods provided by the dlookr package. You will learn how to perform EDA on tbl_df objects (which inherit from data.frame) and on plain data.frame objects using the functions provided by dlookr.

dlookr works in synergy with dplyr; particularly in data exploration and data wrangling, it increases the efficiency of the tidyverse family of packages.

Supported data structures

dlookr supports the following data structures, as illustrated in the short sketch that follows:

- data frame: an object of class data.frame
- data table: a tibble, i.e., an object of class tbl_df
- DBMS table: a table in a DBMS, accessed through a tbl_dbi object
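As a quick illustration, the same describe() call can be applied to each supported structure. The following is a minimal sketch using the built-in iris data rather than the Carseats data used in the rest of this document:

library(dplyr)
library(dlookr)

describe(iris)              # plain data.frame
describe(as_tibble(iris))   # tbl_df (tibble)
# For a table in a DBMS, pass a tbl_dbi created with dplyr::tbl();
# see the section "Exploratory data analysis for tables in DBMS" below.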

Datasets

To illustrate the primary use of EDA in the dlookr package, I use the Carseats dataset. Carseats in the ISLR package is a simulated data set containing sales of child car seats at 400 stores. This data is a data.frame created to predict sales volume.

library(ISLR)

str(Carseats)

The contents of the individual variables are described on the ISLR::Carseats man page.
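You can open that man page directly from R:

help("Carseats", package = "ISLR")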

When performing data analysis, you will frequently encounter data containing missing values. However, Carseats is complete data without missing values, so the following script creates some missing values and saves the result as carseats.

carseats <- Carseats

# Reproduce the sampling behavior of R versions before 3.6.0
suppressWarnings(RNGversion("3.5.0"))
set.seed(123)
# Insert 20 missing values into the Income variable
carseats[sample(seq(NROW(carseats)), 20), "Income"] <- NA

suppressWarnings(RNGversion("3.5.0"))
set.seed(456)
# Insert 10 missing values into the Urban variable
carseats[sample(seq(NROW(carseats)), 10), "Urban"] <- NA

Exploratory Data Analysis

dlookr helps you understand the distribution of data by calculating descriptive statistics for numerical data. In addition, it identifies correlations between variables and performs normality tests. It also describes the relationship between a target variable and the independent variables.

The following is a list of the EDA functions included in the dlookr package:

- describe() computes descriptive statistics of numerical data.
- normality() and plot_normality() perform and visualize normality tests on numerical data.
- correlate() and plot.correlate() compute and visualize correlation coefficients between variables.
- target_by() defines a target variable, relate() describes the relationship between the target variable and a predictor, and the plot() method visualizes that relationship.
- eda_web_report() and eda_paged_report() create automated EDA reports.

Univariate data EDA

Calculating descriptive statistics using describe()

describe() computes descriptive statistics for numerical data. Descriptive statistics help determine the distribution of numerical variables. As with dplyr functions, the first argument is the tibble (or data frame), and the second and subsequent arguments refer to variables within that data frame.

The variables of the tbl_df object returned by describe() are as follows:

- described_variables: name of the variable
- n: number of observations excluding missing values
- na: number of missing values
- mean: arithmetic mean
- sd: standard deviation
- se_mean: standard error of the mean
- IQR: interquartile range
- skewness: skewness
- kurtosis: kurtosis
- p00, p01, p05, ..., p99, p100: percentiles (p25, p50, and p75 correspond to Q1, the median, and Q3)

For example, describe() can compute the statistics of all numerical variables in carseats:

describe(carseats)

The following example computes descriptive statistics for only a few selected variables:

# Select columns by name
describe(carseats, Sales, CompPrice, Income)
# Select all columns between Sales and Income (inclusive)
describe(carseats, Sales:Income)
# Select all columns except those from Sales to Income
describe(carseats, -(Sales:Income))

Using dplyr, the output of describe() can be sorted by the size of the skewness (how strongly left- or right-skewed a variable is):

carseats %>%
  describe() %>%
  select(described_variables, skewness, mean, p25, p50, p75) %>% 
  filter(!is.na(skewness)) %>% 
  arrange(desc(abs(skewness)))

The describe() function supports the group_by() function syntax of the dplyr package.

carseats %>%
  group_by(US) %>% 
  describe(Sales, Income) 
carseats %>%
  group_by(US, Urban) %>% 
  describe(Sales, Income) 

Test of normality on numeric variables using normality()

normality() performs a normality test on numerical data using the Shapiro-Wilk test. When the number of observations exceeds 5,000, the test is performed on 5,000 observations drawn by simple random sampling.

The tbl_df object returned by normality() contains the variable name, the Shapiro-Wilk test statistic (statistic), the p-value of the test (p_value), and the number of observations used in the test (sample).

normality() performs the normality test for all numerical variables of carseats as follows:

normality(carseats)
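The 5,000-observation limit comes from shapiro.test() in base R, which accepts at most 5,000 values. The following is a minimal sketch on simulated data (not part of the Carseats example) showing normality() applied to a variable with 10,000 observations:

set.seed(1)
big <- data.frame(x = rlnorm(10000))   # 10,000 right-skewed observations
normality(big)                         # tested on a random sample of 5,000 observations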

The following example performs a normality test on only a few selected variables.

# Select columns by name
normality(carseats, Sales, CompPrice, Income)

# Select all columns between Sales and Income (inclusive)
normality(carseats, Sales:Income)

# Select all columns except those from Sales to Income
normality(carseats, -(Sales:Income))

You can use dplyr to sort variables that do not follow a normal distribution in order of p_value:

library(dplyr)

carseats %>%
  normality() %>%
  filter(p_value <= 0.01) %>% 
  arrange(abs(p_value))

In particular, the Advertising variable appears to deviate most from the normal distribution.

The normality() function supports the group_by() function syntax in the dplyr package.

carseats %>%
  group_by(ShelveLoc, US) %>%
  normality(Income) %>% 
  arrange(desc(p_value))

Overall, the Income variable does not follow a normal distribution. However, for the groups where US is No and ShelveLoc is Good or Bad, it follows a normal distribution at the 0.01 significance level.

The following example performs the normality test of log(Income) for each combination of the ShelveLoc and US categorical variables and keeps the combinations that follow the normal distribution.

carseats %>%
  mutate(log_income = log(Income)) %>%
  group_by(ShelveLoc, US) %>%
  normality(log_income) %>%
  filter(p_value > 0.01)

Visualization of normality of numerical variables using plot_normality()

plot_normality() visualizes the normality of numeric data.

The information visualized by plot_normality() is as follows:

- histogram of the original data
- Q-Q plot of the original data
- histogram of the log-transformed data
- histogram of the square-root-transformed data

Data analysis often encounters numerical data that follows a power-law distribution. Because such data can often be brought close to a normal distribution by a log or square-root transformation, plot_normality() also draws histograms of the log- and sqrt-transformed data.
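The effect of the transformation can be checked directly with shapiro.test(); this is a small sketch on simulated log-normal data, not part of the Carseats example:

set.seed(1)
x <- rlnorm(1000)              # right-skewed (log-normal) data
shapiro.test(x)$p.value        # essentially zero: clearly non-normal
shapiro.test(log(x))$p.value   # much larger: the log transform removes the skewness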

Like normality(), plot_normality() can also be applied to several selected variables.

# Select columns by name
plot_normality(carseats, Sales, CompPrice)

The plot_normality() function also supports the group_by() function syntax in the dplyr package.

carseats %>%
  filter(ShelveLoc == "Good") %>%
  group_by(US) %>%
  plot_normality(Income)

EDA of bivariate data

Calculation of correlation coefficient using correlate()

correlate() calculates the correlation coefficients for all combinations of the numerical variables in carseats as follows:

correlate(carseats)

The following example calculates the correlation coefficients only for combinations that include several selected variables.

# Select columns by name
correlate(carseats, Sales, CompPrice, Income)

# Select all columns between Sales and Income (inclusive)
correlate(carseats, Sales:Income)

# Select all columns except those from Sales to Income
correlate(carseats, -(Sales:Income))

correlate() returns every pair of variables twice, once in each order. The following example uses filter() to keep only one of each pair of variable combinations:

carseats %>%
  correlate(Sales:Income) %>%
  filter(as.integer(var1) > as.integer(var2))

The correlate() also supports the group_by() function syntax in the dplyr package.

tab_corr <- carseats %>%
  filter(ShelveLoc == "Good") %>%
  group_by(Urban, US) %>%
  correlate(Sales) %>%
  filter(abs(coef_corr) > 0.5)

tab_corr

Visualization of the correlation matrix using plot.correlate()

plot.correlate() visualizes the correlation matrix for an object of class correlate.

carseats %>% 
  correlate() %>% 
  plot()

plot.correlate() can also be applied to multiple variables, like the correlate() function. The following example visualizes the correlation matrix for several selected variables.

# Select columns by name
correlate(carseats, Sales, Price) %>% 
  plot()

The plot.correlate() function also supports the group_by() function syntax in the dplyr package.

carseats %>%
  filter(ShelveLoc == "Good") %>%
  group_by(Urban) %>%
  correlate() %>%
  plot() 

EDA based on target variable

Definition of target variable

To perform EDA based on a target variable, you must create a target_by class object. target_by() creates a target_by object from an object that inherits from data.frame (such as a tbl_df) or from a data.frame. target_by() is similar to group_by() in dplyr, which creates a grouped_df; the difference is that you specify only one variable.

The following is an example of specifying US as the target variable in the carseats data.frame:

categ <- target_by(carseats, US)

EDA when the target variable is a categorical variable

Let's perform EDA when the target variable is categorical. When the categorical variable US is the target variable, we examine the relationship between the target variable and the predictor.

Cases where the predictor is a numeric variable

relate() shows the relationship between the target variable and the predictor. The following example shows the relationship between Sales and the target variable US. The predictor Sales is a numeric variable. In this case, the descriptive statistics are shown for each level of the target variable.

# If the variable of interest is a numerical variable
cat_num <- relate(categ, Sales)
cat_num
summary(cat_num)

plot() visualizes the relate class object created by relate() as the relationship between the target and predictor variables. The relationship between US and Sales is visualized by a density plot.

plot(cat_num)

Cases where the predictor is a categorical variable

The following example shows the relationship between ShelveLoc and the target variable US. The predictor variable ShelveLoc is categorical. This case illustrates the contingency table of two variables. The summary() function performs an independence test on the contingency table.

# If the variable of interest is a categorical variable
cat_cat <- relate(categ, ShelveLoc)
cat_cat
summary(cat_cat)

plot() visualizes the relationship between the target variable and the predictor. A mosaic plot represents the relationship between US and ShelveLoc.

plot(cat_cat)

EDA when the target variable is a numerical variable

Let's perform EDA when the target variable is numeric. When the numeric variable Sales is the target variable, we examine the relationship between the target variable and the predictor.

# If the variable of interest is a numerical variable
num <- target_by(carseats, Sales)

Cases where the predictor is a numeric variable

The following example shows the relationship between Price and the target variable Sales. The predictor variable Price is numeric. In this case, the result of a simple linear model with the formula target ~ predictor is shown. The summary() function shows the details of the model.

# If the variable of interest is a numerical variable
num_num <- relate(num, Price)
num_num
summary(num_num)

plot() visualizes the relationship between the target and predictor variables. The relationship between Sales and Price is drawn as a scatter plot. The figure on the left shows the scatter plot of Sales and Price together with the regression line and its confidence interval. The figure on the right shows the relationship between the original data and the values predicted by the linear model as a scatter plot. If there is a linear relationship between the two variables, the scatter plot of the observations converges on the red diagonal line.

plot(num_num)

A scatter plot of data with many observations shows heavily overlapping points, which makes it difficult to judge the relationship between the two variables and also slows down the visualization. In this case, the problem can be solved with a hexbin plot.

In plot(), the hex_thres argument controls when a hexbin plot is drawn: if the number of observations is greater than hex_thres, a hexbin plot is drawn instead of a scatter plot.

The following example draws a hexbin plot rather than a scatter plot by specifying 350 for the hex_thres argument, because the number of observations is 400.

plot(num_num, hex_thres = 350)

Cases where the predictor is a categorical variable

The following example shows the relationship between ShelveLoc and the target variable Sales. The predictor ShelveLoc is a categorical variable, so the result of a one-way ANOVA of the target ~ predictor formula is displayed. The summary() function shows the regression coefficients for each level of the predictor; in other words, it shows the details of the simple regression of the target on the predictor.

# If the variable of interest is a categorical variable
num_cat <- relate(num, ShelveLoc)
num_cat
summary(num_cat)

plot() visualizes the relationship between the target variable and the predictor. A box plot represents the relationship between Sales and ShelveLoc.

plot(num_cat)

Automated report

dlookr provides two automated EDA reports:

- eda_web_report() creates a dynamic web report (HTML).
- eda_paged_report() creates a static paged report (e.g., PDF).

Create a dynamic report using eda_web_report()

eda_web_report() creates a dynamic report for an object that inherits from data.frame (tbl_df, tbl, etc.) or for a data.frame.

Contents of dynamic web report

The report consists of an overview of the data, univariate analysis (descriptive statistics and normality tests), bivariate and multivariate analysis (including correlation), and target-based analysis.

Some arguments for dynamic web report

eda_web_report() generates various reports depending on its arguments, such as target, output_file, output_dir, subtitle, and theme, as shown in the example below.

The following script creates an EDA report for the data.frame class object, heartfailure.

heartfailure %>%
  eda_web_report(target = "death_event", subtitle = "heartfailure", 
                 output_dir = "./", output_file = "EDA.html", theme = "blue")

Screenshot of dynamic report

knitr::include_graphics('img/eda_web_title.jpg')

Create an EDA report using eda_paged_report()

eda_paged_report() creates a static paged report for an object that inherits from data.frame (tbl_df, tbl, etc.) or for a data.frame.

Contents of static paged report

The paged report covers the same ground as the dynamic report: an overview of the data, univariate analysis, bivariate and multivariate analysis, and target-based analysis, laid out as a paged document.

Some arguments for static paged report

eda_paged_report() generates various reports depending on its arguments, such as target, output_file, output_dir, subtitle, and theme, as shown in the example below.

The following script creates an EDA report for the data.frame class object, heartfailure.

heartfailure %>%
  eda_paged_report(target = "death_event", subtitle = "heartfailure", 
                   output_dir = "./", output_file = "EDA.pdf", theme = "blue")

Screenshot of static report

knitr::include_graphics('img/eda_paged_cover.jpg')
knitr::include_graphics('img/eda_paged_content.jpg')

Exploratory data analysis for tables in DBMS

The EDA functions for DBMS tables support an in-database mode that performs SQL operations on the DBMS side. If the data size is large, in-database mode is faster.

It is difficult to detect anomalies or to implement sampling-based algorithms in the SQL of a DBMS, so some functions do not yet support in-database mode. In that case, they run in in-memory mode, where the table data is brought to the R side and computed there; if the data size is large, execution may be slow. The collect_size argument lets you import a specified number of sample rows into R.

Preparing table data

Copy the carseats data frame to a SQLite DBMS and create it as a table named TB_CARSEATS. MySQL/MariaDB, PostgreSQL, Oracle, and other DBMSs can also be used, depending on your environment.

library(dplyr)

carseats <- Carseats
# Insert missing values into Income and Urban, as above (no fixed seed here)
carseats[sample(seq(NROW(carseats)), 20), "Income"] <- NA
carseats[sample(seq(NROW(carseats)), 5), "Urban"] <- NA

# connect DBMS
con_sqlite <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")

# copy carseats to the DBMS with a table named TB_CARSEATS
copy_to(con_sqlite, carseats, name = "TB_CARSEATS", overwrite = TRUE)
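The same workflow applies to other DBMS back ends. The following is a hedged sketch assuming the RPostgres driver and hypothetical connection details; adjust it for your environment:

# Hypothetical PostgreSQL connection (adjust dbname/host/credentials)
# con_pg <- DBI::dbConnect(RPostgres::Postgres(),
#                          dbname = "mydb", host = "localhost",
#                          user = "analyst", password = "secret")
# copy_to(con_pg, carseats, name = "TB_CARSEATS", overwrite = TRUE)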

Calculating descriptive statistics of numerical columns of a table in the DBMS

Use dplyr::tbl() to create a tbl_dbi object, then use it in place of a data frame: the data argument of all EDA functions accepts a tbl_dbi object instead of a data frame object.

# Positive values select variables
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  describe(Sales, CompPrice, Income)

# Negative values drop variables; use in-memory mode with a collect size of 200
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  describe(-Sales, -CompPrice, -Income, collect_size = 200)

# Find the statistics of all numerical variables by 'ShelveLoc' and 'US',
# and extract only those where the 'ShelveLoc' level is "Good".
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  group_by(ShelveLoc, US) %>%
  describe() %>%
  filter(ShelveLoc == "Good")

# Extract only the rows where the 'Urban' level is "Yes",
# and find 'Sales' statistics by 'ShelveLoc' and 'US'
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  filter(Urban == "Yes") %>%
  group_by(ShelveLoc, US) %>%
  describe(Sales)

Test of normality on numeric columns of a table in the DBMS

# Test all numerical variables by 'ShelveLoc' and 'US',
# and extract only those where the 'ShelveLoc' level is "Good".
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  group_by(ShelveLoc, US) %>%
  normality() %>%
  filter(ShelveLoc == "Good")

# Extract only the rows where the 'Urban' level is "Yes",
# and test 'Sales' by 'ShelveLoc' and 'US'
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  filter(Urban == "Yes") %>%
  group_by(ShelveLoc, US) %>%
  normality(Sales)

# Test log(Income) by 'ShelveLoc' and 'US',
# and extract only the combinations with p_value greater than 0.01.

# Load the SQLite extension functions needed for the log transformation
RSQLite::initExtension(con_sqlite)

con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  mutate(log_income = log(Income)) %>%
  group_by(ShelveLoc, US) %>%
  normality(log_income) %>%
  filter(p_value > 0.01)

Visualizing normality of numerical columns in the DBMS

# Extract only the rows where the 'ShelveLoc' level is "Good",
# and plot 'Income' by 'US'.
# The result is the same as for the data.frame and is not displayed here;
# see the corresponding example above.
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  filter(ShelveLoc == "Good") %>%
  group_by(US) %>%
  plot_normality(Income)

Compute the correlation coefficient between columns of a table in the DBMS

# Correlation coefficients,
# eliminating redundant combinations of variables
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  correlate() %>%
  filter(as.integer(var1) > as.integer(var2))

con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  correlate(Sales, Price) %>%
  filter(as.integer(var1) > as.integer(var2))

# Compute the correlation coefficient of the 'Sales' variable by the 'ShelveLoc'
# and 'US' variables, and extract only those where the absolute value
# of the correlation coefficient is greater than or equal to 0.5
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  group_by(ShelveLoc, US) %>%
  correlate(Sales) %>%
  filter(abs(coef_corr) >= 0.5)

# Extract only the rows where the 'ShelveLoc' level is "Good",
# and compute the correlation coefficient of the 'Sales' variable
# by the 'Urban' and 'US' variables.
# Keep only negative correlation coefficients with absolute value greater than 0.5
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  filter(ShelveLoc == "Good") %>%
  group_by(Urban, US) %>%
  correlate(Sales) %>%
  filter(coef_corr < 0) %>%
  filter(abs(coef_corr) > 0.5)

Visualize correlation plot of numerical columns in the DBMS

# Extract only the rows where the 'ShelveLoc' level is "Good",
# and visualize the correlation plot of the 'Sales' variable by the 'Urban' variable.
# The result is the same as for the data.frame and is not displayed here;
# see the corresponding example above.
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  filter(ShelveLoc == "Good") %>%
  group_by(Urban) %>%
  correlate(Sales) %>% 
  plot() 

EDA based on target variable

The following is an EDA where the target column is a character, and the predictor column is a numeric type.

# If the target variable is a categorical variable
categ <- target_by(con_sqlite %>% tbl("TB_CARSEATS"), US)

# If the variable of interest is a numerical variable
cat_num <- relate(categ, Sales)
cat_num
summary(cat_num)
# The result is the same as for the data.frame and is not displayed here;
# see the corresponding example above.
plot(cat_num)

Reporting the EDA information for a table in the DBMS

The following shows several examples of creating an EDA report for a DBMS table.

Using the collect_size argument, you can perform EDA on the corresponding number of sampled rows. If the data is very large, use collect_size.

# create a web report file. 
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  eda_web_report()

# create a pdf file. the file name is EDA.pdf, and the collect size is 350
con_sqlite %>% 
  tbl("TB_CARSEATS") %>% 
  eda_paged_report(collect_size = 350, output_file = "EDA.pdf")

