knitr::opts_chunk$set(collapse = TRUE, comment = "", out.width = "600px", dpi = 70)
options(tibble.print_min = 4L, tibble.print_max = 4L, crayon.enabled = FALSE)
library(dlookr)
library(dplyr)
library(ggplot2)
After you have acquired data, you typically need to diagnose its quality, explore it to understand it and find analysis scenarios, and derive or transform variables as needed. The dlookr package makes these steps fast and easy.
This document introduces EDA (Exploratory Data Analysis) methods provided by the dlookr package. You will learn how to perform EDA on tbl_df objects (which inherit from data.frame) and on plain data.frame objects with the functions provided by dlookr.

dlookr works in synergy with dplyr; in particular, it increases the efficiency of the tidyverse family of packages for data exploration and data wrangling.
EDA in dlookr supports the following data structures: data frames (data.frame), tibbles (tbl_df), and DBMS tables (tbl_dbi).
To illustrate the primary use of EDA in the dlookr package, we use the Carseats dataset. Carseats, from the ISLR package, is a simulated data set containing sales of child car seats at 400 stores. It is a data.frame created for predicting sales volume.
str(Carseats)
The contents of the individual variables are as follows (refer to the ISLR::Carseats man page).
When performing data analysis, you will frequently encounter data containing missing values. However, Carseats is complete data without missing values, so the following script creates some missing values and saves the result as carseats.
carseats <- Carseats

suppressWarnings(RNGversion("3.5.0"))
set.seed(123)
carseats[sample(seq(NROW(carseats)), 20), "Income"] <- NA

suppressWarnings(RNGversion("3.5.0"))
set.seed(456)
carseats[sample(seq(NROW(carseats)), 10), "Urban"] <- NA
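To confirm that the missing values were injected as intended, you can count the NAs per column (a quick check added here for illustration, not part of the original workflow):

# Count the injected missing values in the affected columns
colSums(is.na(carseats[, c("Income", "Urban")]))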
dlookr helps you understand the distribution of data by calculating descriptive statistics of numerical data. In addition, it identifies the correlation between variables and performs normality tests. It also describes the relationship between a target variable and the independent variables.

The following is a list of the EDA functions included in the dlookr package:
* describe() provides descriptive statistics for numerical data.
* normality() and plot_normality() perform a normality test on, and visualization of, numerical data.
* correlate() and plot.correlate() calculate the correlation coefficient between two numerical variables and provide visualization.
* target_by() defines the target variable, and relate() describes the relationship between the target variable and the variables of interest.
* plot.relate() visualizes the relationship between the target variable and the variable of interest.
* eda_report() performs an exploratory data analysis and reports the results.

describe()
describe() computes descriptive statistics for numerical data. Descriptive statistics help determine the distribution of numerical variables. As with dplyr functions, the first argument is the tibble (or data frame), and the second and subsequent arguments refer to variables within that data frame.
The variables of the tbl_df
object returned by describe()
are as follows.
* n : number of observations excluding missing values
* na : number of missing values
* mean : arithmetic mean
* sd : standard deviation
* se_mean : standard error of the mean, sd/sqrt(n)
* IQR : interquartile range (Q3 - Q1)
* skewness : skewness
* kurtosis : kurtosis
* p25 : Q1, the 25th percentile
* p50 : median, the 50th percentile
* p75 : Q3, the 75th percentile
* p01, p05, p10, p20, p30 : 1st, 5th, 10th, 20th, 30th percentiles
* p40, p60, p70, p80 : 40th, 60th, 70th, 80th percentiles
* p90, p95, p99, p100 : 90th, 95th, 99th, 100th percentiles

For example, describe() can compute the statistics of all numerical variables in carseats:
describe(carseats)
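To make the columns concrete, the core statistics can be reproduced by hand for a single variable. The following is a minimal base-R sketch (not part of dlookr) that mirrors a few columns of describe() for Income:

# Reproduce a few describe() columns for the Income variable with base R
x <- carseats$Income
n <- sum(!is.na(x))                        # observations excluding missing values
c(n       = n,
  na      = sum(is.na(x)),                 # number of missing values
  mean    = mean(x, na.rm = TRUE),
  sd      = sd(x, na.rm = TRUE),
  se_mean = sd(x, na.rm = TRUE) / sqrt(n), # standard error of the mean
  IQR     = IQR(x, na.rm = TRUE),
  p50     = median(x, na.rm = TRUE))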
* skewness : Variables with large positive skewness, that is, right-skewed distributions, should be considered for a log or sqrt transformation so that they follow a normal distribution more closely. The variable Advertising seems to need such a transformation.
* mean, sd, se_mean : A variable such as Population, with a large standard error of the mean (se_mean), has low representativeness of the arithmetic mean (mean); its standard deviation (sd) is large relative to the arithmetic mean.

The following computes descriptive statistics for only a few selected variables:
# Select columns by name
describe(carseats, Sales, CompPrice, Income)

# Select all columns between Sales and Income (inclusive)
describe(carseats, Sales:Income)

# Select all columns except those from Sales to Income
describe(carseats, -(Sales:Income))
The output of describe() can be sorted by the degree of left or right skewness using dplyr:
carseats %>%
  describe() %>%
  select(described_variables, skewness, mean, p25, p50, p75) %>%
  filter(!is.na(skewness)) %>%
  arrange(desc(abs(skewness)))
The describe()
function supports the group_by()
function syntax of the dplyr
package.
carseats %>% group_by(US) %>% describe(Sales, Income)
carseats %>% group_by(US, Urban) %>% describe(Sales, Income)
normality()
normality()
performs a normality test on numerical data using the Shapiro-Wilk test. When the number of observations exceeds 5000, the test is performed on 5000 observations drawn by simple random sampling.
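The Shapiro-Wilk test itself is available in base R as stats::shapiro.test(). A minimal sketch of what is computed per variable (the sampling behavior of normality() for large data is as described above):

# Shapiro-Wilk test for a single variable with base R
shapiro.test(carseats$Sales)

# shapiro.test() accepts at most 5000 observations,
# which is why normality() samples 5000 rows for larger data, e.g.:
x <- rnorm(10000)
shapiro.test(sample(x, 5000))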
The variables of the tbl_df
object returned by normality()
are as follows.
* statistic : statistic of the Shapiro-Wilk test
* p_value : p-value of the Shapiro-Wilk test
* sample : number of observations used in the Shapiro-Wilk test

normality() performs the normality test for all numerical variables of carseats as follows:
normality(carseats)
The following example performs a normality test on only a few selected variables.
# Select columns by name
normality(carseats, Sales, CompPrice, Income)

# Select all columns between Sales and Income (inclusive)
normality(carseats, Sales:Income)

# Select all columns except those from Sales to Income
normality(carseats, -(Sales:Income))
You can use dplyr
to sort variables that do not follow a normal distribution in order of p_value
:
library(dplyr)

carseats %>%
  normality() %>%
  filter(p_value <= 0.01) %>%
  arrange(abs(p_value))
In particular, the Advertising variable appears to deviate the most from the normal distribution.
The normality()
function supports the group_by()
function syntax in the dplyr
package.
carseats %>% group_by(ShelveLoc, US) %>% normality(Income) %>% arrange(desc(p_value))
The Income variable does not follow the normal distribution overall. However, in the groups where US is No and ShelveLoc is Good or Bad, it follows the normal distribution at the 0.01 significance level.
The following example performs the normality test of log(Income) for each combination of the ShelveLoc and US categorical variables, to find the combinations in which the transformed variable follows the normal distribution.
carseats %>% mutate(log_income = log(Income)) %>% group_by(ShelveLoc, US) %>% normality(log_income) %>% filter(p_value > 0.01)
plot_normality()
plot_normality()
visualizes the normality of numeric data.
The information visualized by plot_normality()
is as follows:
* Histogram of original data
* Q-Q plot of original data
* Histogram of log-transformed data
* Histogram of square-root-transformed data
In data analysis, you often encounter numerical data that follows a power-law distribution. Since such data can be brought closer to a normal distribution by a log or sqrt transformation, plot_normality() also draws histograms of the log- and sqrt-transformed data.
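As a manual illustration of the same idea (a sketch using ggplot2, not plot_normality() itself; log1p() is used below because Advertising contains zeros):

# Compare a right-skewed variable before and after transformation
library(ggplot2)

advert <- data.frame(
  original   = carseats$Advertising,
  sqrt_trans = sqrt(carseats$Advertising),
  log_trans  = log1p(carseats$Advertising)  # log(x + 1), since Advertising has zeros
)

ggplot(advert, aes(x = sqrt_trans)) +
  geom_histogram(bins = 30) +
  labs(title = "Histogram of sqrt(Advertising)", x = "sqrt(Advertising)")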
Like the normality() function, plot_normality() can also be applied to several specified variables.
# Select columns by name
plot_normality(carseats, Sales, CompPrice)
The plot_normality()
function also supports the group_by()
function syntax in the dplyr
package.
carseats %>% filter(ShelveLoc == "Good") %>% group_by(US) %>% plot_normality(Income)
Correlation coefficient using correlate()
correlate()
calculates the correlation coefficient of all combinations of carseats
numerical variables as follows:
correlate(carseats)
The following example computes the correlation coefficient only for combinations that include the selected variables.
# Select columns by name
correlate(carseats, Sales, CompPrice, Income)

# Select all columns between Sales and Income (inclusive)
correlate(carseats, Sales:Income)

# Select all columns except those from Sales to Income
correlate(carseats, -(Sales:Income))
correlate() returns each pair of variables twice, once in each order. The following example therefore uses filter() to keep only one of each pair of variable combinations:
carseats %>% correlate(Sales:Income) %>% filter(as.integer(var1) > as.integer(var2))
The correlate() function also supports the group_by() function syntax in the dplyr package.
tab_corr <- carseats %>%
  filter(ShelveLoc == "Good") %>%
  group_by(Urban, US) %>%
  correlate(Sales) %>%
  filter(abs(coef_corr) > 0.5)

tab_corr
plot.correlate()
plot.correlate()
visualizes the correlation matrix of an object of class correlate.
carseats %>% correlate() %>% plot()
Like the correlate() function, plot.correlate() can also be applied to several specified variables. The following visualizes the correlation matrix for the selected variables.
# Select columns by name
correlate(carseats, Sales, Price) %>%
  plot()
The plot.correlate()
function also supports the group_by()
function syntax in the dplyr
package.
carseats %>% filter(ShelveLoc == "Good") %>% group_by(Urban) %>% correlate() %>% plot()
To perform EDA based on the target variable
, you must create a target_by
class object.
target_by() creates a target_by class object from a data.frame or an object that inherits from data.frame. target_by() is similar to group_by() in dplyr, which creates a grouped_df; the difference is that you specify only one variable.
The following is an example of specifying US as the target variable in the carseats data.frame:
categ <- target_by(carseats, US)
Let's perform EDA when the target variable is categorical. When the categorical variable US
is the target variable, we examine the relationship between the target variable and the predictor.
relate()
shows the relationship between the target variable and the predictor. The following example shows the relationship between Sales
and the target variable US
. The predictor Sales
is a numeric variable. In this case, the descriptive statistics are shown for each level of the target variable.
# If the variable of interest is a numerical variable
cat_num <- relate(categ, Sales)
cat_num

summary(cat_num)
plot()
visualizes the relate
class object created by relate()
as the relationship between the target and predictor variables. The relationship between US
and Sales
is visualized by a density plot.
plot(cat_num)
The following example shows the relationship between ShelveLoc
and the target variable US
. The predictor variable ShelveLoc
is categorical. This case illustrates the contingency table
of two variables. The summary()
function performs an independence test
on the contingency table.
# If the variable of interest is a categorical variable
cat_cat <- relate(categ, ShelveLoc)
cat_cat

summary(cat_cat)
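For reference, the same contingency table and an independence test can be produced with base R (a sketch; the chi-squared test is assumed here as the standard test of independence for a two-way table):

# Contingency table of the target (US) and predictor (ShelveLoc), plus a chi-squared test
tab <- table(carseats$US, carseats$ShelveLoc)
tab
chisq.test(tab)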
plot()
visualizes the relationship between the target variable and the predictor. A mosaic plot
represents the relationship between US
and ShelveLoc
.
plot(cat_cat)
Let's perform EDA when the target variable is numeric. When the numeric variable Sales
is the target variable, we examine the relationship between the target variable and the predictor.
# If the target variable is a numerical variable
num <- target_by(carseats, Sales)
The following example shows the relationship between Price
and the target variable Sales
. The predictor variable Price
is numeric. In this case, it shows the result of a simple linear model
of the target ~ predictor
formula. The summary()
function expresses the details of the model.
# If the variable of interest is a numerical variable
num_num <- relate(num, Price)
num_num

summary(num_num)
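For comparison, the simple linear model of the target ~ predictor formula described above corresponds to the following base-R fit (a minimal sketch, not the relate() implementation itself):

# Simple linear model of the target ~ predictor formula, fitted directly
fit <- lm(Sales ~ Price, data = carseats)
summary(fit)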
plot()
visualizes the relationship between the target and predictor variables. The relationship between Sales
and Price
is pictured with a scatter plot.
The figure on the left shows the scatter plot of Sales and Price together with the regression line and its confidence interval.
The figure on the right shows the relationship between the original data and the predicted values of the linear model as a scatter plot. If there is a linear relationship between the two variables, the scatter plot of the observations converges on the red diagonal line.
plot(num_num)
A scatter plot of data with many observations produces heavily overlapping points, which makes it difficult to judge the relationship between the two variables, and the visualization can take a long time to render.

In this case, the problem can be solved by using a hexbin plot.

In plot(), the hex_thres argument provides the threshold for drawing a hexbin plot: if the number of observations is greater than hex_thres, a hexbin plot is drawn.

The following example draws a hexbin plot rather than a scatter plot by specifying 350 for the hex_thres argument, because the number of observations is 400.
plot(num_num, hex_thres = 350)
The following example shows the relationship between ShelveLoc
and the target variable Sales
. The predictor ShelveLoc
is a categorical variable. In this case, the result of a one-way ANOVA of the target ~ predictor relationship is displayed as an ANOVA table.
The summary()
function shows the regression coefficients
for each level of the predictor. In other words, it shows detailed information about the simple regression analysis
of the target ~ predictor
relationship.
# If the variable of interest is a categorical variable
num_cat <- relate(num, ShelveLoc)
num_cat

summary(num_cat)
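For comparison, the one-way ANOVA and the level-wise regression coefficients described above correspond to the following base-R fit (a minimal sketch, not the relate() implementation itself):

# One-way ANOVA of the target ~ predictor relationship, fitted directly
fit_aov <- lm(Sales ~ ShelveLoc, data = carseats)
anova(fit_aov)    # ANOVA table
summary(fit_aov)  # regression coefficients for each level of ShelveLoc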
plot()
visualizes the relationship between the target variable and the predictor. A box plot
represents the relationship between Sales
and ShelveLoc
.
plot(num_cat)
dlookr provides two automated EDA reports:
eda_web_report()
eda_web_report()
creates a dynamic report for objects that inherit from data.frame (tbl_df, tbl, etc.) or for a data.frame.
The contents of the report are as follows:
eda_web_report() generates various reports with the following arguments.
The following script creates an EDA report for the data.frame
class object, heartfailure
.
heartfailure %>%
  eda_web_report(target = "death_event", subtitle = "heartfailure",
                 output_dir = "./", output_file = "EDA.html", theme = "blue")
knitr::include_graphics('img/eda_web_title.jpg')
eda_paged_report()
eda_paged_report()
creates a static report for objects that inherit from data.frame (tbl_df, tbl, etc.) or for a data.frame.
The contents of the report are as follows:
eda_paged_report() generates various reports with the following arguments.
The following script creates an EDA report for the data.frame
class object, heartfailure
.
heartfailure %>%
  eda_paged_report(target = "death_event", subtitle = "heartfailure",
                   output_dir = "./", output_file = "EDA.pdf", theme = "blue")
knitr::include_graphics('img/eda_paged_cover.jpg')
knitr::include_graphics('img/eda_paged_content.jpg')
The EDA functions for a table of a DBMS support In-database mode, which performs SQL operations on the DBMS side. If the data size is large, using In-database mode is faster.

It is difficult to detect anomalies or implement sampling-based algorithms in DBMS SQL, so some functions do not yet support In-database mode. These functions are executed in In-memory mode, where the table data is brought into R and computed there; if the data size is large, execution may be slow. The collect_size argument lets you import only a specified number of rows into R.
The following EDA functions can be used with a table of a DBMS:

* normality()
* plot_normality()
* correlate()
* plot.correlate()
* describe()
* eda_web_report()
* eda_paged_report()
Copy the carseats
data frame to the SQLite DBMS and create it as a table named TB_CARSEATS
.
MySQL/MariaDB, PostgreSQL, Oracle, and other DBMSs can also be used, depending on your environment (a hypothetical PostgreSQL connection is sketched after the following code).
library(dplyr)

carseats <- Carseats
carseats[sample(seq(NROW(carseats)), 20), "Income"] <- NA
carseats[sample(seq(NROW(carseats)), 5), "Urban"] <- NA

# connect to the DBMS
con_sqlite <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")

# copy carseats to the DBMS as a table named TB_CARSEATS
copy_to(con_sqlite, carseats, name = "TB_CARSEATS", overwrite = TRUE)
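As noted above, other DBMSs can be used in the same way. A hypothetical PostgreSQL connection might look like the following (the connection details are placeholders and the RPostgres package is assumed, so the code is shown commented out):

# Hypothetical PostgreSQL connection (placeholders; requires the RPostgres package)
# con_pg <- DBI::dbConnect(
#   RPostgres::Postgres(),
#   host = "localhost", dbname = "mydb",
#   user = "analyst", password = "secret"
# )
# copy_to(con_pg, carseats, name = "TB_CARSEATS", overwrite = TRUE)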
Use dplyr::tbl()
to create a tbl_dbi object, then use it as a data frame object. The data argument of all EDA functions is specified as a tbl_dbi object instead of a data frame object.
# Positive values select variables
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  describe(Sales, CompPrice, Income)

# Negative values drop variables; In-memory mode with a collect size of 200
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  describe(-Sales, -CompPrice, -Income, collect_size = 200)

# Find the statistics of all numerical variables by 'ShelveLoc' and 'US',
# and extract only those where the 'ShelveLoc' variable level is "Good".
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  group_by(ShelveLoc, US) %>%
  describe() %>%
  filter(ShelveLoc == "Good")

# Extract only those where the 'Urban' variable level is "Yes",
# and find 'Sales' statistics by 'ShelveLoc' and 'US'.
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  filter(Urban == "Yes") %>%
  group_by(ShelveLoc, US) %>%
  describe(Sales)
# Test all numerical variables by 'ShelveLoc' and 'US',
# and extract only those where the 'ShelveLoc' variable level is "Good".
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  group_by(ShelveLoc, US) %>%
  normality() %>%
  filter(ShelveLoc == "Good")

# Extract only those where the 'Urban' variable level is "Yes",
# and test 'Sales' by 'ShelveLoc' and 'US'.
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  filter(Urban == "Yes") %>%
  group_by(ShelveLoc, US) %>%
  normality(Sales)

# Test the log(Income) variable by 'ShelveLoc' and 'US',
# and extract only those with a p-value greater than 0.01.
# SQLite extension functions are needed for the log transformation.
RSQLite::initExtension(con_sqlite)

con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  mutate(log_income = log(Income)) %>%
  group_by(ShelveLoc, US) %>%
  normality(log_income) %>%
  filter(p_value > 0.01)
# Extract only those where the 'ShelveLoc' variable level is "Good",
# and plot 'Income' by 'US'.
# The result is the same as for the data.frame but is not displayed here; see above in the document.
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  filter(ShelveLoc == "Good") %>%
  group_by(US) %>%
  plot_normality(Income)
# Correlation coefficients, eliminating redundant combinations of variables
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  correlate() %>%
  filter(as.integer(var1) > as.integer(var2))

con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  correlate(Sales, Price) %>%
  filter(as.integer(var1) > as.integer(var2))

# Compute the correlation coefficient of the 'Sales' variable by the 'ShelveLoc'
# and 'US' variables, and extract only those where the absolute value
# of the correlation coefficient is greater than or equal to 0.5.
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  group_by(ShelveLoc, US) %>%
  correlate(Sales) %>%
  filter(abs(coef_corr) >= 0.5)

# Extract only those where the 'ShelveLoc' variable level is "Good",
# and compute the correlation coefficient of the 'Sales' variable
# by the 'Urban' and 'US' variables,
# keeping only negative correlation coefficients with absolute value greater than 0.5.
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  filter(ShelveLoc == "Good") %>%
  group_by(Urban, US) %>%
  correlate(Sales) %>%
  filter(coef_corr < 0) %>%
  filter(abs(coef_corr) > 0.5)
# Extract only those where the 'ShelveLoc' variable level is "Good",
# and visualize the correlation plot of the 'Sales' variable by the 'Urban' variable.
# The result is the same as for the data.frame but is not displayed here; see above in the document.
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  filter(ShelveLoc == "Good") %>%
  group_by(Urban) %>%
  correlate() %>%
  plot(Sales)
The following is an EDA where the target column is a character, and the predictor column is a numeric type.
# If the target variable is a categorical variable
categ <- target_by(con_sqlite %>% tbl("TB_CARSEATS"), US)

# If the variable of interest is a numerical variable
cat_num <- relate(categ, Sales)
cat_num

summary(cat_num)
# The result is the same as for the data.frame but is not displayed here; see above in the document.
plot(cat_num)
The following shows several examples of creating an EDA report for a DBMS table. Using the collect_size argument, you can perform the EDA on a sample of the specified size; if the data is very large, use collect_size.
# create a web report file
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  eda_web_report()

# create a pdf file; the file name is EDA.pdf, and the collect size is 350
con_sqlite %>%
  tbl("TB_CARSEATS") %>%
  eda_paged_report(collect_size = 350, output_file = "EDA.pdf")