knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(cache = TRUE)
knitr::opts_chunk$set(warning = FALSE)

Installation

To use double/debiased machine learning (DML) estimators, please install the interflex package directly from GitHub:

devtools::install_github('xuyiqing/interflex')

Set up Python environment

To use DML estimators in interflex, we rely on several Python packages. The integration between R and Python is facilitated by the reticulate package, which allows R to interface with Python seamlessly. The following steps set up a Python environment with all required dependencies.

First, install the reticulate package in R. This package enables R to interface with Python.

install.packages("reticulate", repos = 'http://cran.us.r-project.org', force = TRUE)

Then, we use Miniconda, a minimal installer for Conda. Miniconda helps manage Python environments and packages. By default, the installation process will create a virtual environment named r-reticulate.

reticulate::install_miniconda(update = TRUE)

With Miniconda installed, we point to the r-reticulate environment using use_condaenv. This step is incorporated in the interflex package.

reticulate::use_condaenv(condaenv = "r-reticulate")

Finally, install the necessary Python libraries. These libraries include tools for statistical modeling and machine learning that are essential for using the interflex package.

reticulate::py_install(packages = c("patsy", "numpy", "pandas",
                                    "scikit-learn", "doubleml", "econml"))

Some Mac users may encounter a "caught segfault" error. In that case, we recommend setting up a virtual environment. In Terminal, type:

```{sh eval=FALSE}
python3 -m venv ~/.virtualenvs/myenv
source ~/.virtualenvs/myenv/bin/activate
```

Then install all Python packages there:
```{sh eval=FALSE}
pip install numpy
pip install patsy
pip install pandas
pip install scikit-learn
pip install econml
pip install doubleml
```

In R, point to the newly set up environment myenv:

reticulate::use_virtualenv("myenv")
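
Optionally, users can verify that R is pointing to the intended Python environment and that the key modules are visible. This quick check uses the reticulate functions py_config() and py_module_available():

reticulate::py_config()                      ## shows the active Python binary and environment
reticulate::py_module_available("doubleml")  ## should return TRUE
reticulate::py_module_available("econml")    ## should return TRUE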

Load packages

These packages are necessary for this tutorial:

library(interflex)
library(ggplot2)
library(ggpubr)
library(rmarkdown)
library(patchwork)

reticulate::use_virtualenv("myenv") ## point to your Python environment


This vignette gives a brief overview of estimating conditional marginal effects (CME) using the interflex package, focusing on the double/debiased machine learning (DML) estimator. It walks through using DML estimators in different empirical setups, using both simulated and real-world data, including:

DML Estimators

Applications


Overview of Methodology

This section explores the basic ideas behind the DML framework, which enables the use of modern machine learning techniques to robustly estimate the CME. The DML estimator for the CME was first developed by @semenova2021debiased and extended by @bonvini2023flexibly.

Estimand

We begin by formally defining the CME using the Neyman–Rubin potential outcomes framework [@rubin1974estimating]. Let $Y_i(d)$ be the potential outcome corresponding to unit $i$'s response to treatment value $D_i = d$. $X_i$ is the moderator of interest, and $Z_i$ is a vector of additional covariates. We define the full set of covariates as $V_i = (X_i, Z_i)$. The CME of treatment $D$ on $Y$ by $X$ is:

$$ \theta(x) = \mathbb{E}\left[\frac{\partial Y_i(d)}{\partial D} \mid X_i = x \right] $$

When $D_{i}$ is binary, it simplifies to $\mathbb{E}[Y_i(1) - Y_i(0) \mid X_i = x]$, which is essentially the Conditional Average Treatment Effect (CATE) at covariate value $v$, $\tau(v) = \mathbb{E}[Y_i(1) - Y_i(0) \mid V_i = v]$, marginalized over the distribution of the additional covariates $Z_{i}$.
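
Written out, by iterated expectations the CME in the binary case is the CATE averaged over the conditional distribution of the additional covariates:

$$ \theta(x) = \mathbb{E}\left[\tau(V_i) \mid X_i = x\right] = \mathbb{E}_{Z_i \mid X_i = x}\Big[\mathbb{E}\big[Y_i(1) - Y_i(0) \mid X_i = x, Z_i\big]\Big] $$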

Note that this definition implicitly invokes the stable unit treatment value assumption (SUTVA): each unit's potential outcomes depend only on its own treatment status (no interference across units), and there are no hidden versions of the treatment.


Identification assumptions

In the cross-sectional setting, we assume SUTVA, unconfoundedness, and strict overlap.

When $D$ is binary, the strict overlap assumption simplifies to: for some positive $\eta$, $$\eta \leq \mathbb{P}(D_i = 1 \mid V_i = v) \leq 1-\eta,\quad\text{with probability } 1.$$ This assumption states that every unit $i$ must have a non-zero probability of being assigned to each treatment condition.


The DML framework

When $D$, $X$ and $Z$ are all discrete, we can estimate $\mathbb{E}[Y(1)-Y(0)|X]$ nonparametrically. However, when one of them is continuous or high-dimensional, CME becomes difficult to estimate without imposing functional form assumptions. These assumptions can be too restrictive and easily violated in practice.
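
For intuition, here is a minimal sketch of that fully nonparametric strategy on a hypothetical toy dataset in which $D$, $X$, and $Z$ are all binary; it simply takes the difference in means within each $(X, Z)$ cell and then averages over $Z$ within each value of $X$:

## toy illustration (not part of the package): fully nonparametric CME
## when D, X, and Z are all binary
set.seed(1)
n <- 5000
X <- rbinom(n, 1, 0.5)
Z <- rbinom(n, 1, 0.5)
D <- rbinom(n, 1, 0.3 + 0.2 * Z)      # treatment assignment depends on Z
Y <- 1 + 2 * D + D * X - Z + rnorm(n) # true CME: 2 + X
toy <- data.frame(Y, D, X, Z)

## cell means by (D, X, Z), then difference over D and average over Z within X
cell.means <- tapply(toy$Y, list(D = toy$D, X = toy$X, Z = toy$Z), mean)
cate <- cell.means["1", , ] - cell.means["0", , ]  # 2 x 2 matrix: X by Z
rowMeans(cate)                                     # roughly 2 (X = 0) and 3 (X = 1)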

Here, we introduce the double/debiased machine learning (DML) estimator to flexibly estimate the CME under mild conditions. The primary advantage of DML lies in its ability to use machine learning methods to estimate the nuisance parameters $\eta_0$ (parameters that are not of direct interest to researchers) in a nonparametric, data-driven way, while still providing valid inference for the target quantity, in this case the CME $\theta_0$.

Building on the principles of doubly robust estimation, DML was introduced by @chernozhukov2018double to estimate a low-dimensional parameter $\theta_0$ in the presence of high-dimensional nuisance parameters $\eta_0$. DML extends the doubly robust framework by integrating machine learning algorithms to flexibly model both the outcome and the treatment assignment processes.

The DML method yields unbiased, $\sqrt{n}$-consistent estimators and confidence intervals for the low-dimensional parameter of interest, $\theta_0$, even in the presence of potentially high-dimensional nuisance parameters $\eta_0$. Crucially, it achieves this without imposing stringent functional form assumptions on how the high-dimensional covariates enter the model, instead learning these functions directly from the data.

The three key ingredients of the DML framework are: (1) a Neyman-orthogonal score for the parameter of interest, (2) high-quality machine learning estimation of the nuisance functions, and (3) cross-fitting.

If these three conditions are met, i.e., a Neyman-orthogonal score is constructed for the parameter of interest $\theta_0$, the nuisance functions are estimated with enough accuracy, and the nuisance estimates are cross-fitted, the DML estimator achieves the $\sqrt{n}$ rate of convergence and is approximately normally distributed. Both the rate of concentration and the distributional approximation hold uniformly over a broad class of probability distributions.
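
To make the cross-fitting idea concrete, the sketch below (a toy illustration with simple parametric learners, not the package's internal code) fits the outcome and treatment models on $K-1$ folds and evaluates them on the held-out fold, so that no observation is predicted by a model trained on itself. The objects mu1.hat, mu0.hat, and e.hat are hypothetical names reused in the sketches that follow.

## toy illustration of cross-fitting (binary treatment); simple GLMs stand in
## for the machine learning learners used by the package
set.seed(1)
n <- 1000
V <- matrix(rnorm(n * 3), n, 3)
D <- rbinom(n, 1, plogis(V[, 1]))
Y <- as.numeric(2 * D + V %*% c(1, -1, 0.5) + rnorm(n))  # constant treatment effect of 2
df <- data.frame(Y = Y, D = D, V1 = V[, 1], V2 = V[, 2], V3 = V[, 3])

K <- 5
folds <- sample(rep(1:K, length.out = n))
mu1.hat <- mu0.hat <- e.hat <- rep(NA, n)

for (k in 1:K) {
  train <- folds != k
  test  <- folds == k
  ## nuisance models are fit on the training folds only
  fit.y <- lm(Y ~ D + V1 + V2 + V3, data = df[train, ])                  # outcome model
  fit.d <- glm(D ~ V1 + V2 + V3, family = binomial, data = df[train, ])  # treatment model
  ## predictions are made on the held-out fold
  mu1.hat[test] <- predict(fit.y, newdata = transform(df[test, ], D = 1))
  mu0.hat[test] <- predict(fit.y, newdata = transform(df[test, ], D = 0))
  e.hat[test]   <- predict(fit.d, newdata = df[test, ], type = "response")
}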

Model

In the interflex package, for the CME with a binary treatment, the data-generating process (DGP) is modeled through an Interactive Regression Model (IRM); for the CME with a continuous treatment, the DGP is modeled through a Partially Linear Regression Model (PLR).

In practice, with a binary treatment, the IRM has the form:

$$ Y =g(V, D) + \epsilon,\ E[\epsilon \mid V, D] = 0 $$ $$ D = m(V) + \eta, \ E[\eta \mid V] = 0 $$

With continuous treatment, the PLR has the form: $$ Y = \theta(V)D + g(V) + \epsilon,\ E[\epsilon \mid V, D] = 0 $$ $$ D = m(V) + \eta, \ E[\eta \mid V] = 0 $$

where $g(\cdot)$ is the outcome regression function, $m(\cdot)$ is the treatment (propensity) model, $\theta(V)$ is the conditional treatment effect function in the PLR, and $\epsilon$ and $\eta$ are error terms with conditional mean zero.

Note that when $D$ is binary, $Y =g(V, D) + \epsilon$ can be written as $Y = \theta(V)D + g(V) + \epsilon$ without loss of generality.
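
To see this, note that for $D \in \{0, 1\}$,

$$ g(V, D) = g(V, 0) + D\,\big[g(V, 1) - g(V, 0)\big], $$

so setting $\theta(V) = g(V, 1) - g(V, 0)$ and relabeling $g(V, 0)$ as $g(V)$ recovers the PLR form.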


Procedure

The DML framework for CME operates through the following steps:

1. Estimate nuisance functions

2. Apply the Neyman orthogonality condition

The key to DML is the construction of an orthogonal score that satisfies the Neyman orthogonality condition.

In the IRM, the orthogonal signal $\Lambda(\eta)$ is constructed through an augmented inverse probability weighting (AIPW) correction and takes the form:

$$\Lambda(\eta) = \mu(1, V) - \mu(0, V) + \frac{D(Y - \mu(1, V))}{e(V)} - \frac{(1 - D)(Y - \mu(0, V))}{1 - e(V)}$$
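
For concreteness, continuing the toy cross-fitting sketch above (with the hypothetical cross-fitted estimates mu1.hat $= \hat{\mu}(1, V)$, mu0.hat $= \hat{\mu}(0, V)$, and e.hat $= \hat{e}(V)$), the orthogonal signal can be computed directly:

## AIPW orthogonal signal, computed from the cross-fitted nuisance estimates
Lambda <- mu1.hat - mu0.hat +
  D * (Y - mu1.hat) / e.hat -
  (1 - D) * (Y - mu0.hat) / (1 - e.hat)
mean(Lambda)  # roughly 2: the average treatment effect in the toy DGP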

In the PLR, the orthogonal signal is constructed via "partialling out" and takes the form:

$$\Lambda(\eta) = -\partial_d \log e(d\mid v)\,[Y-\mu(d,v)] + \partial_d\mu(d,v)$$

3. Dimension reduction

The constructed orthogonal signal is projected onto a functional space in $X$, which, in the interflex package, is either a B-spline basis or a kernel regression, to recover the CME $\theta(x)$.
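
Continuing the same toy sketch, the B-spline projection step can be illustrated with a least-squares regression of the signal on a spline basis in the moderator (here we simply treat the first covariate as the moderator; in this toy DGP the true CME is constant at 2, so the recovered curve should be roughly flat):

## project the orthogonal signal onto a B-spline basis in X to recover theta(x)
library(splines)
X <- V[, 1]                          # moderator (toy illustration only)
proj <- lm(Lambda ~ bs(X, df = 6))   # least-squares projection onto the spline basis
x.grid <- data.frame(X = seq(min(X), max(X), length.out = 50))
theta.hat <- predict(proj, newdata = x.grid)
plot(x.grid$X, theta.hat, type = "l", xlab = "Moderator X", ylab = "Estimated CME")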



DML Estimators

Load the simulated toy datasets, s6, s7, and s9, for this tutorial. s6 is a case of a binary treatment indicator with discrete outcomes. s7 is a case of a continuous treatment indicator with discrete outcomes. s9 is a case of a discrete treatment indicator (3 groups) with discrete outcomes.

data(interflex)
ls()
### continuous outcome
set.seed(1234)
n <- 200
x <- rnorm(n, 3, 1)
z <- rnorm(n, 3, 1)
e <- rnorm(n, 0, 1)

# s1
d1 <- sample(c(0, 1), n, replace = TRUE)
y1 <- 5 - 4 * x - 9 * d1 + 3 * x * d1 + 1 * z + 2 * e
s1 <- cbind.data.frame(Y = y1, D = d1, X = x, Z1 = z)

# s2
d2 <- rnorm(n, 3, 1)
y2 <- 5 - 4 * x - 9 * d2 + 3 * x * d2 + 1 * z + 2 * e
s2 <- cbind.data.frame(Y = y2, D = d2, X = x, Z1 = z)
set.seed(110)


n <- 2000
x <- runif(n,min=-3, max = 3)
d1 <- sample(x = c(0,1),size = n,replace = T)
d2 <- runif(min = 0,max = 1,n = n)
d3 <- sample(x = c(0,1,2),size = n,replace = T)
Z1 <- runif(min = 0,max = 1,n = n)
Z2 <- rnorm(n,3,1)

## s6
link1 <- -1+d1+x+d1*x
prob1 <- exp(link1)/(1+exp(link1))
rand.u <- runif(min = 0,max = 1,n = n)
y1 <- as.numeric(rand.u<prob1)
s6 <- cbind.data.frame(X=x,D=d1,Z1=Z1,Z2=Z2,Y=y1)

## s7
link2 <- -1+d2+x+d2*x
prob2 <- exp(link2)/(1+exp(link2))
rand.u <- runif(min = 0,max = 1,n = n)
y2 <- as.numeric(rand.u<prob2)
s7 <- cbind.data.frame(X=x,D=d2,Z1=Z1,Z2=Z2,Y=y2)

## s8
link3 <- -1+d1+x+d1*x-d1*x*x
prob3 <- exp(link3)/(1+exp(link3))
rand.u <- runif(min = 0,max = 1,n = n)
y3 <- as.numeric(rand.u<prob3)
s8 <- cbind.data.frame(X=x,D=d1,Z1=Z1,Z2=Z2,Y=y3)

## s9
link4 <- rep(NA,n)
link4[which(d3==0)] <- -x[which(d3==0)]
link4[which(d3==1)] <- (1+x)[which(d3==1)]
link4[which(d3==2)] <- (4-x*x-x)[which(d3==2)]

prob4 <- exp(link4)/(1+exp(link4))
rand.u <- runif(min = 0,max = 1,n = n)
y4 <- as.numeric(rand.u<prob4)
s9 <- cbind.data.frame(X=x,D=d3,Z1=Z1,Z2=Z2,Y=y4)

When the option estimator is set to "DML", the function conducts the two-stage DML procedure to compute the marginal effects. In the following command, the outcome variable is "Y", the treatment is "D", the moderator is "X", and the covariates are "Z1" and "Z2". The options model.y and model.t determine the machine learning models used to estimate the nuisance functions, and treat.type sets the data type of the treatment variable, which can be either continuous or discrete. Here, we use dataset s6 as an example. Since s6 has a binary treatment indicator with discrete outcomes, we set treat.type to discrete.

Binary treatment with discrete outcomes

To implement the DML estimator, we need to set the option estimator='DML' and specify two machine learning models: one for the outcome model (via the option model.y) and one for the treatment model (via the option model.t). Here, we use a neural network model (nn) for the outcome model and a random forest model (rf) for the treatment model. The gray area depicts the pointwise confidence interval, while the dashed line marks the uniform/joint confidence interval.

s6.DML.nn.rf <- interflex(estimator="DML", data = s6,
                                 Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),
                                 model.y = "nn",model.t = "rf")
plot(s6.DML.nn.rf)

The estimated treatment effects of $D$ on $Y$ across the support of $X$ are saved in:

## printing the first 10 rows
lapply(s6.DML.nn.rf$est.dml, head, 10)

Users can then use the plot command to visualize the treatment effects. For comparison, we can add the "true" treatment effects to the plot.

x <- s6$X
TE <- exp(-1+1+x+1*x)/(1+exp(-1+1+x+1*x))-exp(-1+0+x+0*x)/(1+exp(-1+0+x+0*x))
plot.s6 <- plot(s6.DML.nn.rf, show.all = TRUE, Xdistr = 'none')$`1`
plot.s6 + geom_line(aes(x=x,y=TE),color='red')

First-Stage DML Model Selection

The DML estimator supports three different machine learning algorithms: rf (Random Forest), nn (Neural Network), and hgb (Histogram-based Gradient Boosting). Users can change the model.y and model.t options to estimate the nuisance parameters using a different ML method. In this example, we use the dataset s9 to demonstrate the process:

## true treatment effects
x <- s9$X
TE1 <- exp(1+x)/(1+exp(1+x))-exp(-x)/(1+exp(-x))
TE2 <- exp(4-x*x-x)/(1+exp(4-x*x-x))-exp(-x)/(1+exp(-x))

## neural network + neural network
s9.DML.nn.nn <- interflex(estimator="DML", data = s9,
                          Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),
                          model.y = "nn",model.t = "nn")

plot.s9.nn.nn.1<-plot(s9.DML.nn.nn, show.all = T)[[1]]
plot.s9.nn.nn.2<-plot(s9.DML.nn.nn, show.all = T)[[2]]
p1.nn.nn<-plot.s9.nn.nn.1 + geom_line(aes(x=x,y=TE1),color='red')
p2.nn.nn<-plot.s9.nn.nn.2 + geom_line(aes(x=x,y=TE2),color='red')


## random forest + random forest
s9.DML.rf.rf <- interflex(estimator="DML", data = s9,
                          Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),
                          model.y = "rf",model.t = "rf")

plot.s9.rf.rf.1<-plot(s9.DML.rf.rf, show.all = T)[[1]]
plot.s9.rf.rf.2<-plot(s9.DML.rf.rf, show.all = T)[[2]]
p1.rf.rf<-plot.s9.rf.rf.1 + geom_line(aes(x=x,y=TE1),color='red')
p2.rf.rf<-plot.s9.rf.rf.2 + geom_line(aes(x=x,y=TE2),color='red')


## hist gradient boosting + hist gradient boosting
s9.DML.hgb.hgb <- interflex(estimator="DML", data = s9,
                          Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),
                          model.y = "hgb",model.t = "hgb")

plot.s9.hgb.hgb.1<-plot(s9.DML.hgb.hgb, show.all = T)[[1]]
plot.s9.hgb.hgb.2<-plot(s9.DML.hgb.hgb, show.all = T)[[2]]
p1.hgb.hgb<-plot.s9.hgb.hgb.1 + geom_line(aes(x=x,y=TE1),color='red')
p2.hgb.hgb<-plot.s9.hgb.hgb.2 + geom_line(aes(x=x,y=TE2),color='red')
p1<-ggarrange(p1.nn.nn, p2.nn.nn)
p1<-annotate_figure(p1, 
                    top = text_grob("Neural Network",
                    face = "bold", size = 13))

p2<-ggarrange(p1.rf.rf, p2.rf.rf)
p2<-annotate_figure(p2,
                    top = text_grob("Random Forest",
                    face = "bold", size = 13))

p3<-ggarrange(p1.hgb.hgb, p2.hgb.hgb)
p3<-annotate_figure(p3,
                    top = text_grob("Hist Gradient Boosting",
                    face = "bold", size = 13))

ggarrange(p1,p2,p3, ncol = 1)

We find that different ML models give very similar results, which is expected since there are only two confounders, $Z_{1}$ and $Z_{2}$, in the dataset, and all methods can accurately partial out their impacts on the outcome and the treatment.

Specifying Model Parameters

Continuous treatment with discrete outcome

Users can customize the parameters for the machine learning models in both stages by specifying the options param.y and param.t. For the full list of parameters applicable to the neural network model, users can refer to MLPRegressor or MLPClassifier; for the random forest method, to RandomForestRegressor or RandomForestClassifier; and for the hist gradient boosting method, to HistGradientBoostingRegressor or HistGradientBoostingClassifier. Here we use dataset s7 to explore how selecting different DML parameters affects performance. We can directly specify the size of the neural networks, the activation function, the learning rate, and the solver using the list param_nn and feed it to the DML estimator.

param_nn <- list(
  hidden_layer_sizes = list(10L,20L,10L),
  activation = c("tanh"),
  solver = c("sgd"),
  alpha = c(0.05),
  learning_rate = c("adaptive")
)

s7.DML.nn.linear.para <- interflex(estimator="DML", data = s7,
                                       Y = "Y", D = "D", X = "X", 
                                       Z = c("Z1", "Z2"),
                                       model.y = "nn", param.y = param_nn,
                                       model.t = "nn", param.t = param_nn)
n <- nrow(s7)
d2 <- s7$D
x_values <- sort(s7$X) 
d2_value <- median(d2)  # Example value for d2

marginal_effect <- function(D_value, X_value) {
  link_value <- -1 + D_value + X_value + D_value * X_value
  prob_value <- exp(link_value) / (1 + exp(link_value))
  marginal_effect_value <- prob_value * (1 - prob_value) * (1 + X_value)
  return(marginal_effect_value)
}

# Applying the function to the range of x values to calculate true ME
ME <- sapply(x_values, function(x_val) marginal_effect(d2_value, x_val))


p.s7.nn.linear <- plot(s7.DML.nn.linear.para, show.all = TRUE, ylim = c(-0.5,0.5))[[1]]
p.s7.nn.linear<-p.s7.nn.linear + ylim(-0.5,0.5) + 
  geom_line(aes(x=x_values,y=ME),color='red')+
  ggtitle("Neural Network with Specified Parameters")+
  theme(plot.title = element_text(size=10))

Cross-Validation

Users can also specify grids of parameters using the options param.grid.y and param.grid.t. The DML estimator will then search through all combinations of parameter settings and pick the most suitable one using cross-validation. We need to set the option CV to TRUE to invoke cross-validation. As an example, we specify the grids used for the neural network, random forest, and hist gradient boosting models, and then compare their performance on s7.

param_grid_nn <- list(
  hidden_layer_sizes = list(list(10L,20L,10L), list(20L)),
  activation = c("tanh","relu"),
  solver = c("sgd","adam"),
  alpha = c(0.0001,0.05),
  learning_rate = c("constant","adaptive")
)

param_grid_rf <- list(
  max_depth = c(2L,3L,5L,7L),
  n_estimators = c(10L, 30L, 50L, 100L, 200L, 400L, 600L, 800L, 1000L),
  max_features = c(1L,2L)
)

param_grid_hgb <- list(
  max_leaf_nodes = c(5L,10L,20L,30L),
  min_samples_leaf = c(5L,10L,20L),
  l2_regularization = c(0,0.1,0.5),
  max_features = c(0.5,1)
)


# nn (cv) + rf (cv) + linear
s7.cvnn.cvrf <- interflex(estimator="DML", data = s7,
                                     Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),CV = TRUE,
                                     model.y = "nn",param.grid.y = param_grid_nn,
                                     model.t = "rf",param.grid.t = param_grid_rf)

# nn (cv) + hgb (cv) + regularization (cv)
s7.cvnn.cvhgb <- interflex(estimator="DML", data = s7,
                                     Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),CV = TRUE,
                                     model.y = "nn",param.grid.y = param_grid_nn,
                                     model.t = "hgb",param.grid.t = param_grid_hgb)

# rf (cv) + rf (cv) + rf (cv)
s7.cvrf.cvrf <- interflex(estimator="DML", data = s7,
                                     Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),CV = TRUE,
                                     model.y = "rf",param.grid.y = param_grid_rf,
                                     model.t = "rf",param.grid.t = param_grid_rf)

# hgb (cv) + hgb (cv) + hgb (cv)
s7.cvhgb.cvhgb <- interflex(estimator="DML", data = s7,
                                     Y = "Y", D = "D", X = "X", Z = c("Z1", "Z2"),CV = TRUE,
                                     model.y = "hgb",param.grid.y = param_grid_hgb,
                                     model.t = "hgb",param.grid.t = param_grid_hgb)
p.1<-plot(s7.cvnn.cvrf, show.all = TRUE, ylim = c(-0.5,0.9))[[1]]
p.2<-plot(s7.cvnn.cvhgb, show.all = TRUE, ylim = c(-0.5,0.9))[[1]]
p.3<-plot(s7.cvrf.cvrf, show.all = TRUE, ylim = c(-0.5,0.9))[[1]]
p.4<-plot(s7.cvhgb.cvhgb, show.all = TRUE, ylim = c(-1.5,1.5))[[1]]

## adding true treatment effects when d = 0.53 (50%) to the plots 
p.1<-p.1 + ylim(-0.5,0.9) +
  geom_line(aes(x=x_values,y=ME),color='red')+
  ggtitle("Neural Network (cv) + Random Forest (cv) + Linear")+
  theme(plot.title = element_text(size=10))


p.2<-p.2 +ylim(-0.5,0.9)+
  geom_line(aes(x=x_values,y=ME),color='red')+
  ggtitle("Neural Network (cv) + Random Forest (cv) + Lasso (CV)")+
  theme(plot.title = element_text(size=10))


p.3<-p.3  +ylim(-0.5,0.9)+
  geom_line(aes(x=x_values,y=ME),color='red')+
  ggtitle("Random Forest (cv) + Random Forest (cv) + Random Forest (cv)")+
  theme(plot.title = element_text(size=8))

p.4<-p.4 + ylim(-1.5,1.5)+
  geom_line(aes(x=x_values,y=ME),color='red')+
  ggtitle("HGB (cv) + HGB (cv) + HGB (cv)")+
  theme(plot.title = element_text(size=10))

ggarrange(p.1, p.2, p.3, p.4)

Application 1: Binary Treatment with Continuous Outcomes

In this section, we walk through an example application of DML estimator with binary treatment and continuous outcome. The data we are using is from @huddy2015expressive, which explores the expressive model of partisanship, contrasting it with instrumental perspectives on political behavior. It argues that partisan identity, more than policy stances or ideological alignment, drives campaign involvement and emotional responses to political events. Key findings indicate that strongly identified partisans are more likely to engage in campaign activities and exhibit stronger emotional reactions—such as anger when threatened with electoral loss and positivity when victory seems likely—compared to those with weaker partisan identities.

The authors drew data from four studies conducted among populations that differ in their level of political activity: a highly engaged sample recruited from political blogs, and less politically engaged samples of students, New York (NY) State residents, and a national YouGov panel. The total number of respondents was 1,482. The variables of interest include

Data overview

d <- app_hma2015
paged_table(head(d))
Y <- "totangry" 
D <- "threat" 
X <- "pidentity"
Z <- c("issuestr2", "pidstr2", "pidstr2_threat" ,"issuestr2_threat", "knowledge" , "educ" , "male" , "age10" )

Dlabel <- "Threat"
Xlabel <- "Partisan Identity"
Ylabel <- "Anger"
vartype <- "robust"
main <- "Huddy et al. (2015) \n APSR"
cl <- cuts <- cuts2 <- time <- NULL

Estimating marginal effects

out.dml.nn<-interflex(estimator='DML', data = d,
                      Y=Y,D=D,X=X, Z = Z, treat.type = "discrete",
                      model.y = 'nn', model.t = 'nn', 
                      Xlabel=Xlabel,Ylabel=Ylabel, Dlabel=Dlabel)

lapply(out.dml.nn$est.dml, head, 20)

Comparison of estimators

out.dml.nn<-interflex(estimator='DML', data = d,
                      model.y = 'nn', model.t = 'nn',
                      Y=Y,D=D,X=X, Z = Z, treat.type = "discrete",
                      Xlabel=Xlabel,
                      Ylabel=Ylabel, Dlabel=Dlabel,ylim=c(-0.4,1))

out.dml.rf<-interflex(estimator='DML', data = d, 
                      model.y = 'rf', model.t = 'rf', 
                      Y=Y,D=D,X=X, Z = Z, treat.type = "discrete",
                      Xlabel=Xlabel,
                      Ylabel=Ylabel, Dlabel=Dlabel,ylim=c(-0.4,1))

out.dml.hgb<-interflex(estimator='DML', data = d, 
                       model.y = 'hgb', model.t = 'hgb', 
                       Y=Y,D=D,X=X, Z = Z, treat.type = "discrete",
                       Xlabel=Xlabel,
                       Ylabel=Ylabel, Dlabel=Dlabel,ylim=c(-0.4,1))

out.raw <- interflex(estimator = "raw", Y=Y,D=D,X=X,data=d, Xlabel=Xlabel,
                     Ylabel=Ylabel, Dlabel=Dlabel)

out.est1<-interflex(estimator = "binning",Y=Y,D=D,X=X,Z=Z,data=d,
                    Xlabel=Xlabel, Ylabel=Ylabel, Dlabel=Dlabel,
                    nbins=3, cutoffs=cuts,  cl=cl, time=time,
                    pairwise=TRUE,  Xdistr = "histogram",ylim=c(-0.4,1))

out.kernel<-interflex(estimator = "kernel", data=d, Y=Y,D=D,X=X,Z=Z,
                      cl=NULL, vartype = 'bootstrap', nboots = 2000, 
                      parallel = T, cores = 31,
                      Dlabel=Dlabel, Xlabel=Xlabel, Ylabel=Ylabel,
                      bw=0.917, Xdistr = "histogram", ylim=c(-0.4,1))
ggarrange(out.raw, out.est1$figure, out.kernel$figure,
          out.dml.nn$figure, out.dml.rf$figure, out.dml.hgb$figure, 
          common.legend = TRUE, legend = "none",
          labels = c("Raw", "Binning", "Kernel", 
                     "DML: nn","DML: rf", "DML: hgb"))

The raw-data panel features a diagnostic scatterplot, illustrating that the relationship between anger and partisan identity is closely approximated by a linear fit in both the treated and untreated groups, as indicated by the proximity of the linear and LOESS lines. This supports the assumption of a nearly linear interaction effect, with the impact of the threat on anger intensifying as partisan identity increases. Furthermore, the box plots demonstrate sufficient support for partisan identity ranging from approximately 0.3 to 1.

The three DML estimator plots consistently show that the marginal effect of the threat on anger escalates with increasing partisan identity across all machine learning methods employed. However, these DML estimators appear to overfit when partisan identity is below 0.25, a region with sparse data. The wider confidence intervals in areas where partisan identity is low (e.g., below 0.25) further indicate greater uncertainty in the model's estimates in these areas.


Application 2: Continuous Treatment with Continuous Outcomes

In this section, we demonstrate the application of the DML estimator in scenarios with continuous treatment and outcome variables. We refer to the study by @vernby2013inclusion, which examines the impact of non-citizen suffrage on public policy in Swedish municipalities. Vernby's analysis reveals that municipalities with a higher percentage of school-aged non-citizens saw a larger increase in education spending following the enfranchisement of non-citizens.

In the example, the dataset consists of 183 cases. The key variables include:

Data overview

d <- app_vernby2013
paged_table(head(d))

Y <- "school_diff" 
D <- "noncitvotsh" 
X <- "noncit15" 
Z <- c("Taxbase2" ,  "Taxbase2_2" , "pop" , "pop_2" ,   "manu" , "manu_2")

Dlabel <- "Share NC"
Xlabel <- "Prop. School-Aged NC"
Ylabel <- "Δ Ed. Services"
vartype <- "robust"
name <- "vernby_2013a"
main <- "Vernby (2013) \n AJPS"
time <- cl <- cuts <- cuts2 <- NULL

Estimating marginal effects

out.dml.nn<-interflex(estimator='DML', data = d, 
          Y=Y,D=D,X=X, Z = Z, treat.type = "continuous",
          model.y = 'nn', model.t = 'nn',
          Xlabel=Xlabel,
          Ylabel=Ylabel, Dlabel=Dlabel)

lapply(out.dml.nn$est.dml, head, 20)

Comparison of estimators

out.dml.nn<-interflex(estimator='DML', data = d, 
          Y=Y,D=D,X=X, Z = Z, treat.type = "continuous",
          model.y = 'nn', model.t = 'nn', 
          Xlabel=Xlabel, xlim = c(0.125,0.3),ylim = c(-25,75),
          Ylabel=Ylabel, Dlabel=Dlabel)

out.dml.rf<-interflex(estimator='DML', data = d, 
                       Y=Y,D=D,X=X, Z = Z, treat.type = "continuous",
                       model.y = 'rf', model.t = 'rf', 
                       Xlabel=Xlabel,xlim = c(0.125,0.3),ylim = c(-25,75),
                       Ylabel=Ylabel, Dlabel=Dlabel)

out.dml.hgb<-interflex(estimator='DML', data = d, 
                       Y=Y,D=D,X=X, Z = Z, treat.type = "continuous",
                       model.y = 'hgb', model.t = 'hgb', 
                       Xlabel=Xlabel,xlim = c(0.125,0.3),ylim = c(-25,75),
                       Ylabel=Ylabel, Dlabel=Dlabel)

out.raw <- interflex(estimator = "raw", Y=Y,D=D,X=X,data=d,Xlabel=Xlabel,
                     Ylabel=Ylabel, Dlabel=Dlabel, cutoffs=cuts,span=NULL)


out.est1<-interflex(estimator = "binning",Y=Y,D=D,X=X,Z=Z,data=d,
                    Xlabel=Xlabel, Ylabel=Ylabel, Dlabel=Dlabel,
                    cutoffs=cuts,  cl=cl, time=time,xlim = c(0.125,0.3),
                    pairwise=TRUE, Xdistr = "histogram")


out.kernel<-interflex(estimator = "kernel", data=d, Y=Y,D=D,X=X,Z=Z,
                      cl=cl,vartype = 'bootstrap', nboots = 2000, 
                      parallel = T, cores = 31,xlim = c(0.125,0.3),
                      Dlabel=Dlabel, Xlabel=Xlabel, Ylabel=Ylabel,
                      Xdistr = "histogram")
ggarrange(out.raw, 
          out.est1$figure, 
          out.kernel$figure,
          out.dml.nn$figure, 
          out.dml.rf$figure, out.dml.hgb$figure, 
          common.legend = TRUE, legend = "none",
          labels = c("Raw", "Binning", "Kernel", "DML: nn","DML: rf","DML: hgb"))


References


