knitr::opts_chunk$set(echo = TRUE)
knitr::opts_chunk$set(eval = TRUE)
The R package rdlearn implements safe policy learning under regression discontinuity designs with multiple cutoffs, following Zhang et al. (2022). It provides functions to learn improved treatment assignment rules (cutoffs) that are guaranteed to yield no worse overall outcomes than the existing cutoffs.
In this vignette, we replicate some of the empirical application results from Section 6 of Zhang et al. (2022).
The rdlearn package for R can be installed from GitHub with the command below (this requires the remotes package to be installed first).
remotes::install_github("kkawato/rdlearn@0.1.1")
Load the package after the installation is complete.
library(rdlearn)
For our case study, we use the acces dataset and apply the proposed methodology to the ACCES (Access with Quality to Higher Education) program, a national-level subsidized loan initiative in Colombia. To qualify for the ACCES program, students must achieve a high score on the SABER 11, the national high school exit exam. Each student is assigned a position score ranging from 1 to 1,000, based on the student's rank among 1,000 quantiles of the exam score.
To select ACCES beneficiaries, the Colombian Institute for Educational Loans and Studies Abroad establishes an eligibility cutoff such that students whose position scores are higher than the cutoff become eligible for the ACCES program. The eligibility cutoff is defined separately for each department (or region) of Colombia (see Melguizo et al. (2016) for more details).
The acces dataset includes four columns. The elig column contains the outcome: eligibility for the ACCES program (1: eligible; 0: not eligible). The saber11 column contains the running variable (position scores from the SABER 11 exam). The cutoff column contains the eligibility cutoff for each department, and the department column contains the names of the departments.
library(rdlearn)

# Load acces data
data(acces)
head(acces)
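Because the eligibility cutoff varies by department, it can be helpful to look at the distinct department–cutoff pairs before estimation. The following is a small sketch using only base R on the columns described above; it is not part of the rdlearn API.

# List each department together with its baseline eligibility cutoff
unique(acces[, c("department", "cutoff")])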
First, we demonstrate how to reproduce the summary in Table 1 of Zhang et al. (2022), which includes local treatment effect estimates at the baseline cutoffs. This can be done as follows:
rdestimate_result <- rdestimate(
  data = acces,
  y = "elig",
  x = "saber11",
  c = "cutoff",
  group_name = "department"
)
print(rdestimate_result)
This provides basic information, including the sample size and baseline cutoff for each group, as well as the RD treatment effect and standard error. RD treatment effects marked with an asterisk (*) are significant at the 5% level.
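To inspect the returned object programmatically rather than through its printed summary, generic base R tools can be used. The internal structure of the object is a package detail and may change, so this is only a quick look.

# Inspect the top-level components of the rdestimate result object
str(rdestimate_result, max.level = 1)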
Next, we demonstrate how to obtain the results of Figure 2, which shows the cutoff change relative to the baseline for each department under different smoothness multiplicative factors (M). Due to the computational time required, we present only a part of Figure 2. Learning safe cutoffs can be done as follows:
# set seed for replication
set.seed(12345)

# only a subset of data is used
acces_filtered <- acces[acces$department %in% c("DISTRITO CAPITAL", "MAGDALENA"), ]

# To replicate exactly, set data = acces, fold = 20 and M = c(0, 1, 2, 4)
rdlearn_result <- rdlearn(
  data = acces_filtered,
  y = "elig",        # elig
  x = "saber11",     # saber11
  c = "cutoff",      # cutoff
  group_name = "department",
  fold = 2,
  M = 1,
  cost = 0,
  trace = FALSE
)
summary(rdlearn_result)
plot(rdlearn_result, opt = "dif")
This plot shows the cutoff changes relative to the baseline for each department under different smoothness multiplicative factors (M).
The main function here is rdlearn. The arguments include data for the dataset, y for the column name of the outcome, x for the column name of the running variable, c for the column name of the cutoff, and group_name for the column name of the group. These arguments specify the data we analyze. The fold argument specifies the number of folds for cross-fitting.
For sensitivity analysis, the M argument specifies the multiplicative smoothness factor and cost specifies the treatment cost.
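Putting these arguments together, a full replication of Figure 2 would use the complete data with the settings noted in the code comment above (fold = 20 and a vector of smoothness factors). The following is only a sketch: it is computationally demanding and is not evaluated here, and the object name rdlearn_full is ours.

# Full-replication call (computationally heavy; not run in this vignette)
set.seed(12345)
rdlearn_full <- rdlearn(
  data = acces,
  y = "elig",
  x = "saber11",
  c = "cutoff",
  group_name = "department",
  fold = 20,
  M = c(0, 1, 2, 4),
  cost = 0,
  trace = FALSE
)
plot(rdlearn_full, opt = "dif")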
For an rdlearn_result object obtained by rdlearn, we can use summary to display the RD estimates, the obtained safe cutoffs, and the differences between the safe and original cutoffs.
The plot function provides a clear visualization of the safe cutoffs. Using plot(result, opt = "safe") shows the obtained safe cutoffs, while plot(result, opt = "dif") shows the differences between the safe and original cutoffs.
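For example, both visualizations can be produced from the object fitted above:

# Safe cutoffs for each department
plot(rdlearn_result, opt = "safe")

# Differences between the safe and baseline cutoffs (same as the plot shown earlier)
plot(rdlearn_result, opt = "dif")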
The trace argument can be set to TRUE to show progress during the learning process.
Next, we show the results of another sensitivity analysis, corresponding to Figure 3 in Zhang et al. (2022). For the same computational reason, we present only a part of Figure 3.
Following Zhang et al. (2022), we assume the utility function $u(y, w) = y - C \times w$, where $y$ is a binary outcome (representing the utility gain from enrollment), $C$ is a cost parameter ranging from 0 to 1, and $w$ is a binary treatment indicator (representing the offering of a loan). To explore the trade-off between cost and utility, we conduct a sensitivity analysis for the cost parameter $C$.
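To make this trade-off concrete, the utility function can be written as a small helper. This is purely illustrative arithmetic based on the formula above, not a function provided by the package.

# Illustrative utility: outcome gain minus the cost of offering the loan
utility <- function(y, w, C) {
  y - C * w
}

# A treated unit (w = 1) with a positive outcome (y = 1) under cost C = 0.4
utility(y = 1, w = 1, C = 0.4)  # 0.6

# An untreated unit incurs no cost, so its utility equals its outcome
utility(y = 0, w = 0, C = 0.4)  # 0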
We use the sens function with the rdlearn_result object as follows:
# To replicate exactly, set cost = c(0, 0.2, 0.4, 0.6, 0.8, 1)
sens_result <- sens(
  rdlearn_result,
  M = 1,
  cost = 1,
  trace = FALSE
)
plot(sens_result, opt = "dif")
This plot shows the cutoff change relative to the baseline for each department under different values of cost.
If the learning process has already been run and the rdlearn_result object is available, the sens function can be used to vary the M and cost parameters specifically for sensitivity analysis.
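For instance, reusing the fitted object above, the full cost grid mentioned in the code comment could be explored in one call. This is again shown only as a sketch because of the computation time, and the object name sens_full is ours.

# Sensitivity analysis over the full cost grid (computationally heavy; not run here)
sens_full <- sens(
  rdlearn_result,
  M = 1,
  cost = c(0, 0.2, 0.4, 0.6, 0.8, 1),
  trace = FALSE
)
plot(sens_full, opt = "dif")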
Zhang, Y., Ben-Michael, E., and Imai, K. (2022). Safe policy learning under regression discontinuity designs with multiple cutoffs. arXiv preprint. Available at: http://arxiv.org/abs/2208.13323.
Melguizo, T., Sanchez, F., and Velasco, T. (2016). Credit for low-income students and access to and academic performance in higher education in Colombia: A regression discontinuity approach. World Development, 80, 61–77.