```r
# setup chunk for building the README
knitr::opts_chunk$set(echo = TRUE, cache = TRUE)
library(knitr)
library(dplyr)
library(devtools)
load_all()
```

# RPadjust (Risk Preference Adjustment)

Project status: WIP. Initial development is in progress, but there has not yet been a stable, usable release suitable for the public.

## Introduction

The RPadjust package contains functions for implementing the risk preference adjustment procedure outlined in Turner and Landry (2021).

Turner, D. and Landry, C. (2021). Accounting for uncertainty in decision weights for experimental elicitation of risk preferences. SSRN Working Paper. Available at SSRN: https://ssrn.com/abstract=3882694.

## Working Example

First, we will simulate some data to work with. To calculate adjusted risk preferences we need the following for each respondent: the observed lottery choice, the probabilities and payoffs of each outcome in each lottery, and the responses to the subjective belief debriefing questions.

```r
n <- 10 # number of observations for the fake data

# initialize a data frame to hold the fake data
df <- data.frame(lottery_choice     = sample(1:5, n, replace = TRUE),
                 lottery_probs_1    = NA,
                 lottery_probs_2    = NA,
                 lottery_payoffs_1  = NA,
                 lottery_payoffs_2  = NA,
                 subjective_beliefs = NA)

# populate the df with fake data (each cell holds a vector, so these become list-columns)
df$lottery_probs_1   <- list(c(1, .5, .225, .125, .025))
df$lottery_probs_2   <- list(c(0, .5, .775, .875, .975))
df$lottery_payoffs_1 <- list(c(5, 8, 22, 60, 300))
df$lottery_payoffs_2 <- list(c(0, 3, 2, 0, 0))
for (k in 1:nrow(df)) {
  df$subjective_beliefs[k] <- list(sample(1:5, 4, replace = TRUE))
}

kable(head(df))
```
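
Note that the probability, payoff, and belief cells above each store a whole vector (the columns are list-columns), so a single row describes every lottery presented to a respondent. For example, the simulated belief responses for the first respondent can be inspected with base R subsetting:

```r
# the four Likert responses (values between 1 and 5) for respondent 1
df$subjective_beliefs[[1]]
```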

Next, we can calculate the risk preference implied by each respondent's observed lottery choice using the `adjust_rp()` function.

```r
# calculate the risk preferences of each respondent (i.e., each row in the data frame)
for (obs in 1:nrow(df)) {
  risk_pref_unadjusted <- adjust_rp(
    mc_reps = 100,            # the number of Monte Carlo reps to use
    large_adjustment = .10,   # the upper bound on the large adjustment interval
    small_adjustment = .05,   # the upper bound on the small adjustment interval
    rp_lb = -2,               # the lower bound on the range of CRRA values to consider
    rp_ub = 2,                # the upper bound on the range of CRRA values to consider
    rp_resolution = .01,      # the resolution of the CRRA value
                              # (controls how finely we search the parameter space)
    lottery_probs_1 = df$lottery_probs_1[obs][[1]],     # probabilities that the first outcome in each lottery occurs
    lottery_probs_2 = df$lottery_probs_2[obs][[1]],     # probabilities that the second outcome in each lottery occurs
    lottery_payoffs_1 = df$lottery_payoffs_1[obs][[1]], # payoffs for the first outcome in each lottery
    lottery_payoffs_2 = df$lottery_payoffs_2[obs][[1]], # payoffs for the second outcome in each lottery
    sub_beliefs = c(3, 3, 3, 3), # Likert responses to the subjective probability debriefing questions;
                                 # setting all equal to 3's produces unadjusted risk preference parameters
    lottery_choice = df$lottery_choice[obs], # the observed lottery choice of the respondent
    utility_function = "crra",   # the utility function to use in the risk preference calculation
                                 # (crra is the only choice right now)
    initial_wealth = 1e-18,      # initial wealth to assume; arbitrarily close to zero,
                                 # i.e., assuming the agent plays the lottery in isolation
    returned_obj = "range")      # return the range of the implied CRRA interval
                                 # (set to "midpoint" to get the midpoint of the implied range)

  print(risk_pref_unadjusted)
}
```
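
Passing a respondent's actual debriefing responses to `sub_beliefs` (rather than neutral 3's) yields an adjusted risk preference. Below is a minimal sketch using the simulated list-columns from above; all other argument values are kept identical to the unadjusted call, and `returned_obj = "midpoint"` is used to return a single value instead of an interval:

```r
# adjusted risk preference for the first respondent, using their simulated
# subjective beliefs instead of the neutral 3's
obs <- 1
risk_pref_adjusted <- adjust_rp(
  mc_reps = 100,
  large_adjustment = .10,
  small_adjustment = .05,
  rp_lb = -2,
  rp_ub = 2,
  rp_resolution = .01,
  lottery_probs_1 = df$lottery_probs_1[obs][[1]],
  lottery_probs_2 = df$lottery_probs_2[obs][[1]],
  lottery_payoffs_1 = df$lottery_payoffs_1[obs][[1]],
  lottery_payoffs_2 = df$lottery_payoffs_2[obs][[1]],
  sub_beliefs = df$subjective_beliefs[obs][[1]], # the respondent's actual Likert responses
  lottery_choice = df$lottery_choice[obs],
  utility_function = "crra",
  initial_wealth = 1e-18,
  returned_obj = "midpoint") # the midpoint of the implied CRRA interval

print(risk_pref_adjusted)
```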


