
RPadjust (Risk Preference Adjustment)

Project Status: WIP – Initial development is in progress, but there has not yet been a stable, usable release suitable for the public.

Introduction

The RPadjust package contains functions for implementing the risk preference adjustment procedure outlined in Turner and Landry (2021).

Turner, D. and Landry, C. (2021). Accounting for uncertainty in decision weights for experimental elicitation of risk preferences. SSRN Working Paper. Available at SSRN: https://ssrn.com/abstract=3882694.
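
Installation

The package is not yet on CRAN (see the WIP status above), so it would typically be installed from its GitHub repository, dylan-turner25/RPadjust. A minimal sketch, assuming the remotes package is available and the development version builds cleanly:

# install.packages("remotes") # if remotes is not already installed
remotes::install_github("dylan-turner25/RPadjust")
library(RPadjust)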

Working Example

First, we will simulate some data to work with. To calculate adjusted risk preferences we need, for each respondent: the observed lottery choice, the probabilities of each lottery's two outcomes, the corresponding payoffs, and the Likert-scale responses to the subjective-belief debriefing questions.

library(RPadjust) # for adjust_rp()
library(knitr)    # for kable()

n <- 10 # number of observations for the fake data

# initialize a data frame to hold the fake data
df <- data.frame(lottery_choice = sample(1:5, n, replace = TRUE),
                 lottery_probs_1 = NA,
                 lottery_probs_2 = NA,
                 lottery_payoffs_1 = NA,
                 lottery_payoffs_2 = NA,
                 subjective_beliefs = NA)

# populate the df with fake data
df$lottery_probs_1 <- list( c(1,.5,.225,.125,.025) ) 
df$lottery_probs_2 <- list( c(0,.5,.775,.875,.975) )
df$lottery_payoffs_1 <- list( c(5,8,22,60,325) )
df$lottery_payoffs_2 <- list( c(0,3,2,0,0) )
for(k in 1:nrow(df)){
  df$subjective_beliefs[k] <- list(sample(1:5, 4, replace = TRUE)) # likert responses to the four debriefing questions
}

kable(head(df))

| lottery_choice | lottery_probs_1 | lottery_probs_2 | lottery_payoffs_1 | lottery_payoffs_2 | subjective_beliefs |
|---------------:|:----------------------------------|:----------------------------------|:------------------|:------------------|:-------------------|
| 2 | 1.000, 0.500, 0.225, 0.125, 0.025 | 0.000, 0.500, 0.775, 0.875, 0.975 | 5, 8, 22, 60, 325 | 0, 3, 2, 0, 0 | 5, 2, 2, 5 |
| 1 | 1.000, 0.500, 0.225, 0.125, 0.025 | 0.000, 0.500, 0.775, 0.875, 0.975 | 5, 8, 22, 60, 325 | 0, 3, 2, 0, 0 | 3, 2, 5, 3 |
| 2 | 1.000, 0.500, 0.225, 0.125, 0.025 | 0.000, 0.500, 0.775, 0.875, 0.975 | 5, 8, 22, 60, 325 | 0, 3, 2, 0, 0 | 1, 5, 5, 4 |
| 1 | 1.000, 0.500, 0.225, 0.125, 0.025 | 0.000, 0.500, 0.775, 0.875, 0.975 | 5, 8, 22, 60, 325 | 0, 3, 2, 0, 0 | 5, 3, 4, 3 |
| 1 | 1.000, 0.500, 0.225, 0.125, 0.025 | 0.000, 0.500, 0.775, 0.875, 0.975 | 5, 8, 22, 60, 325 | 0, 3, 2, 0, 0 | 5, 2, 3, 5 |
| 5 | 1.000, 0.500, 0.225, 0.125, 0.025 | 0.000, 0.500, 0.775, 0.875, 0.975 | 5, 8, 22, 60, 325 | 0, 3, 2, 0, 0 | 4, 1, 4, 1 |
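
Note that the lottery and belief columns are list columns, so a single respondent's vector is extracted with [obs][[1]], as in the calls that follow. For example, for the first respondent:

df$lottery_probs_1[1][[1]] # probabilities of the first outcome for respondent 1
## [1] 1.000 0.500 0.225 0.125 0.025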

With the data in hand, we can calculate the risk preference implied by each respondent's observed lottery choice using the adjust_rp() function.

# calculate the risk preference implied by each respondent's lottery choice
# (i.e., loop over the rows of the data frame)
for(obs in 1:nrow(df)){
  risk_pref_unadjusted <- adjust_rp(mc_reps = 100, # the number of monte-carlo reps to use
                         large_adjustment = .10, # the upper bound on the large adjustment interval
                         small_adjustment = .05, # the upper bound on the small adjustment interval
                         rp_lb = -2, # the lower bound on the range of CRRA values to consider
                         rp_ub = 2, # the upper bound on the range of CRRA values to consider
                         rp_resolution = .01, # the resolution of the CRRA values (controls how finely we search the parameter space)
                         lottery_probs_1 = df$lottery_probs_1[obs][[1]], # probabilities that the first outcome in each lottery occurs
                         lottery_probs_2 = df$lottery_probs_2[obs][[1]], # probabilities that the second outcome in each lottery occurs
                         lottery_payoffs_1 = df$lottery_payoffs_1[obs][[1]], # payoffs for the first outcome in each lottery
                         lottery_payoffs_2 = df$lottery_payoffs_2[obs][[1]], # payoffs for the second outcome in each lottery
                         sub_beliefs = c(3,3,3,3), # likert responses to the subjective probability debriefing questions;
                                                   # setting all responses to 3 produces unadjusted risk preference parameters
                         lottery_choice = df$lottery_choice[obs], # the observed lottery choice of the respondent
                         utility_function = "crra", # the utility function to use in the risk preference calculation (crra is currently the only option)
                         initial_wealth = .0000001, # initial wealth to assume; positive but arbitrarily close to zero,
                                                    # i.e., assuming the agent plays the lottery in isolation
                         returned_obj = "range") # returns the range of the implied crra interval (set to "midpoint" to get the midpoint of the implied range)

  print(risk_pref_unadjusted)
}
## [1] -2.00  0.03
## [1] -2.00  0.03
## [1] 0.19 2.00

## Warning in min(crra_vals$crra): no non-missing arguments to min; returning Inf

## Warning in max(crra_vals$crra): no non-missing arguments to max; returning -Inf

## [1]  Inf -Inf
## [1] 0.04 0.18
## [1] 0.04 0.18

## Warning in min(crra_vals$crra): no non-missing arguments to min; returning Inf

## Warning in max(crra_vals$crra): no non-missing arguments to max; returning -Inf

## [1]  Inf -Inf

## Warning in min(crra_vals$crra): no non-missing arguments to min; returning Inf

## Warning in max(crra_vals$crra): no non-missing arguments to max; returning -Inf

## [1]  Inf -Inf
## [1] -2.00  0.03
## [1] 0.19 2.00
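
The call above used neutral debriefing responses (all 3's), so the estimates are unadjusted. To obtain adjusted risk preferences, pass each respondent's own responses stored in df$subjective_beliefs to sub_beliefs. A minimal sketch, reusing the same arguments as above and requesting the midpoint of the implied CRRA interval instead of the full range:

# calculate adjusted risk preferences using each respondent's own debriefing responses
for(obs in 1:nrow(df)){
  risk_pref_adjusted <- adjust_rp(mc_reps = 100,
                        large_adjustment = .10,
                        small_adjustment = .05,
                        rp_lb = -2,
                        rp_ub = 2,
                        rp_resolution = .01,
                        lottery_probs_1 = df$lottery_probs_1[obs][[1]],
                        lottery_probs_2 = df$lottery_probs_2[obs][[1]],
                        lottery_payoffs_1 = df$lottery_payoffs_1[obs][[1]],
                        lottery_payoffs_2 = df$lottery_payoffs_2[obs][[1]],
                        sub_beliefs = df$subjective_beliefs[obs][[1]], # respondent's own likert responses
                        lottery_choice = df$lottery_choice[obs],
                        utility_function = "crra",
                        initial_wealth = .0000001,
                        returned_obj = "midpoint") # midpoint of the implied crra interval

  print(risk_pref_adjusted)
}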
