rel_capped_rne: Solve an RNE for a capped version of a game


View source: R/rne_capped.R

Description

In a capped version of the game we assume that after period T the state can no longer change and stays the same forever, i.e. from period T onward the players play a repeated game. For a given T the capped game has a unique RNE payoff. See also rel_T_rne.

Usage

rel_capped_rne(
  g,
  T,
  delta = g$param$delta,
  rho = g$param$rho,
  adjusted.delta = NULL,
  beta1 = g$param$beta1,
  tie.breaking = c("equal_r", "slack", "random", "first", "last", "max_r1", "max_r2",
    "unequal_r")[1],
  tol = 1e-12,
  add.iterations = FALSE,
  save.details = FALSE,
  save.history = FALSE,
  use.cpp = TRUE,
  T.rne = FALSE,
  spe = NULL,
  res.field = "eq"
)
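
A sketch of a typical call follows. The game-construction helpers rel_game, rel_param, rel_state, rel_compile and the accessor get_rne belong to the rest of the package and are not documented on this page; their exact signatures are assumed here from the package vignettes, so treat this as an illustrative sketch rather than a verified example.

library(RelationalContracts)

# Build a simple one-state effort game (illustrative only; the signatures of
# rel_state and the other helpers are assumptions, not documented on this page)
g <- rel_game("Mutual gift game")
g <- rel_param(g, delta = 0.9, rho = 0.1)
g <- rel_state(g, "x0",
               A1 = list(e1 = seq(0, 1, by = 0.25)),
               A2 = list(e2 = seq(0, 1, by = 0.25)),
               pi1 = ~ e2 - 0.5 * e1^2,
               pi2 = ~ e1 - 0.5 * e2^2)
g <- rel_compile(g)

# Solve the RNE of the game capped after T = 50 periods
g <- rel_capped_rne(g, T = 50)

# Extract the equilibrium payoffs and equilibrium-path action profiles
get_rne(g)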

Arguments

g

The game

T

The number of periods in which new negotiations can take place.

delta

the discount factor

rho

the negotiation probability

adjusted.delta

the adjusted discount factor (1-rho)*delta. Can be specified instead of delta.
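
For example (plain arithmetic restating the relation above; the game object g is hypothetical):

delta <- 0.9                         # discount factor
rho   <- 0.1                         # negotiation probability
adjusted.delta <- (1 - rho) * delta  # 0.81

# Equivalent ways to parameterise the call:
# rel_capped_rne(g, T = 50, delta = 0.9, rho = 0.1)
# rel_capped_rne(g, T = 50, rho = 0.1, adjusted.delta = 0.81)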

beta1

the bargaining weight of player 1. By default equal to 0.5. Can also be initially specified with rel_param.

tie.breaking

A tie breaking rule applied when multiple action profiles with the same joint payoff U could be implemented on the equilibrium path. An example call is sketched after this list. Can take the following values:

  • "equal_r" (DEFAULT) prefer actions that in expectation move to states with more equal negotiation payoffs.

  • "slack" prefer the action profile with the highest slack in the incentive constraints

  • "random" pick randomly from all eligible action profiles

  • "max_r1" pick action profiles that in moves to states with highest negotiation payoff for player 1.

  • "max_r2" pick action profiles that in moves to states with highest negotiation payoff for player 2.

tol

Due to numerical inaccuracies the calculated incentive constraints for some action profiles may appear violated even though with exact computation they would hold, yielding unexpected results. We therefore also allow action profiles whose numeric incentive constraint is violated by no more than tol. The default is tol = 1e-12 (see Usage).

add.iterations

if TRUE, just add T further iterations to the previously computed capped RNE or T-RNE.
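
For example, a capped RNE could first be computed and then refined with additional iterations (a sketch; g is the hypothetical compiled game from above):

g <- rel_capped_rne(g, T = 20)                         # cap after 20 periods
g <- rel_capped_rne(g, T = 20, add.iterations = TRUE)  # add 20 more iterations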

save.details

if set to TRUE, details of the equilibrium are saved and can be analysed later by calling get_rne_details. For an example, see the vignette for the Arms Race game.
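
A sketch of saving and retrieving these details (g is again the hypothetical game from above; get_rne_details is documented on its own help page):

g <- rel_capped_rne(g, T = 50, save.details = TRUE)
details <- get_rne_details(g)
head(details)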

save.history

if set TRUE, also saves the results for intermediate values of T.

