knitr::opts_chunk$set(echo = TRUE, error=TRUE, cache=TRUE)
library(dyngame)
library(RSGSolve)
library(dplyr)
library(R.cache)

This RMarkdown document contains the code that replicates the analysis in Section 4.4 of Goldlücke and Kranz (2017).

We numerically compare the algorithm of Goldlücke and Kranz (2017) (GK) for solving stochastic games with monetary transfers to the algorithm of Abreu, Brooks, and Sannikov (2016) (ABS) for solving stochastic games with just a public correlation device.

The ABS algorithm is implemented in the C++ library SGSolve by Ben Brooks (http://www.benjaminbrooks.net/software.shtml); an R interface is provided by the package RSGSolve (https://github.com/skranz/RSGSolve).

The GK algorithm is implemented in the R package dyngame: https://github.com/skranz/dyngame. The dyngame package also contains links to RSGSolve that make it easy to compare the equilibrium payoff sets with and without monetary transfers.

Install and Load packages

If you have not yet installed all required R packages, first install dyngame and RSGSolve as explained on their GitHub sites linked above (a hedged installation sketch follows the CRAN commands below). Furthermore, we need to install the following packages from CRAN:

install.packages("dplyr")
install.packages("ggplot2")
install.packages("R.cache")

To load the installed R libraries needed to run the examples, execute the following code:

library(dyngame)
library(RSGSolve)
library(dplyr)
library(ggplot2)
library(R.cache)

Two State PD Game

Payoff matrix

We first compare the two algorithms on the simple two-state PD game from ABS (p. 11).

The following function specifies the relevant variables to solve the game with transfers using the dyngame package.

abs.pd.game = function(delta=2/3) 
{

  # act.fun specifies the action set for a given state xv
  # val = numerical values of actions
  # lab = labels of actions
  act.fun = function(xv,m=NULL) {
    restore.point("act.fun")
        val = list(1:2,1:2)
        lab = list(c("C","D"),c("C","D"))
    list(val=val,lab=lab, i=c(1,2))
  }

  # Stage game payoff function
  # Specifies the stage game payoffs as a function
  # of the action profile av and state xv
  # The function must be vectorized, i.e. avm and xvm
  # are matrices of action profiles and states
  g.fun = function(avm,xvm,m=NULL) {
    restore.point("g.fun");

        # Payoff matrices state 1
        g1 = matrix(c(2, 3, -1,0),2,2) # player 1
        g2 = t(g1) # player 2

        g = cbind(g1[avm], g2[avm])

        # In state 2, payoffs are larger by 2
        g[xvm==2] = g[xvm==2] + 2
        return(g)
  }

  # State transitions
  # For a matrix of action profiles and states specifies
  # the matrix of state transitions
  tau.fun = function(avm,xvm,m=NULL) {
    #restore.point("tau.fun")

    rownames(avm)=rownames(xvm)=NULL
    # set default of 1/2

    tau = matrix(1/2,NROW(avm),m$nx)
    ## CC in state 1
    row = avm[,1] == 1 & avm[,2] == 1 & xvm == 1
    tau[row,] = c(1/3, 2/3)
    ## DD in state 1
    row = avm[,1] == 2 & avm[,2] == 2 & xvm == 1
    tau[row,] = c(1/3, 2/3)

    ## CC in state 2
    row = avm[,1] == 1 & avm[,2] == 1 & xvm == 2
    tau[row,] = c(2/3, 1/3)
    ## DD in state 2
    row = avm[,1] == 2 & avm[,2] == 2 & xvm == 2
    tau[row,] = c(2/3, 1/3)
    tau
  }

  # return required information for the dynamic game
  list(n=2,
       delta=delta,
       xv.val = list(1:2), # states
       act.fun=act.fun,
       g.fun=g.fun,
       tau.fun=tau.fun
  )
}

The specification using functions to compute action sets, payoffs and transition probabilities may look a bit more complicated than necessary for such a small game. Yet, for more complex games, this approach is quite useful.
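
As a small illustration (not part of the original replication code), we can call the game's vectorized functions directly on a matrix of action profiles and states. The stub list passed as m only supplies the number of states nx that tau.fun needs; this stub is an assumption made purely for illustration and presumes dyngame (and its restore.point helper) is loaded:

# Illustration only: evaluate the vectorized payoff and transition functions
game = abs.pd.game()
avm = rbind(c(1,1), c(2,2))   # action profiles (C,C) and (D,D)
xvm = matrix(c(1,2), ncol=1)  # state 1 in row 1, state 2 in row 2
game$g.fun(avm, xvm)          # stage game payoffs of both players, one row per profile
game$tau.fun(avm, xvm, m=list(nx=2)) # transition probabilities to states 1 and 2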

Solving the game with transfers

The following code initializes and solves the stochastic game with transfers and shows the solution.

library(dyngame)
# init game
m = init.game(abs.pd.game(delta=0.7))
# solve game with transfers
sol = solve.game(m,plots = FALSE, verbose = FALSE)
plot.payoff.set(state=1, sol,all.iter = FALSE) 

The plot shows the equilibrium payoff set in state 1. Under the assumptions of GK (risk neutrality, monetary transfers with deep pockets, and money burning), the equilibrium payoff set of a two-player stochastic game is always a triangle: it is bounded by the line of the highest implementable joint payoff, along which transfers can redistribute payoffs, and by the requirement that each player receives at least her punishment payoff, while money burning implements the payoffs below the frontier.

We can also inspect the solution to see, for each state, the highest joint payoffs, the punishment payoffs, and the action profiles played in the equilibrium regime and in the punishment regimes.

print.sol(sol)

Solving the game without transfers

The following code solves the game without transfers and shows both payoff sets for state 1:

library(RSGSolve)
rsg = dyngame.to.rsg(m)
rsg.sol = solveSG(rsg=rsg)
# only final revolution
plot.payoff.set(state=1, sol, rsg.sol,all.iter = FALSE) 

# all iterations of ABS
plot.payoff.set(state=1, sol, rsg.sol) 

Same critical discount factor

As a side note: with some experimentation, it appears that the critical discount factor for implementing cooperation in both states is the same with and without transfers, at around $\delta = 0.357$.

We first solve the game for $\delta = 0.357$. With and without transfers, we can sustain mutual cooperation in both states:

# init game
my.game = abs.pd.game(delta=0.357)
m = init.game(my.game = my.game)
# solve game with transfers
sol = solve.game(m)
print.sol(sol)

# solve game without transfers
rsg = dyngame.to.rsg(m)
rsg.sol = solveSG(rsg=rsg)

# Plot payoff sets
plot.payoff.set(state=1, sol, rsg.sol)

Now we solve both games for a slightly smaller discount factor of $\delta = 0.356$. With and without transfers, only mutual defection can be implemented:

# Smaller discount factor
# init game
my.game = abs.pd.game(delta=0.356)
m = init.game(my.game = my.game)
# solve game with transfers
sol = solve.game(m)
print.sol(sol)

# solve game without transfers
rsg = dyngame.to.rsg(m)
rsg.sol = solveSG(rsg=rsg)

# Plot payoff sets
plot.payoff.set(state=1, sol, rsg.sol)

Speed comparison for PD Game

The function stop.game.solve.time measures the time it takes to solve a game with both the GK and the ABS algorithm.

game = abs.pd.game(delta=0.7)
stop.game.solve.time(game, times=3)

The small two state PD game can be solved very quickly with both algorithms.

Cournot Game

This is the dyngame specification of the Cournot game with stochastic reserves described in Section 4.3 of Goldlücke and Kranz (2017).

# type of reserve increment
DET = 1  # deterministic reserve increment
UNIF = 2 # uniformly distributed reserve increment

# type of solution
COLL = 1 # collusive solution (optimal dynamic game equilibrium)
MON = 2  # integrated monopoly solution


# a quicker translation function from states vectors to state indices
xv.to.x = function(m,xvm) {
  return( (xvm[,1])*m$xv.dim[[2]]+xvm[,2]+1)
}
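
To see what xv.to.x computes, consider a hypothetical tiny game in which each firm's reserve can take the values 0, 1 or 2, so that each state variable has 3 values. The list m.stub below is only an illustrative stand-in for the model object; we assume its field xv.dim holds the number of values of each state variable, which is all xv.to.x uses:

# Illustration only: map the state value vector (1,2) to a state index
m.stub = list(xv.dim = c(3,3))          # 3 possible values per state variable
xv.to.x(m.stub, matrix(c(1,2), nrow=1)) # returns 1*3 + 2 + 1 = 6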

# Cournot duopoly with stochastic water reserves. Example from paper
cournot.reserves.game = function(
           x.cap=20, K=20, x.inc=2, delta=2/3,inc.min=0,inc.max=2,
           sol.type=COLL,inc.type=DET,para=NULL) 
{

  # If parameters are given as a list just copy them all
  # into the local environment
  if (!is.null(para)) {
    copy.into.env(source=para)
  }

  # act.fun specifies the action set for a given state xv
  # val = numerical values of actions
  # lab = labels of actions
  act.fun = function(xv,m=NULL) {
    restore.point("act.fun")
        val = list(0:xv[1],0:xv[2])
        lab = val

    list(val=val,lab=lab, i=c(1,2))
  }

  # States can be grouped into sets of states with same
  # action sets.
  # In our game each state has a different action set
  x.group = function(xvm,m=NULL) {
    #x.group = rep(1,NROW(xvm))
    x.group = 1:NROW(xvm)
    x.group
  } 

  # Stage game payoff function
  # Specifies the stage game payoffs as a function
  # of the action profile av and state xv
  # The function must be vectorized, i.e. avm and xvm
  # are matrices of action profiles and states
  g.fun = function(avm,xvm,m=NULL) {
    restore.point("g.fun");

        rownames(avm)=rownames(xvm)=NULL

    # compute outputs and prices
        q = avm
    P = K-q[,1]-q[,2]
    P[P<0]=0
    # return profits
    pi = P*q 
    pi
  }

  # Some additional statistics of the solution that we might be interested in
  # Here we want to know about price, total output, joint profits, consumer surplus,
  # and total welfare 
  extra.sol.fun = function(avm,xvm,m=NULL) {
    q   = avm
    P   = K-q[,1]-q[,2]
    P[P<0]=0
    pi = P*q 
    Q  = q[,1]+q[,2]
    CS = (K-P)*pmin(Q,K)*(1/2)
    Pi = pi[,1]+pi[,2]
    W  = CS+Pi


    mat = cbind(q,P,Q,CS,Pi,W)
    colnames(mat)[1:2]=c("q1","q2")
    return(mat)
  }

  # We do not consider a multistage game
  x.stage.fun = NULL

  # State transitions
  # For a matrix of action profiles and states specifies
  # the matrix of state transitions
  tau.fun = function(avm,xvm,m=NULL) {
    #restore.point("tau.fun")

        rownames(avm)=rownames(xvm)=NULL
    tau = matrix(0,NROW(avm),m$nx)          

        # Firms' reserves are restored by a fixed amount x.inc
        if (inc.type==DET) {

      # matrix of resulting state values
            new.xvm = with.floor.and.ceiling(xvm - avm + x.inc ,floor=0,ceiling=x.cap)

            # Translate state values to state indices
      new.x = xv.to.x(m,new.xvm)

      # Set transition probabilities to 1 for resulting states 
            ind.mat = cbind(1:NROW(xvm),new.x)
            tau[ind.mat]=1

        # Alternatively, each firm's reserve is restored by an independently
        # uniformly distributed integer amount between inc.min and inc.max
        } else if (inc.type==UNIF) {

      # loop through all combinations of independently
      # distributed restore amount draws
            for (x.add1 in inc.min:inc.max) {
                for (x.add2 in inc.min:inc.max) {
                    x.add = c(x.add1,x.add2)                
                    xvm1 = with.floor.and.ceiling(xvm[,1] - avm[,1] + x.add1 ,floor=0,ceiling=x.cap)
                    xvm2 = with.floor.and.ceiling(xvm[,2] - avm[,2] + x.add2 ,floor=0,ceiling=x.cap)

                    # Translate state values to state indices
          new.x = xv.to.x(m,cbind(xvm1,xvm2))

          # Add probability of restore draw to transition matrix
                    ind.mat = cbind(1:NROW(xvm),new.x)
                    tau[ind.mat]=tau[ind.mat] +1/((inc.max-inc.min+1)^2)
                }
            }   
        }

    tau
  }

  integrated = sol.type == MON

  # return required information for the dynamic game
  list(n=2,
       delta=delta,
       integrated = integrated,
       xv.val = list(0:x.cap,0:x.cap),
       act.fun=act.fun,
       g.fun=g.fun,
       tau.fun=tau.fun,
       x.stage.fun = x.stage.fun,
       x.group=x.group,
       extra.sol.fun=extra.sol.fun)
}
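
As a quick sanity check (not part of the original analysis), we can verify that the transition probabilities returned by tau.fun sum to one in each row. The list m.stub is again only an illustrative stand-in for the model object: we assume nx is the total number of states and xv.dim holds the number of values per state variable, as used by xv.to.x above.

# Illustration only: tiny game with reserves in 0..2, i.e. 3 x 3 = 9 states
game = cournot.reserves.game(x.cap=2, K=2, x.inc=1, inc.type=DET)
m.stub = list(nx=9, xv.dim=c(3,3))
avm = rbind(c(0,0), c(1,1))    # two sample action profiles
xvm = rbind(c(1,1), c(2,2))    # two sample states (current reserves)
rowSums(game$tau.fun(avm, xvm, m=m.stub)) # each row should sum to 1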

Comparing payoff sets with and without transfers in the Cournot game

Here we compare the payoff sets for the Cournot game:

library(dyngame)
library(RSGSolve)
size = 5
game = cournot.reserves.game(delta=0.7,x.cap=size,K=size,inc.min=0,inc.max=2,inc.type=UNIF)
m = init.game(game)
# solve game with transfers
sol = solve.game(m)

# solve game without transfers
rsg = dyngame.to.rsg(m)
rsg.sol = cachedEval(solveSG(rsg=rsg),key = "cournot.set5")

Plot the payoff sets:

plot.payoff.set(state=1, sol, rsg.sol,all.iter = FALSE,show.np.e = FALSE)

Plot a zoomed-out version with all candidate points from all iterations of the ABS algorithm:

plot.payoff.set(state=1, sol, rsg.sol, show.np.e = FALSE)

Speed comparisons for the Cournot game

The following function helps to create and solve the Cournot game for a given size and discount factor. It measures the time needed to solve the game and returns the results as a data frame. The argument times allows replicating the solution process multiple times to get a better estimate. If use.cache is TRUE, the timing information of previous runs is stored in and retrieved from a cache directory; if you set times larger than in all previous runs, only the additional runs are conducted.

test.cournot.game = function(size = 3,delta = 0.7, just.gk=FALSE, times=1, run.id=1:times, use.cache=TRUE) {
  restore.point("test.cournot.game")

  # generate game definition with specifications
  inc.max = ceiling(size/3)
  game = cournot.reserves.game(delta=delta,x.cap=size,K=size,inc.min=0,inc.max=inc.max,sol.type=COLL,inc.type=UNIF)

  # solve game with and without transfers (if just.gk=FALSE)
  # and stop the runtime
  res = stop.game.solve.time(game, just.gk=just.gk, times=times,run.id = run.id, use.cache=use.cache)

  # combine results to a data frame res
  if (just.gk) {
    res = res %>% group_by(delta) %>% summarize(gk=mean(gk), runs=n(), gk.has.eq=all(gk.has.eq),numStates=first(numStates), numA = first(numA))
  } else {
    res = res %>% group_by(delta) %>% summarize(gk=mean(gk),abs=mean(abs), factor=abs/gk, runs=n(), gk.has.eq=all(gk.has.eq), abs.solved=sum(abs.solved), numStates=first(numStates), numA = first(numA))

  }

  res$size = size
  res
}
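
For readers who want to see how such caching can work in principle, here is a minimal, hedged sketch using loadCache and saveCache from R.cache. It is not the implementation behind stop.game.solve.time; the function name timed.solve and the cache key are made up for illustration:

# Illustration only: cache the elapsed time needed to solve a game
timed.solve = function(game, run.id=1) {
  key = list(delta=game$delta, run.id=run.id)
  res = loadCache(key)   # NULL if no cached result exists yet
  if (!is.null(res)) return(res)
  m = init.game(game)
  res = system.time(solve.game(m, plots=FALSE, verbose=FALSE))[["elapsed"]]
  saveCache(res, key)
  res
}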

Here we compare the solution speed of the ABS and GK algorithms for Cournot games of different sizes:

library(dyngame)
library(RSGSolve)
res = lapply(2:6, test.cournot.game, delta=0.7, times=3)
res = bind_rows(res)
res

For larger discount factors, computing the payoff sets without transfers takes considerably longer, while computing them with transfers becomes even faster.

res = lapply(2:4, test.cournot.game, delta=0.9, times=2)
bind_rows(res)

With transfers, we can solve stochastic games with a considerably larger number of states fairly quickly. Of course, the curse of dimensionality will also kick in at some point...

res = lapply(2:20, test.cournot.game, delta=0.9, just.gk=TRUE, times=2)
res = bind_rows(res)
res

The column numA contains the total number of action profiles summed over all states, which seems a natural measure of the total size of a game. Here we plot the time needed to solve the game with transfers against this size measure:

library(ggplot2)
qplot(data=res, x=numA,y=gk,xlab="Total number of action profiles",ylab="Seconds to solve") + geom_line(size=1.5, color="blue") + geom_point(size=1.5, color="black") + theme_bw()

