```r
#source("components/install.R")

library("methods")
library("knitr")
basename <- "crashes"
opts_chunk$set(fig.path = paste("components/figure/", basename, "-", sep=""),
               cache.path = paste("components/cache/", basename, "/", sep=""))
opts_chunk$set(cache = 2)
opts_chunk$set(tidy=FALSE, warning=FALSE, message=FALSE, 
               comment = NA, verbose = TRUE, echo=FALSE)
# PDF-based figures
opts_chunk$set(dev='pdf')
fig.cache <- TRUE

library("rmarkdown")
library("pdgControl")
library("reshape2")
library("plyr")
library("ggplot2")
library("data.table")
library("pander")
library("cboettigR")
library("ggthemes")
library("snowfall")

options(digits=2)

theme_set(theme_tufte())
source("components/crash-analysis.R")
```

Introduction
============

Ecosystems are dynamic and exhibit rich patterns of variability in both time and space. In designing management policies for ecosystems, managers need to decide how much of that variation to respond to. Managers could try to track variations in ecosystem dynamics very closely by setting policies that are extremely responsive to the environment. Many theoretical studies that seek to identify optimal policies for exploited populations and communities adopt this approach and largely ignore the challenges that would be involved in implementing such recommendations [e.g., @Reed1979; @Neubert2003; @Sethi2005; @Halpern2011]. However, the policy process can often be much more sluggish to respond to variations in ecosystem dynamics [@Walters1978; @Armsworth2010]. Moreover, stakeholders impacted by ecosystem management may prefer some stability and not want to deal with continually changing management recommendations [@Biais1995; @Armsworth2003; @Patterson2007a; @Patterson2007b; @Sanchirico2008]. In other words, whatever gains are available from fine-tuning a policy prescription to reflect environmental variation more closely should be traded off against potential costs associated with the more interventionist approach to management this would require.

```r
tuna <- read.csv("components/data/tuna.csv")
tuna<-melt(tuna, id="year")
ggplot(tuna, aes(year, value)) + geom_point() +
  facet_wrap(~variable, ncol=1, scale="free_y") + 
  ylab("Bluefin Tuna harvest (tonnes)") + theme_bw()
```

To illustrate these concepts, Figure 1 shows an example from fisheries management. On the left axis, the figure shows a time series for the estimated population size, represented as spawning stock biomass, of west Atlantic bluefin tuna (Thunnus thynnus, henceforth bluefin) from a recent stock assessment [@ICCAT20XX]. On the right axis the figure also shows the catch quota for the stock that was set by the relevant management agency [@ICCAT20XX]. Despite the estimated population size declining by XX% between YYYY and ZZZZ, the quota was not changed during this period. Instead, the quota only changed occasionally and in between times was left unaltered. Fishery management decisions regarding this species can be highly contentious [@Safina1998; @Sissenwine1998; @Porch2005]. Moreover, the stock is fished by fleets from many nations, with quotas being set by a multilateral management agency through a process of negotiation. As such, we might reasonably anticipate that for this species there could be substantial transaction costs involved in reaching agreement over any quota change, which could contribute to the observed quota stability. More generally, reviews by @Biais1995 and @Patterson2007a document many cases where changes in catch quotas that a management agency set were more modest than changes that would be recommended just by considering variations in stock abundance.

We use a classic fisheries management question to examine how accounting for costs of policy adjustment can change optimal policies (see also @Ludwig1980, @Feichtinger1994, @Wirl1999). We focus on how harvest quotas for a stochastically varying fish population can be chosen to maximize the net present value of a fishery. Our formulation and solution method largely follow Reed's classic treatment on this question, a treatment repeated widely in bioeconomic textbooks. We note that while later work has extended this treatment to deal with a variety of other issues (e.g. @Sethi2005; @Singh2006; @McGough2009), we start from the classical model for simplicity of presentation and analysis.

With his formulation, Reed showed that a constant escapement policy could be optimal under certain conditions. Such a policy involves choosing annual quotas that are perfectly responsive to recruitment variation in a fish stock. In poor recruitment years, the quota is set to zero and no fishing is allowed. But any time there is a good recruitment pulse, a quota is set that allows the fishery to exactly compensate through harvesting, thereby maintaining the optimal escapement level. However, in that analysis, Reed did not account for any costs of policy adjustment, which for such a responsive management strategy potentially could be large.
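Reed's policy can be written compactly in terms of a target escapement level, which we denote $S^{\ast}$ purely for illustration here (the symbol is not used elsewhere in this paper):

$$h_t = \max(N_t - S^{\ast}, 0) \,,$$

so that the stock remaining after harvest equals $S^{\ast}$ whenever recruitment exceeds it, and no harvest is taken otherwise.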

As the example in Figure 1 makes clear, management policies will rarely be as responsive as this constant escapement policy assumes. In this paper, we explore conditions under which this lack of responsiveness may be a rational response on the part of managers. Specifically, we consider a case where managers seek to balance the benefits, in terms of increased profits from fishing, of tracking recruitment variations more finely against the growing costs associated with adjusting policies quickly enough to do so. The policy adjustment costs involved could reflect pure administrative transaction costs, or preferences held by fishermen, fish processing plants or other stakeholders for less variable quotas. In seeking to account for these policy adjustment costs, we recognize that we do not know what functional form they should take and that estimating it would require substantial empirical work. Therefore, we scope three candidate functional forms that represent different assumptions about how these costs operate, to determine whether the results we obtain are sensitive to such differences.

Methods
=======

Fish population dynamics (state equation)
-----------------------------------------

We will assume Beverton-Holt dynamics with multiplicative environmental noise

\begin{equation} N_{t+1} = Z_t \frac{A (N_t - h_t)}{1 + B (N_t - h_t)}, \label{eq:state_equation} \end{equation}

where $N_t$ is the stock size, $h_t$ is the harvest level, $Z_t$ gives the stochastic shocks, which we assume are lognormally distributed, and $A$ and $B$ are positive constants.
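As a concrete illustration, the following minimal R sketch simulates this state equation forward under a fixed harvest level. The parameter values here are placeholders chosen only for illustration; they are not the calibrated values used in the analyses below.

```r
# Minimal simulation of Eqn (1) under a fixed harvest level.
# A, B, sigma_g and h are illustrative placeholders, not the paper's values.
set.seed(1)
A <- 2; B <- 0.1; sigma_g <- 0.2; h <- 2
N <- numeric(50); N[1] <- 10
for (t in 1:49) {
  escapement <- max(N[t] - h, 0)                 # stock left after harvest
  Z <- rlnorm(1, meanlog = 0, sdlog = sigma_g)   # multiplicative shock
  N[t + 1] <- Z * A * escapement / (1 + B * escapement)
}
plot(N, type = "l", xlab = "time", ylab = "stock size")
```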

We assume managers set an annual quota for harvesting $h_t$ (the control variable) after observing the stock size that year $N_t$ (the state variable), but while still being uncertain about future environmental conditions and stock sizes. Through time, this gives a time path of management actions ${\bf h}=(h_1,h_2,\dots)$ that depend on the stock sizes that were observed (a state dependent control rule).

We assume that managers choose annual quotas to maximize the expected net present value (NPV) of the fishery. We take as a base case the situation where there are no costs associated with policy adjustment and the managers' objective is

\begin{equation} \max_{{\bf h}}\mathbf{E} ( NPV_{0} )=\max_{{\bf h}} \sum_{t=0}^\infty \mathbf{E} \left( \displaystyle \frac{\Pi_0(N_t,h_t)}{(1+\delta)^{t+1}} \right) \label{eq:objective} \end{equation}

where $\mathbf{E}$ is the expectation operator, $\delta>0$ is the discount rate and $\Pi_0$ is the net revenue from operating the fishery in a given year. In this base case, we assume the annual net revenue from fishing is

\begin{equation} \Pi_0(N_t,h_t) = p h_t - c_0 E_t \label{eq:annual_netrevenue} \end{equation}

where $E_t$ represents fishing effort. We assume catch is proportional to stock size and effort expended fishing that year, $h_t = q E_t N_t$, where the constant $q>0$ is the catchability coefficient. In Eqn. \eqref{eq:annual_netrevenue}, $p$ is the price per unit harvest and $c_0$ the cost of fishing; to simplify the presentation of results, we assume that these are constants with $p>0$ and $c_0>0$. (Examples shown here use $p =$ `r price` and $c_0 =$ `r c0`; complete code is provided in the supplementary materials to re-run these analyses with arbitrarily specified values and replot the figures shown.)
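Substituting the catch equation $h_t = q E_t N_t$ into Eqn. \eqref{eq:annual_netrevenue} expresses the annual net revenue purely in terms of the state and control,

$$\Pi_0(N_t,h_t) = p h_t - \frac{c_0 h_t}{q N_t} \,,$$

so that the cost of landing a given harvest rises as the stock is depleted.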

Taken together this objective function \eqref{eq:objective} and the state equation \eqref{eq:state_equation} define a stochastic dynamic programming problem that we solve using backwards recursion via Bellman's equation. We denote the resulting state-dependent, optimal control as $h_0^{\ast}$.
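For concreteness, one standard way to write the backward recursion, with $f(S) = AS/(1+BS)$ denoting the deterministic part of Eqn. \eqref{eq:state_equation} and values discounted to the current period (a sketch of the form used, up to the constant discount factor in Eqn. \eqref{eq:objective}), is

$$V_t(N_t) = \max_{h_t} \left\{ \Pi_0(N_t, h_t) + \frac{1}{1+\delta}\, \mathbf{E}_{Z_t}\!\left[ V_{t+1}\!\big(Z_t f(N_t - h_t)\big) \right] \right\} \,,$$

iterated backwards from the terminal time, with the optimal control $h_0^{\ast}$ given by the maximizing harvest at each state.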

We solve this problem on a finite time horizon of $T =$ `r OptTime` using value iteration [@Mangel1988; @Clark2000].

Costs of policy adjustment
--------------------------

We compare this base case to three alternative problem formulations, each reflecting different plausible functional forms that costs of policy adjustment could take. In each, we assume managers can adjust the quota set in the fishery $h_t$ in a given year and that any policy adjustment costs are associated with changes to this control variable. In each case, we assume there is no cost in initially setting the harvest policy at time 0.

First we assume that policy adjustment costs are directly proportional to the magnitude of the change in policy being proposed, such that larger changes to annual harvesting quotas incur greater policy adjustment penalties. Specifically, we replace $\Pi_0$ in the $NPV_0$ equation with

$$\Pi_{1}(N_t,h_t, h_{t-1}) = \Pi_0 - c_1 | h_t - h_{t-1} | \,.$$

Next we continue to assume that policy adjustment costs depend on the magnitude of the change in policy being proposed. However, we consider a case where this dependence is nonlinear with big changes in policy being disproportionately expensive:

$$\Pi_{2}(N_t,h_t, h_{t-1}) = \Pi_0 - c_2 ( h_t - h_{t-1})^2 \,.$$

Finally, we assume instead that there is a fixed cost associated with any adjustment of policy, regardless of how large that adjustment might be

$$\Pi_{3}(N_t,h_t, h_{t-1}) = \Pi_0 - c_3 (1-\mathbf{I}(h_t, h_{t-1})) \,,$$

where indicator function $\mathbf{I}(h_t, h_{t-1})$ takes the value 1 if $h_t=h_{t-1}$ and zero otherwise. For each $\Pi_i$, we can then define a new objective function $NPV_i$ similar to that in Eqn. \eqref{eq:objective}.
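The three penalty terms are simple to express directly. A minimal R sketch is given below; the coefficient values are illustrative only, not the calibrated values used later.

```r
# Penalty terms subtracted from the base net revenue Pi_0; c1, c2, c3 illustrative.
c1 <- 0.5; c2 <- 0.2; c3 <- 1
penalty_L1    <- function(h, h_prev) c1 * abs(h - h_prev)   # Pi_1: linear
penalty_L2    <- function(h, h_prev) c2 * (h - h_prev)^2    # Pi_2: quadratic
penalty_fixed <- function(h, h_prev) c3 * (h != h_prev)     # Pi_3: fixed fee

# Example: cost of moving the quota from 2 to 3 under each form
sapply(list(L1 = penalty_L1, L2 = penalty_L2, fixed = penalty_fixed),
       function(f) f(3, 2))
```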

We note that the fixed cost has conceptual analogues to the set-up cost of @Reed1974 and @Spulber1982 but here the fee is associated with a change in harvesting ($h_t\neq h_{t-1}$) and not with harvesting per se ($h_t>0$) as in those studies.

Taken together with the state equation Eqn. \eqref{eq:state_equation}, each of these new objective functions defines a different stochastic dynamic programming problem. Again, we solve them numerically using backwards recursion. To include costs of policy adjustment, we expand the state space to include both the current stock size and the management action taken on the previous time step $(N_t,h_{t-1})$. For each new objective function $\max NPV_i$, we denote the corresponding optimal control policy $h_i^*$.
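A minimal sketch of how such a problem can be solved by value iteration on the expanded state $(N_t, h_{t-1})$ is given below. This is not the pdgControl implementation used to generate the results; the grids, parameter values, and the use of the linear penalty $\Pi_1$ are all illustrative assumptions.

```r
# Illustrative value iteration on the expanded state (N_t, h_{t-1}).
# All parameters and grids are placeholders, not the calibrated values used here.
A <- 2; B <- 0.1; sigma_g <- 0.2              # growth and noise (assumed)
p <- 10; c0 <- 1; c1 <- 0.5; delta <- 0.05    # economics (assumed), q = 1
N_grid <- seq(0, 15, length.out = 31)
h_grid <- seq(0, 6, length.out = 13)
Tmax   <- 30

f <- function(N, h) { S <- pmax(N - h, 0); A * S / (1 + B * S) }

# discretized distribution of next-period stock given current (N, h)
trans_prob <- function(N, h) {
  mu <- f(N, h)
  if (mu <= 0) return(c(1, rep(0, length(N_grid) - 1)))
  pr <- dlnorm(N_grid / mu, meanlog = 0, sdlog = sigma_g)
  pr / sum(pr)
}

profit <- function(N, h, h_prev) {
  h <- min(h, N)                                          # cannot exceed stock
  p * h - c0 * h / max(N, 1e-6) - c1 * abs(h - h_prev)    # Pi_1 penalty
}

V <- matrix(0, length(N_grid), length(h_grid))  # V[stock, previous harvest]
for (t in seq_len(Tmax)) {
  V_new <- V
  for (i in seq_along(N_grid)) {
    for (j in seq_along(h_grid)) {
      vals <- vapply(seq_along(h_grid), function(k) {
        EV <- sum(trans_prob(N_grid[i], h_grid[k]) * V[, k])
        profit(N_grid[i], h_grid[k], h_grid[j]) + EV / (1 + delta)
      }, numeric(1))
      V_new[i, j] <- max(vals)
    }
  }
  V <- V_new
}
```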

In addition, in the Supplementary Material we compare our results with a more conventional fisheries economics formulation in which additional costs are applied to the control variables themselves, as opposed to adjustments to the controls: $\Pi_4 (N_t,h_t) = p h_t - c_0 E_t - c_4 E_t^2$. Additional costs of this form tend to have a smoothing effect on optimal quotas, but also change the long-term average stock size or quota size that is optimal.

Choice of parameters
--------------------

Each policy adjustment cost function is characterized in terms of a cost coefficient $c_i$. However, $c_i$ takes different units for each functional form. Therefore, if seeking to compare the relative effect of each penalty function on optimal management, it is unclear what parameter values should be used. To address this issue, we calibrate choices of $c_i$ so that each has a comparable impact on the optimal net present value of the fishery, broadly following the example of @Bovenberg2008.

Figure 2 shows the calibration graphically. Each curve plots the change in maximum expected NPV for a given penalty function as the penalty cost parameter is increased. In each case, the figure shows maximum expected NPV with policy adjustment costs as a proportion of the maximum expected NPV available in the basic problem $NPV_0({\bf h_0^*})$ without policy adjustment costs.

The horizontal line indicates a reduction of 25% from the maximum expected value in the absence of policy adjustment costs, $NPV_0({\bf h_0^*})$. Selecting the coefficient $c_i$ corresponding to this value in each functional form allows us to make consistent comparisons across the different functional forms of policy costs.

```r

relabel <- c(L1 = substitute(paste(Pi[1])), L2 = substitute(paste(Pi[2])), fixed = substitute(paste(Pi[3])))

ggplot(fees, aes(c2, (npv0-value)/npv0, lty=variable, color=variable)) + geom_line() + geom_hline(aes(yintercept=reduction), linetype=4) + xlab("Penalty coefficient") + ylab("Reduction in net present value") + scale_linetype_discrete(labels=relabel) + scale_color_discrete(labels=relabel)

write.csv(dt, "components/data/100reps.csv")
write.csv(fees, "components/data/figure2.csv")
```

To compare the impact of penalty functions on optimal management across
the different functional forms, we select penalty cost coefficients that
induce the same reduction in maximum expected NPV. For example, the
dashed vertical lines in Figure 2 indicate the cost coefficient $c_i$,
for each penalty function, such that the fishery is worth `r 100*(1-reduction)`% of its
unconstrained value when optimally managed.
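The calibration itself only requires a one-dimensional root find. The schematic R sketch below assumes a hypothetical helper `npv_under_penalty()` standing in for re-solving the dynamic program at a given penalty coefficient; it is stubbed here with a made-up response curve purely for illustration.

```r
# Schematic calibration of c_i; npv_under_penalty() is a hypothetical stand-in
# for re-solving the dynamic program at a given penalty coefficient.
npv0 <- 100                                            # illustrative baseline NPV
npv_under_penalty <- function(c) npv0 * exp(-3 * c)    # placeholder response curve
target_reduction <- 0.25                               # target fractional reduction

calibrate_ci <- function(target) {
  uniroot(function(c) (npv0 - npv_under_penalty(c)) / npv0 - target,
          interval = c(0, 10))$root
}
calibrate_ci(target_reduction)
```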

We focus our examination of the models on how the type and severity of
policy adjustment costs affects the optimal management strategy. To
simplify presentation of the results, we will illustrate cases where the
growth parameters are chosen so that the intrinsic rate of increase of
the fish population is 1 and the equilibrium biomass for the equivalent
deterministic model without harvesting is 10 (specifically, $A=$ `r pars[1]` and
$B=$ `r pars[2]` in Eqn \eqref{eq:state_equation}). To characterize environmental variability, we
assume multiplicative shock $Z_t$ is distributed log-normally with 
log standard deviation $\sigma_g$ = `r sigma_g`. In addition, we
show cases where $p=$ `r price`, $\delta=$ `r delta` and $c_0=$ `r c0`.

<!-- is chosen such that the
optimal stock size in the analogous deterministic problem is 5% larger
than that corresponding to the maximum sustainable yield in biomass
($c_0=XXX$). -->


Results
=======

Effect of policy adjustment costs on optimal quotas and stock sizes 
-------------------------------------------------------------------



```r

fig3_df <- as.data.frame(subset(dt, replicate=='rep_17'))
fig3_df <- fig3_df[c("time", "fishstock", "alternate", "harvest", "harvest_alt", "penalty_fn")]
fig3_df <- melt(fig3_df, id = c("time", "penalty_fn"))
fig3_df <- data.frame(fig3_df, baseline = fig3_df$variable)
variable_map <- c(fishstock = "fish_stock", alternate = "fish_stock", harvest = "harvest", harvest_alt = "harvest")
baseline_map <- c(fishstock = "penalty", alternate = "no_penalty", harvest = "penalty", harvest_alt = "no_penalty")
fig3_df$variable <- variable_map[fig3_df$variable]
fig3_df$baseline <- baseline_map[fig3_df$baseline]

harvest_fig3 <- subset(fig3_df, variable=="harvest")
fishstock_fig3 <- subset(fig3_df, variable=="fish_stock")

labeller <- function(variable,value){
    return(relabel[paste(value)])
}

ggplot(harvest_fig3, aes(time, value, col=baseline)) +
  geom_line(lwd=1) +
  facet_grid(penalty_fn~., labeller = labeller) + 
  labs(x="time", y="stock size", title = "Example Harvest Dynamics")  +
  scale_color_discrete(labels=c("Reed solution", "With penalty"), name="")

write.csv(harvest_fig3, "components/data/figure3.csv")
```

Figure 3 illustrates how the different forms of adjustment cost can impact the optimal harvest policy. Corresponding stock sizes, which are measured here before harvest ($h_t$ being determined after observing stock size $N_t$) and thus show less influence of the policy cost, can be seen in the supplement. Each panel is generated against the same sequence of random shocks so that they can be compared directly. The harvest policy chosen shows systematic deviations from the penalty-free optimum, depending on the nature of the cost. In each case, the optimal solution without any adjustment cost is shown against the policy induced by optimization under the given adjustment penalty (equivalent to a `r 100*reduction`% reduction in maximum expected NPV), which is overlaid in blue.

When considering the first formulation of policy adjustment costs (linear costs, $\Pi_1$), the first panel of Figure 3 shows a typical pattern, in which the optimal policy tends to avoid small policy adjustments, resulting in periods of constant policy followed by sudden bursts of adjustment. This results in a relatively step-like policy pattern. In contrast, the second formulation (quadratic costs, $\Pi_2$) disproportionately penalizes large policy adjustments. The corresponding optimal policy tracks all of the changes made by the cost-free policy, but with smaller magnitude. This results in a smoother $h_t$ curve, one that undershoots the larger oscillations seen in the cost-free optimum in favor of a policy that changes incrementally each year. Finally, the optimal policy for the third, 'fixed fee' formulation ($\Pi_3$) only makes large adjustments, as one might expect, because the magnitude of the adjustment made is not reflected in the resulting penalty.

The comparisons shown in Figure 3 are for one realization (i.e. a particular sequence of random number draws representing environmental variability) and are made for a particular magnitude of policy adjustment costs equal to `r reduction*100`% of the maximum expected net present value in the base case ($NPV_0({\bf h_0^*})$). Figure 4 generalizes to show how optimal harvest levels and corresponding optimal stock sizes are affected by the magnitude of the costs of policy adjustment. This time we show summary statistics across 50 replicate sets of environmental conditions. We show the impact of policy adjustment costs on the variance and on the autocorrelation of both harvests and stock sizes through time.

library("dplyr")

ggplot(filter(stats_df, variable != 'cross.correlation'), 
       aes(penalty_fraction, value, fill=penalty_fn, col=penalty_fn))  +
  stat_summary(fun.y = mean, 
               fun.ymin = function(x) mean(x) - sd(x), 
               fun.ymax = function(x) mean(x) + sd(x), 
               geom = "ribbon", alpha=0.3, colour=NA) +
  stat_summary(fun.y = mean, geom = "line") + 
  coord_cartesian(xlim = c(0, .3)) + 
  facet_wrap(~ variable, scale="free_y") + 
  scale_color_discrete(labels=relabel, name="Penalty function") + 
  scale_fill_discrete(labels=relabel, name="Penalty function") +
  xlab("Penalty size (as fraction of NPV0)")



write.csv(stats_df, "components/data/figure4.csv")
```

For example, when small policy adjustments cost little but large adjustments are expensive ($\Pi_2$), we see the smoothing signal that we might have expected (see also @Ludwig1980 for this particular case). As cost penalties increase in severity (i.e., $c_i$ increases), the variance in quotas through time decreases and the autocorrelation of quotas through time increases. This smoothing of the harvest quotas through time has knock-on effects for the corresponding optimal stock sizes, which also become more autocorrelated as the scaling on policy adjustment costs ($c_i$) is increased. The autocorrelation in stock sizes here reflects the lag involved in a more modest increase in harvests taking longer to eliminate the signal of strong recruitment events, when compared to a more immediately responsive constant escapement policy.

Interestingly, as suggested by the realization in Fig. 3, including a fixed cost of policy adjustment increases the variation in harvests through time. This is the opposite of a smoothing effect. Follow-on consequences for the other statistics are not as clear in this case.

Finally, the case where policy adjustment costs scale linearly with the size of the adjustment ($\Pi_1$) appears to be something of a middle-of-the-road strategy, in that increasing the severity of policy adjustment costs (increasing $c_i$) has little effect on the variance or autocorrelation of optimal harvest rates or stock sizes. Revealing the particular impact of policy adjustment costs of this type requires a more targeted summary statistic. Specifically, for each run we calculated the frequency with which the optimal policy involved maintaining a positive quota unaltered across multiple time steps. This type of policy is arguably the most commonly observed behavior in TAC management, but is one that is very rarely observed to be part of the optimal management strategy in the basic model without policy adjustment costs (Eqn. 3) or when optimizing against $\Pi_2$ or $\Pi_3$.

```r
fraction_no_shift <- function(x){
  # TRUE whenever consecutive quotas are identical
  noshift <- diff(x) == 0
  # drop the first point; TRUE when the second quota in each pair is strictly positive
  strictly_positive <- (x > 0.01)[-1]
  # fraction of consecutive pairs with an unchanged, nonzero harvest
  who <- noshift & strictly_positive
  sum(who) / length(who)
}

shifts <- dt[, fraction_no_shift(harvest), by=c("penalty_fn", "replicate")]
ave <- shifts[, mean(V1), by=penalty_fn]

no_shifts <- ave$V1
names(no_shifts) <- ave$penalty_fn
```

While still not common for the particular parameter combinations we examine, we find that positive, unaltered quotas through time are much more likely to occur when optimizing against $\Pi_1$, where policy adjustment costs scale linearly with the size of proposed policy changes: over the 100 replicate time series, the harvest policy is strictly positive and identical in consecutive intervals only `r no_shifts[["L2"]] * 100`% of the time for quadratic costs $\Pi_2$, compared with `r no_shifts[["L1"]] * 100`% with linear costs $\Pi_1$ and `r no_shifts[["fixed"]] * 100`% for fixed costs $\Pi_3$. Moreover, these occurrences increase in frequency as the severity of these costs ($c_i$) increases.

Consequences of policy adjustment costs
---------------------------------------

```r
profits <- dt[, sum(profit_fishing), by=c('penalty_fn', 'replicate') ]
costs <- dt[, sum(policy_cost), by=c('penalty_fn', 'replicate') ]
reed_profits <- dt[, sum(profit_fishing_alt), by=c('penalty_fn', 'replicate') ]
reed_costs <- dt[, sum(policy_cost_alt), by=c('penalty_fn', 'replicate') ]
setnames(profits, "V1", "profits")
setnames(reed_profits, "V1", "profits")

Reed <- cbind(reed_profits, costs = reed_costs$V1, Assumption = "Reed") 
Adj <- cbind(profits, costs = costs$V1, Assumption = "Adjustment penalty")

hist_dat <- melt(rbind(Adj, Reed), id=c("penalty_fn", "replicate", "Assumption"))
```


```r

assume <- levels(as.factor(hist_dat$Assumption))
labeller <- function(variable,value){
if (variable=='penalty_fn') {
    return(relabel[paste(value)])
  } else {
    return(assume[paste(value)])
  }
}

ggplot(hist_dat) + 
  geom_density(aes(value, fill=variable, color=variable), alpha=0.8)+
  facet_grid(Assumption~penalty_fn, labeller = labeller)

write.csv(hist_dat, "components/data/figure5.csv")
```

Next we examine the consequences either of ignoring policy adjustment costs when they are present or of assuming they are present when they are not. To do so, it helps to break down the different contributions to the maximum expected $NPV_i$ when employing the optimal policy $\mathbf{h_i^*}$ versus when employing the policy that would be optimal were there no costs to policy adjustment, $\mathbf{h_0^*}$. For example, if we take the first of the adjustment penalty forms, $\Pi_1$,

$$NPV_i( \mathbf{h_1^*} ) = \sum_{t=0}^\infty \left(\overbrace{ p h^*_{1,t}-c_0E^*_{1,t}}^{[1]}-\overbrace{c_1 | h^*_{1,t}- h^*_{1,t-1}|}^{[2]} \right) \displaystyle \frac{1}{(1+\delta)^t}$$

and

$$NPV_i( \mathbf{h_0^*} ) = \sum_{t=0}^\infty \left(\underbrace{ p h^*_{0,t}-c_0E^*_{0,t}}_{[3]}-\underbrace{c_1 | h^*_{0,t}- h^*_{0,t-1}|}_{[4]} \right) \displaystyle \frac{1}{(1+\delta)^t}$$

Histograms for the different contributions labelled $[1]-[4]$ here, taken across 100 replicate runs, are shown in Figure 5. Observe that the red distributions (dockside profits) are relatively unchanged between policies based on the adjustment-free assumption (Reed, bottom row) and those in which adjustment costs are properly accounted for (adjustment penalty, top row), while the blue distributions (costs of adjustment) are much larger in the bottom row than in the top (that is, when these costs have not been accounted for versus when they have). The cost of assuming policy adjustment costs are present when in fact they are absent is given by subtracting the contribution of term $[1]$ to the first infinite sum from the contribution of term $[3]$ to the second infinite sum. (Note that the convergence properties required to do this are met because we assume a positive discount rate.) This cost is positive because the optimal controls recommended when assuming policy adjustment costs are present do not track variations in stock size as closely. Therefore, this harvesting strategy provides lower net revenue dockside.

In contrast, the cost of ignoring policy adjustment costs when in fact they are present is given by subtracting the second infinite sum from the first (contributions $([1]-[2])-([3]-[4])$ in the equations). These costs are positive because the lost revenues from not tracking stock variations as closely are smaller than the penalties of policy adjustment that are incurred if managers try to track stock variations more tightly.

who <- c("penalty_fn", "ignore_fraction", "assume_fraction", "reduction")
table1 <- arrange(error_df[who], reduction) 
names(table1) = c("penalty.fn", "ignoring", "assuming", "reduction")
table1_long <- melt(table1, id = c('penalty.fn', 'reduction'))
table1_long$reduction <- as.factor(table1_long$reduction)
table1_long <- subset(table1_long, reduction != "0.3")
ggplot(table1_long, aes(penalty.fn, value, fill = variable)) + 
  geom_bar(stat="identity", position="dodge") + 
  facet_wrap(~reduction, ncol=2) + 
  scale_x_discrete(labels = c('L1' = substitute(paste(Pi[1])),
                              'L2' = substitute(paste(Pi[2])),
                              'fixed' = substitute(paste(Pi[3])))) + 
  xlab("Penalty function")
write.csv(table1_long, "components/data/figure6.csv")
```

Figure 6 compares these different contributions, namely the $NPV$ lost by assuming policy adjustment costs when none are present ($[3]-[1]$ in the equations) and the $NPV$ lost by ignoring policy adjustment costs when they are present ($([1]-[2])-([3]-[4])$ in the equations), across the three different functional forms. Specifically, the figure shows how these values change as we vary the severity of policy adjustment costs. In each case, values are shown as percentages of the maximum expected $NPV$ in the base case where no policy adjustment costs apply or are assumed to apply, $NPV_0({\bf h_0^*})$.

We find a pronounced asymmetry when comparing the impact on the net present value of the fishery of assuming policy adjustment costs are present when they are absent with the impact of assuming they are absent when they are present. For all three types of policy adjustment cost, the impact on NPV of ignoring these costs if they are in fact present is much bigger than that incurred by fallaciously managing as if they are present when they do not occur. This asymmetry arises because there is less difference in the dockside revenue under the two management recommendations than there is in the potential policy adjustment costs themselves, as is evident for the particular parameters shown in Figure 5.

We also see that the impacts on net present value of the fishery of incorrectly assuming or ignoring policy adjustment costs are most severe when these costs are assumed to scale quadratically with the size of the policy change being implemented ($\Pi_2$).

When policy costs are present, assuming the wrong functional form may or may not be worse than ignoring the presence of policy costs altogether. Figure 7 considers several such cases. The x-axis labels indicate first which penalty is assumed by the policy driving the harvest decisions, and then which penalty is actually being applied. Thus $\Pi_1\_\Pi_2$ indicates a policy calculated under linear ($\Pi_1$) costs, when the reality involves quadratic ($\Pi_2$) costs. Though the coefficients of the $\Pi_2$ costs have been scaled such that the net effect is the same, the inferred policies differ. As a result, the realized value is only slightly more than half of the cost-free optimum, even though, based on the $c_i$ calibration, the expected reduction should only be 10% below the cost-free optimum (panel A). In this case, ignoring the costs altogether is still worse than using $\Pi_1$ costs -- the red bar is lower than the blue.

For fixed costs ($\Pi_3$), the reverse is true. Because fixed costs increase the volatility of the optimal solution while $\Pi_2$ costs decrease it, simply ignoring the cost of adjustment all together results in a higher value than either assuming costs are quadratic ($\Pi_2$) when they are fixed ($\Pi_3$), or assuming they are fixed when they are quadratic. Scenarios in which true costs are quadratic (regardless of what cost structure is assumed by the policy) do significantly worse than those that are not (i.e. all cases that assume quadratic but actually have linear or fixed costs).

These patterns hold across the different size reductions shown (from 10% to 25%, panels a-d respectively), though net present value becomes negative under the quadratic form for the larger penalties as costs paid for adjustments exceed profits derived from harvest. (Though optimal solutions will always avoid negative net present values, these scenarios in which policies are applied under conditions other than assumed in the optimality calculation have no such guarantee.) For each scenario, results are averaged over 100 replicates.

who <- c("penalty_fn", "ignore_fraction", "mismatched_fraction", "reduction")
table2 <- arrange(mismatches_df[who], reduction) 
names(table2) = c("penalty.fn", "ignoring", "mismatched", "reduction")
table2_long <- melt(table2, id = c('penalty.fn', 'reduction'))
table2_long$reduction <- as.factor(table2_long$reduction)

ggplot(table2_long, aes(penalty.fn, value, fill = variable)) + 
  geom_bar(stat="identity", position="dodge") + 
  facet_wrap(~reduction, scales="free_y") +
  scale_x_discrete(labels = c('L1_L2' = substitute(paste(Pi[1],"_", Pi[2])), 
                              'L2_L1' =  substitute(paste(Pi[2],"_", Pi[1])),
                              'L1_fixed' = substitute(paste(Pi[1],"_", Pi[3])), 
                              'fixed_L1' =  substitute(paste(Pi[3],"_", Pi[1])),
                              'fixed_L2' = substitute(paste(Pi[3],"_", Pi[2])), 
                              'L2_fixed' =  substitute(paste(Pi[2],"_", Pi[3])))) + 
  xlab("Penalty function")
write.csv(table2_long, "components/data/figure7.csv")
```

Discussion
==========

Policy-makers managing ecological systems that vary in space and time must evaluate how much of that variation to reflect in management recommendations. Fine-tuning a policy to respond to frequent variations in ecological dynamics may incur increased transaction costs associated with constantly revisiting past policy decisions. A more pragmatic approach would be one that balances benefits from responding to frequent variations in ecological conditions with the increased transaction costs involved. As a first step towards exploring these ideas, we revisited a classic problem from bioeconomics concerning the optimal management of a fish stock subject to stochastically varying recruitment [@Reed1979; @Clark2010]. We examined how optimal policy recommendations, here annual catch quotas, changed when accounting for costs associated with policy adjustment and what the implications of following these policy recommendations would be for the exploited population. We also compared how the value of the fishery is affected by managers either under- or over-estimating the importance of policy adjustment costs of this type.

Estimating policy adjustment costs and how they respond to the size of proposed policy changes would be empirically challenging. Recognizing this fact, in our analyses, we compared different plausible forms that these costs might take. We compared two cost structures where we assumed the magnitude of policy adjustment costs increased with the magnitude of the change in policy being proposed with an alternative formulation in which we assumed there was a fixed cost associated with making any change to current policy. While each formulation clearly provides only a very phenomenological representation of the policy setting environment, we believe that each enables us to explore meaningful differences in how policy adjustment costs might operate. That being said, in the real-world we might expect the different types of adjustment cost to operate in combination. Interestingly, there are analogues between how we have represented different types of policy adjustment costs in our models and regularization techniques that are sometimes used in model fitting to address concerns about over-specifying models (e.g. ridge or lasso regression or total variation denoising approaches [@Rudin1992]).

The biggest differences between the representations of policy adjustment cost that we consider are between those that have a smoothing effect on annual quotas and those that do not. The few past studies that incorporate policy adjustment costs in models of fisheries management [@Ludwig1980; @Feichtinger1994; @Wirl1999] assumed costs of policy adjustment increased as a quadratic function of the magnitude of the policy change being proposed (our $\Pi_2$ formulation). In effect, this means that small changes to annual quotas incur little extra cost, but large changes to annual quotas become disproportionately expensive to make. Including costs of this form smoothes inter-annual variation in the catch quotas that would be recommended. Also, the catches and remaining stock sizes corresponding to optimal management become more autocorrelated in time, because it takes the fishery longer to harvest down peaks in abundance that follow large recruitment pulses. The effects of smoothing here are similar to those predicted when assuming the cost per unit effort is increasing in the amount of effort expended (e.g., @Lewis1981, @McGough2009; see Supporting Information for a summary of the relevant results) as opposed to associating extra costs with changes to policy per se. Smoothing effects of this type are what one might have expected when including policy adjustment costs.

Standing in sharp contrast to these smoothing predictions is our finding that including policy adjustment costs can actually increase the variability of quotas through time if there is a fixed cost associated with making any changes to current policy (e.g. costs of running relevant stakeholder meetings and public consultations on proposed policy changes, $\Pi_3$). With this formulation, as stock sizes vary in response to recruitment, the fishery manager must balance the cost of brokering a policy change with the cost in forgone fisheries revenue from not responding to favorable recruitment pulses. The optimal policy involves ignoring small variations in recruitment, but then assigns a larger quota when particularly strong recruitment years arise than would have been the case in the absence of policy adjustment costs. The other functional form we consider, in which the costs of policy adjustment scale linearly with the size of the adjustment ($\Pi_1$), has less obvious effects on optimal policies. But interestingly, it is the formulation that most frequently produces stretches of strictly positive but unchanging quotas of the type most commonly encountered in real-world applications.

That the different representations of policy adjustment costs result in such different dynamics suggests that researchers constructing fisheries economics models should proceed cautiously when choosing how to represent these costs. However, in our own experience, we have found that while modeling studies sometimes mention policy adjustment costs when motivating model assumptions (references), they rarely provide much justification for the choice of functional form used or test the sensitivity of any conclusions drawn to alternative specifications. Indeed, we were surprised to find such clear differences between the functional forms, because we anticipated that the limiting case of increasing policy adjustment costs within each functional form should be the same, namely a constant annual quota that does not change through time (open-loop control).

We also compared the efficiency costs that would result from failing to account for policy adjustment cost if they are present with those involved in assuming them when in fact they are absent. The results of this comparison were not sensitive to the particular form of policy adjustment costs assumed. Instead, we always found the efficiency costs of ignoring policy adjustment costs when they were present to be much larger than the efficiency costs of assuming policy adjustment costs applied when they were in fact absent. Policy adjustment costs affect the overall value of the fishery in two ways here. First, there is the direct cost associated with each quota change. Second there is a cost in foregone revenue from missed catches when not following what would be the optimal policy if quotas were free to track recruitment variability. Our finding that it is more costly to ignore policy adjustment costs when they are present arises because the first, more direct, cost contribution here is larger than the second.

Assuming an adjustment cost exists when in fact it is absent is not the same as using the wrong functional form when it is present. Here, our results showed that choice of the functional form makes a significant difference. We show it can be better to ignore policy costs than to derive a policy based on assuming the wrong functional form. This result further underscores the importance of modelers using caution when seeking to account for these costs. Assuming an arbitrary form in order to capture the influence of adjustment costs is thus unlikely to be instructive.

As with any modeling analysis, our formulation makes many assumptions. For example, we assume that the fishery in question is being optimally managed by a policy-maker who acts as the 'sole owner' of the stock. This approach is different from models that assume fisheries are not well-managed, e.g. by assuming open access conditions (references [^1]), or ones that derive policy recommendations endogenously by modeling interactions between different stakeholders [@Kaitala1993; @Laukkanen2003]. As such, we anticipate that our approach will be more relevant to some fisheries, particularly domestic fisheries in developed countries that are subject to relatively strong regulatory regimes, than to others (e.g., artisanal fisheries that are subject to weaker regulation). To focus on the effects of policy adjustment costs, we focused narrowly on the basic model specification of @Reed1979 and examined how the predictions of this classic problem were changed by introducing policy adjustment costs of different forms. However, there have been many elaborations on Reed's basic approach that increase the realism of the optimization models involved by relaxing other assumptions (see for example @Sethi2005; @Singh2006; @McGough2009). For example, in following Reed's approach closely, another important assumption we made is that the fishery manager observes the current stock size correctly when setting the annual quota (see @Clark1986, @Williams2001 for alternative specifications). However, predictions of current cohort sizes from stock assessments are subject to uncertainty [@Ralston2011]. Were uncertainty regarding reported recruitment pulses also included, an optimizing manager might moderate their policy responses to initial reports of particularly good recruitment years yet further.

[^1]: NOTE Would be good to give an open access model in a fluctuating environment here. From Andersen and Sutinen's review (Mar Res Econ 1, 117-136) it looks like one basal citation is Andersen 1981, The exploitation of fish resources under stock uncertainty. But it is a 'staff paper'. Our library doesn't hold it. Has anyone else got it?

One obvious research avenue suggested by our models is that of empirically estimating costs of policy adjustment. A direct estimation approach could quantify some sources of policy adjustment costs, e.g. costs to processing plants that arise from having more variable catches. However, other more intangible sources of policy adjustment costs, e.g. preferences of policy-makers or different stakeholders for less variable quotas, might be missed. An alternative, more holistic, approach would be to apply revealed preference methods to fisheries management agencies themselves, in the tradition of [@McFadden1975; @McFadden1976]. Such an analysis would involve comparing quotas that were set relative to stock sizes as they were estimated at the time each management decision was taken to try to infer what objective managers were maximizing.

A worthwhile modeling extension suggested by our results would be to examine the implications of policy adjustment costs for risks of stock collapse. Examining risks of stock collapse is not possible with the Reed model-formulation that we followed here but would be worthwhile in light of our finding that the variance and autocorrelation in stock sizes through time can be affected when accounting for costs of policy adjustment. Both are properties known in other modeling settings to be associated with changes to the risk of extinction or of transitions between alternative stable states [@Scheffer2009; @Boettiger2013].

When facing highly variable ecological systems, how often should natural resource managers respond? A highly interventionist strategy would track ecosystem variation very closely. Alternatively, a manager might choose only to take action or change policy only when conditions look very different to those previously experienced. We took a modeling approach to begin to explore these ideas. Specifically, we focused on a well-known problem from fisheries management and examined how optimal management recommendations changed when we accounted for costs associated with frequently changing management decisions. While we focused on a fisheries context, our findings would be relevant to many other settings where natural resource managers revisit management decisions through time in light of ecological variability, including game management, management of in-stream flow rates, fire management, habitat restoration, and managing for endangered species. In some ways, the analyses that we present can also be thought of as providing a temporal counter-part to discussions about the optimal spatial scale over which ecosystem management should be conducted.

Acknowledgements
================

This project arose out of a NIMBioS working group, "Pretty Darn Good Control". The authors acknowledge helpful discussions and input from working group co-organizers Megan Donahue and Carl Toews, and participants Marie-Josee Fortin, Dan Ryan, Frank Doyle, Claire Paris, Iadine Chades and Mandy Karnauskas.
We also acknowledge the following support: NSF Grant DBI-1306697 (CB), ...

Supplementary material
======================

Economic penalties promised for the supplement are still to come

```r
fig1s_df <- as.data.frame(subset(dt, replicate %in% paste0('rep_', 11:20)))
fig1s_df <- fig1s_df[c("time", "fishstock", "alternate", "harvest", "harvest_alt", "penalty_fn", "replicate")]
fig1s_df <- melt(fig1s_df, id = c("time", "penalty_fn", "replicate"))
fig1s_df <- data.frame(fig1s_df, baseline = fig1s_df$variable)
variable_map <- c(fishstock = "fish_stock", alternate = "fish_stock", harvest = "harvest", harvest_alt = "harvest")
baseline_map <- c(fishstock = "penalty", alternate = "no_penalty", harvest = "penalty", harvest_alt = "no_penalty")
fig1s_df$variable <- variable_map[fig1s_df$variable]
fig1s_df$baseline <- baseline_map[fig1s_df$baseline]



fig1s_df <- subset(fig1s_df, variable=="harvest")
labeller <- function(variable,value){
if (variable=='penalty_fn') {
    return(relabel[value])
  } else {
    return(value)
  }
}

ggplot(fig1s_df, aes(time, value, col=baseline)) +
  geom_line(lwd=1) +
  facet_grid(replicate ~ penalty_fn, labeller=labeller) + 
  labs(x="time", y="stock size", title = "Example Stock & Harvest Dynamics")  
write.csv(fig1s_df, "components/data/figure_s1.csv")
labeller <- function(variable,value){
    return(relabel[paste(value)])
}

ggplot(fishstock_fig3, aes(time, value, col=baseline)) +
  geom_line(lwd=1) +
  facet_grid(penalty_fn~., labeller = labeller) + 
  labs(x="time", y="stock size", title = "Example Stock & Harvest Dynamics")  

write.csv(fishstock_fig3, "components/data/figure_s2.csv")
caption = "Table 1: Column 'ignoring' shows the results of ignoring a adjustment penalty when it is present, Column 'assuming' shows the results of assuming an adjustment cost when it is absent. Values (ignoring, assuming) are given as the expected net present value (ENPV) as a fraction of that when no penalty is present, ENPV0.  Percent reduction refers to penalty coefficient being calibrated to achieve the given percent reduction relative to ENPV0 when the penalty is optimally accounted for.  Comparing the second and third columns shows that ignoring a real cost is a significantly graver error, particularly in the case of quadratic costs."
pandoc.table(table1, caption=caption)
hist_dat_alt <- melt(cbind(profits, costs = costs$V1, 
                       reed_profits = reed_profits$profits, reed_costs = reed_costs$V1),
                 id = c("penalty_fn", "replicate"))

ggplot(hist_dat_alt) + 
  geom_density(aes(value, fill=variable, color=variable), alpha=0.8)+
  facet_grid(penalty_fn~., labeller = labeller)
```

References
==========


