Description

Identify the dynamic optimum using backward iteration (dynamic programming).

Usage

optim_policy(SDP_Mat, x_grid, h_grid, OptTime, xT, profit, delta, reward = 0,
  penalty, effort_penalty = function(x, h) 0)
Arguments

SDP_Mat         the stochastic transition matrix at each h value
x_grid          the discrete values allowed for the population size, x
h_grid          the discrete values of harvest levels to optimize over
OptTime         the stopping time
xT              the boundary-condition population size at OptTime
delta           the exponential discounting rate
reward          the profit for finishing with >= xT fish at the end (i.e., enforces the boundary condition)
penalty         the kind of penalty applied: currently L1, L2, or asymmetric
c               the cost/profit function, a function of the harvest level
Value

A list containing the matrices D and V. D is an x_grid-by-OptTime matrix whose columns give, for each time step, the index into h_grid of the optimal harvest h at each population size x. V is an x_grid-by-x_grid matrix used to store the value function at each grid point at each point in time; the returned V is the value matrix at the first time step (the last one computed by the backward iteration).
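The backward-iteration idea behind this function can be sketched in a few lines. The following is an illustrative Python reimplementation, not the package's code: the tiny grids, the `profit` function, and the identity transition matrices are all hypothetical stand-ins, and the `reward`, `penalty`, and `effort_penalty` terms are omitted for brevity.

```python
import numpy as np

# Hypothetical small problem, for illustration only:
# x_grid: discrete population sizes; h_grid: discrete harvest levels
# SDP_Mat[i]: transition matrix over x_grid under harvest h_grid[i]
x_grid = np.array([0.0, 1.0, 2.0, 3.0])
h_grid = np.array([0.0, 1.0])
OptTime, delta = 5, 0.05

def profit(x, h):
    # Hypothetical payoff: you can only harvest what exists
    return np.minimum(x, h)

# Hypothetical transitions: population is unchanged (identity matrices)
SDP_Mat = [np.eye(len(x_grid)) for _ in h_grid]

V = np.zeros(len(x_grid))                 # terminal value (no end reward here)
D = np.zeros((len(x_grid), OptTime), dtype=int)
for t in range(OptTime - 1, -1, -1):      # iterate backward in time
    Q = np.empty((len(x_grid), len(h_grid)))
    for i, h in enumerate(h_grid):
        # immediate profit + discounted expected continuation value
        Q[:, i] = profit(x_grid, h) + (1 / (1 + delta)) * SDP_Mat[i] @ V
    D[:, t] = np.argmax(Q, axis=1)        # index of the optimal h at each x
    V = np.max(Q, axis=1)                 # value function at time t
```

After the loop, `D[:, 0]` holds the optimal harvest-index policy for the first time step, matching the role of the D matrix described above.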