R/optimize_multiple_testing_procedure.R

Defines functions optimize_multiple_testing_procedure

Documented in optimize_multiple_testing_procedure

#' Optimizes multiple testing procedure of two-stage, adaptive enrichment design
#'   for a precomputed (e.g., using optimize_design) end-of-stage-1 decision rule.
#'
#' @author Michael Rosenblum, Ethan Fang, Han Liu
#'
#' @param subpopulation.1.proportion Proportion of overall population in subpopulation 1. Must be between 0 and 1.
#' @param total.alpha Familywise Type I error rate (1-sided)
#' @param data.generating.distributions Matrix encoding data generating distributions (defined in terms of treatment effect pairs and outcome variances) used to define power constraints and  objective function; each row defines the pair (Delta_1,Delta_2) of subpopulation 1 and 2 average treatment effects, followed by outcome variances for the four combinations of subpouplation (1 and 2) by study arm (0 and 1).
#' @param stage.1.sample.sizes Vector with 2 entries representing stage 1 sample sizes for subpopulations 1 and 2, respectively
#' @param stage.2.sample.sizes.per.enrollment.choice Matrix with number.choices.end.of.stage.1 rows and 2 columns, where the (i,j) entry represents the stage 2 sample size under enrollment choice i for subpopulation j.
#' @param objective.function.weights Vector with length equal to number of rows of population.parameters, representing weights used to define the objective function
#' @param power.constraints Matrix with same number of rows as population.parameters (each representing a data generating distribution) and three columns corresponding to the required power to reject (at least) H_01, H_02, H_0C, respectively.
#' @param type.of.LP.solver "matlab", "cplex", "GLPK", or "gurobi" The linear program solve that you want to use; assumes that you have installed this already and that path is set
#' @param discretization.parameter vector with 3 elements representing initial discretization of decision region, rejection regions, and grid representing Type I error constraints
#' @param number.cores the number of cores available for parallelization using the parallel R package
#' @param ncp.list list of pairs of real numbers representing the non-centrality parameters to be used in the Type I error constraints; if list is empty, then default list is used.
#' @param list.of.rectangles.dec list of rectangles representing decision region partition, encoded as a list with each element of the list having fields $lower_boundaries (pair of real numbers representing coordinates of lower left corner of rectangle), $upper_boundaries (pair of real numbers representing upper right corner of rectangle), $allowed_decisions (subset of stage.2.sample.sizes.per.enrollment.choice representing which decisions allowed if first stage z-statistics are in corresponding rectangle; default is entire list stage.2.sample.sizes.per.enrollment.choice), $preset_decision (indicator of whether the decision probabilities are hard-coded by the user; default is 0), $d_probs (empty unless $preset_decision==1, in which case it is a vector representing the probabilities of each decision); if list.or.rectangles.dec is empty, then a default partition is used based on discretization.parameter.
#' @param LP.iteration positive integer used in file name to store output; can be used to avoid overwriting previous computations
#' @param prior.covariance.matrix 2x2 positive semidefinite matrix representing the covariance corresponding to each component of the mixture of multivariate normals prior distribution (used only in defining the objective function); the default is the matrix of all 0's, corresponding to the prior being a mixture of point masses
#' @param round.each.multiple.testing.procedure.rectangle.to.integer TRUE/FALSE indicator of whether to round the multiple testing proce ure to integer values and save; only can be done if the procedure is passed a decision rule (encoded in list.of.rectangles.dec) that has all probabilities set as would typically be the case in the final refinement of the original problem
#' @param plots.to.round.simply subset of plots (one per allowed enrollment decision) for which rounding is simply based on thresholding using rounding.threshold.H01, rounding.threshold.H02, rounding.threshold.H0C
#' @param rounding.threshold.H01 threshold above which fractional solution corresponding to probability of rejecting H01 is rounded to 1
#' @param rounding.threshold.H02 threshold above which fractional solution corresponding to probability of rejecting H02 is rounded to 1
#' @param rounding.threshold.H0C threshold above which fractional solution corresponding to probability of rejecting H0C is rounded to 1
#' @param power.constraint.tolerance amount by which power corresponding to rounded solution is allowed to be less than power.constraints; typically set to be small, e.g., 0.01
#' @param LP.solver.path path (i.e., directory) where LP.solver is installed; e.g., if type.of.LP.solver=="cplex" then LP.solver.path is directory where cplex is installed
#' @param cleanup.temporary.files TRUE/FALSE indicates whether temporary files generated during problem solving process should be deleted or not after termination; set to FALSE for debugging purposes only.
#' @return Nothing is returned; instead the optimized design is saved as "optimized_design<k>.rdata", where <k> is the user-defined iteration number (LP.iteration).
#' @section Output:
#' The software computes an optimized design and saves it as "optimized_design<k>.rdata", where <k> is the user-defined iteration number (LP.iteration). E.g., if one sets LP.iteration=1, then the optimized design is saved as "optimized_design1.rdata". That file can be opened in R and contains the following 6 items:
#' input.parameters (the inputs passed to the optimized_design function)
#' list.of.rectangles.dec (the decision rectangle partition of R^2)
#' list.of.rectangles.mtp (the multiple testing procedure partition of R^2)
#' ncp.active.FWER.constraints (the active familywise Type I error constraints in the optimized design, obtained using the dual solution to the linear program)
#' ncp.list (the complete list of familywise Type I error constraints input to the linear program solver)
#' sln (the solution to the linear program; sln$val is the expected sample size; sln$status, if either 1 or 5, indicates that a feasible solution was found and other wise the problem was infeasible or no solution was found; sln$z is the actual solution as a vector)
#' @examples
#' #For demonstration purposes, the code below implements the final
#' # iteration, as described in Section 5.2 of the paper, for Example
#' # 3.2 of the paper.
#' #First set all problem parameters based on Example 3.2, and using
#' #  explicit choices of sample sizes (where n=200, sigma=1,
#' # Delta^min=4*qnorm(0.95)/sqrt(200)=0.465),
#' # as follows:
#' subpopulation.1.proportion=0.5;
#' total.alpha=0.049;
#' delta_min=4*qnorm(0.95)/sqrt(200);
#' data.generating.distributions=matrix(
#'   data=c(0,0,1,1,1,1,
#'          0,delta_min,1,1,1,1,
#'          delta_min,0,1,1,1,1,
#'          delta_min,delta_min,1,1,1,1),
#'   nrow=4,
#'   ncol=6,
#'   byrow=TRUE,
#'   dimnames=list(c(),c("Delta1","Delta2","Variance10",
#'                       "Variance11","Variance20","Variance21")));
#' stage.1.sample.sizes=c(50,50);
#' stage.2.sample.sizes.per.enrollment.choice= matrix(
#'   c(50,50,
#'     0,0,
#'     150,0,
#'     0,150),
#'   nrow=4,ncol=2,
#'   byrow=TRUE,
#'   dimnames=list(c(), c("Subpopulation1Stage2SampleSize",
#'                        "Subpopulation2Stage2SampleSize")));
#' objective.function.weights=0.25*c(1,1,1,1);
#' prior.covariance.matrix=diag(2);
#' power.constraints=matrix(c(0,0,0,
#' 			   0,0.83,0,
#' 			   0.83,0,0,
#' 			   0,0,0.83),nrow=4,ncol=3,byrow=TRUE,dimnames=list(c(),
#' 			   c("PowerH01","PowerH02","PowerH0C")));
#' type.of.LP.solver="cplex";
#' discretization.parameter=c(1,0.25,10);
#' number.cores=min(parallel::detectCores(), 30);
#' # Load list of Type I Error Constraints (encoded as ncp.list in our
#' # software and denoted as G in the paper) and the partition of decision
#' # rectangles (encoded as list.of.rectangles.dec in our software and
#' # denoted as A_1 in the paper).
#' load(system.file("examples", "example3.2final.iteration.inputs.rdata",
#' package = "AdaptiveDesignOptimizerSparseLP"));
#' # Run final iteration solving sparse linear program with above inputs
#' \donttest{
#' optimize_multiple_testing_procedure(
#'   subpopulation.1.proportion,
#'   total.alpha = 0.049,
#'   data.generating.distributions,
#'   stage.1.sample.sizes,
#'   stage.2.sample.sizes.per.enrollment.choice,
#'   objective.function.weights,
#'   power.constraints,
#'   type.of.LP.solver,
#'   discretization.parameter,
#'   number.cores,
#'   ncp.list,
#'   list.of.rectangles.dec,
#'   LP.iteration = 5,
#'   prior.covariance.matrix,
#'   round.each.multiple.testing.procedure.rectangle.to.integer = TRUE,
#'   plots.to.round.simply = c(1, 2),
#'   power.constraint.tolerance = 0.01,
#'   LP.solver.path = c()
#' )
#' }
#' @export
#' @importFrom grDevices postscript dev.off
#' @importFrom graphics par plot axis rect legend
optimize_multiple_testing_procedure <- function(
  subpopulation.1.proportion=0.5,
  total.alpha=0.05-(1e-4),
  data.generating.distributions,
  stage.1.sample.sizes,
  stage.2.sample.sizes.per.enrollment.choice,
  objective.function.weights,
  power.constraints,
  type.of.LP.solver=c("cplex", "matlab", "GLPK", "gurobi"),
  discretization.parameter=c(1,1,10),
  number.cores=30,
  ncp.list=c(),
  list.of.rectangles.dec=c(),
  LP.iteration=1,
  prior.covariance.matrix=diag(2)*0,
  round.each.multiple.testing.procedure.rectangle.to.integer=FALSE,
  plots.to.round.simply = c(),
  rounding.threshold.H01 = 1-1e-10,
  rounding.threshold.H02 = 1-1e-10,
  rounding.threshold.H0C = 1-1e-10,
  power.constraint.tolerance=0,
  LP.solver.path=c(),
  cleanup.temporary.files=TRUE
){
  if (type.of.LP.solver != "test_version") {
    type.of.LP.solver = match.arg(type.of.LP.solver)
  }
  max_error_prob <- 0 # track approximation errors in problem construction; initialize to 0 here
  covariance_Z_1_Z_2 <-  0 # covariance_due_to overlap of subpopulations (generally we assume no overlap)
  p1 <- subpopulation.1.proportion;
  p2 <- 1-p1 # proportion of population in subpopulation 2 (assumes non-overlapping subpopulations that partition entire population)
  rho1 <- sqrt(p1)
  rho2 <- sqrt(p2)

  actions <- c(1,2,3,4,5,6,7)
  number_actions <- length(actions) # correspond to rejecting nothing, H01, H02, H0C, H01 and H0C, H02 and H0C, all

  ## ncp (shorthand for non-centrality parameter) is a 2 component vector equal to c(sqrt(2*p_1*sum(stage.1.sample.sizes))Delta_1/(2sigma_1) , sqrt(2*p_2*sum(stage.1.sample.sizes)Delta_2/(2sigma_2))) for sigma^2_s=(sigma^2_(s0)+sigma^2_(s1))/2.

  ## It represents c(EZ_1,EZ_2) for the fixed design that enrolls 2*p_s*sum(stage.1.sample.sizes) from each subpopulation s.

  map_from_P_to_type_I_error_indicator_over_set_of_actions <- function(ncp)
  {
    return(c(0,ncp[1]<=0,ncp[2]<=0,rho1*ncp[1]+rho2*ncp[2]<=0, ncp[1]<=0 || rho1*ncp[1]+rho2*ncp[2]<=0, ncp[2] <= 0 || rho1*ncp[1]+rho2*ncp[2]<=0, ncp[1]<=0 || ncp[2]<=0 || rho1*ncp[1]+rho2*ncp[2]<=0))
  }

  indicator_contribute_to_H01_power <- c(0,1,0,0,1,0,1)
  indicator_contribute_to_H02_power <- c(0,0,1,0,0,1,1)
  indicator_contribute_to_H0C_power <- c(0,0,0,1,1,1,1)

  # Sample Sizes per stage and decision:
  # where sample sizes are in units of sum(stage.1.sample.sizes)/2
  n_stage1_subpopulation1 <- stage.1.sample.sizes[1];
  n_stage1_subpopulation2 <- stage.1.sample.sizes[2];
  n_stage2_subpopulation1_decision <- as.vector(stage.2.sample.sizes.per.enrollment.choice[,1]) #Sample size in stage 2 under each decision rule for subpopulation 1
  n_stage2_subpopulation2_decision <- as.vector(stage.2.sample.sizes.per.enrollment.choice[,2]) #Sample size in stage 2 under each decision rule for subpopulation 1
  stopifnot(length(n_stage2_subpopulation1_decision)==length(n_stage2_subpopulation2_decision))
  number_decisions <- length(n_stage2_subpopulation1_decision)
  decisions <- (1:number_decisions)

  ## Set loss function to be sample size; can modify if desired--general format is matrix with number_decision rows and number_actions columns, and entry is loss function value at corresponding (decision,action pair). Can also generalize to make it depend on the ncp value as well but if so need to modify generalized_generate... objective function construction
  loss_function_value <- array(n_stage1_subpopulation1+n_stage1_subpopulation2+n_stage2_subpopulation1_decision+n_stage2_subpopulation2_decision,c(number_decisions,number_actions))

  # Convert data.generating.distributions to list of non-centrality parameter vectors
  prior_mean_support <- c()
  for(count in 1:dim(data.generating.distributions)[1]){
    ncp <- c(data.generating.distributions[count,1]*sqrt(p1*sum(stage.1.sample.sizes)/(data.generating.distributions[count,3]+data.generating.distributions[count,4])),
             data.generating.distributions[count,2]*sqrt(p2*sum(stage.1.sample.sizes)/(data.generating.distributions[count,5]+data.generating.distributions[count,6])))
    names(ncp) <- c("NonCentralityParameter1","NonCentralityParameter2")
    prior_mean_support <- c(prior_mean_support,list(ncp))
  }

  # Prior information used in constructing objective function
  prior_weights <- objective.function.weights
  # List of non-centrality parameters to use for power constraints:
  # WARNING: If following line is modified, need to modify the line that follows it as well, and also the line:
  # switch(power_constraint_counter,power_constraint_vector_subpopulation1<-power_constraint_vector,power_constraint_vector_subpopulation2<-power_constraint_vector,power_constraint_vector_combined_population<-power_constraint_vector)
  # in the generalized_generate... file in order to modify the format of power constraint output
  power_constraint_list <- prior_mean_support
  ## The type of power to put constraint on for each scenario in the power_constraint list:
  #power_constraint_null_hyp_contribution_list <- c(list(indicator_contribute_to_subpopulation1_power),list(indicator_contribute_to_subpopulation2_power),list(indicator_contribute_to_combined_population_power))

  # Decision Type: indicates one of 4 types of stage 2 enrollment choices:
  # 1: both subpopulations enrolled
  # 2: neither subpopulation enrolled
  # 3: just subpopulation 1 enrolled
  # 4: just subpopulation 2 enrolled
  # Each is handled differently, e.g., decisions 3 and 4 use a 3x3 covariance matrix while decision 1 uses 4x4.

  d_type <- ifelse(n_stage2_subpopulation1_decision>0 & n_stage2_subpopulation2_decision>0,1,
                   ifelse(n_stage2_subpopulation1_decision==0 & n_stage2_subpopulation2_decision==0,2,
                          ifelse(n_stage2_subpopulation1_decision>0 & n_stage2_subpopulation2_decision==0,3,
                                 ifelse(n_stage2_subpopulation1_decision==0 & n_stage2_subpopulation2_decision>0,4,5))))

  # Create covariance matrices under each decision rule
  covariance_matrix <- list()
  for(d in decisions){
    if(d_type[d]==1){
      # Corresponds to decision to enroll from both subpopulations in stage 2
      gamma1 <- sqrt(n_stage1_subpopulation1/(n_stage1_subpopulation1+n_stage2_subpopulation1_decision[d]))
      gamma2 <- sqrt(n_stage1_subpopulation2/(n_stage1_subpopulation2+n_stage2_subpopulation2_decision[d]))
      ## Statistics: Z^(1)_1,Z^(1)_2,Z^(C)_1,Z^(C)_2,
      covariance_matrix <- c(covariance_matrix,list(
        array(c(1,covariance_Z_1_Z_2,gamma1,covariance_Z_1_Z_2*gamma2,
                covariance_Z_1_Z_2,1,covariance_Z_1_Z_2*gamma1,gamma2,
                gamma1,covariance_Z_1_Z_2*gamma1,1,covariance_Z_1_Z_2,
                covariance_Z_1_Z_2*gamma2,gamma2,covariance_Z_1_Z_2,1),c(4,4))))
    }else if(d_type[d]==2){
      # Corresponds to decision to stop early
      covariance_matrix <- c(covariance_matrix,list(
        array(c(1,covariance_Z_1_Z_2,
                covariance_Z_1_Z_2,1),c(2,2))))
    }else if(d_type[d]==3){
      # Corresponds to decision to enrich subpopulation 1 only
      gamma1 <- sqrt(n_stage1_subpopulation1/(n_stage1_subpopulation1+n_stage2_subpopulation1_decision[d]))
      ## Statistics: Z^(1)_1,Z^(1)_2,Z^(C)_1,Z^(C)_2,
      covariance_matrix <- c(covariance_matrix,list(
        array(c(1,covariance_Z_1_Z_2,gamma1,
                covariance_Z_1_Z_2,1,covariance_Z_1_Z_2*gamma1,
                gamma1,covariance_Z_1_Z_2*gamma1,1),c(3,3))))
    }else if(d_type[d]==4){
      # Corresponds to decision to enrich subpopulation 2 only
      gamma2 <- sqrt(n_stage1_subpopulation2/(n_stage1_subpopulation2+n_stage2_subpopulation2_decision[d]))
      ## Statistics: Z^(1)_1,Z^(1)_2,Z^(C)_1,Z^(C)_2,
      covariance_matrix <- c(covariance_matrix,list(
        array(c(1,covariance_Z_1_Z_2,covariance_Z_1_Z_2*gamma2,
                covariance_Z_1_Z_2,1,gamma2,
                covariance_Z_1_Z_2*gamma2,gamma2,1),c(3,3))))
    }
  }

  mean_vector <- function(ncp,d){
    if(d_type[d]==1){
      return(c(
        ncp[1]*sqrt(n_stage1_subpopulation1/(2*p1*sum(stage.1.sample.sizes))),ncp[2]*sqrt(n_stage1_subpopulation2/(2*p2*sum(stage.1.sample.sizes))),
        ncp[1]*sqrt((n_stage1_subpopulation1+n_stage2_subpopulation1_decision[d])/(2*p1*sum(stage.1.sample.sizes))),ncp[2]*sqrt((n_stage1_subpopulation2+n_stage2_subpopulation2_decision[d])/(2*p2*sum(stage.1.sample.sizes)))))} else if(d_type[d]==2){
          return(c(
            ncp[1]*sqrt(n_stage1_subpopulation1/(2*p1*sum(stage.1.sample.sizes))),ncp[2]*sqrt(n_stage1_subpopulation2/(2*p1*sum(stage.1.sample.sizes)))))} else if(d_type[d]==3){
              return(c(
                ncp[1]*sqrt(n_stage1_subpopulation1/(2*p1*sum(stage.1.sample.sizes))),ncp[2]*sqrt(n_stage1_subpopulation2/(2*p2*sum(stage.1.sample.sizes))),
                ncp[1]*sqrt((n_stage1_subpopulation1+n_stage2_subpopulation1_decision[d])/(2*p1*sum(stage.1.sample.sizes)))))} else if(d_type[d]==4){
                  return(c(
                    ncp[1]*sqrt(n_stage1_subpopulation1/(2*p1*sum(stage.1.sample.sizes))),ncp[2]*sqrt(n_stage1_subpopulation2/(2*p2*sum(stage.1.sample.sizes))),
                    ncp[2]*sqrt((n_stage1_subpopulation2+n_stage2_subpopulation2_decision[d])/(2*p2*sum(stage.1.sample.sizes)))))
                }
  }

  # L <- function(ncp){return(
  #    -(as.numeric(ncp[1]>delta_1_min)*indicator_contribute_to_subpopulation1_power
  #    +as.numeric(ncp[2]>delta_2_min)*indicator_contribute_to_subpopulation2_power
  #    +as.numeric(rho1*ncp[1]+rho2*ncp[2]>rho1*delta_1_min+rho2*delta_2_min)*indicator_contribute_to_combined_population_power))}

  ## Handles case of prior distribution used in objective function
  modified_joint_distribution <- function(prior_component_index,decision){
    modified_mean_vector <- mean_vector(ncp=prior_mean_support[[prior_component_index]],d=decision)
    modified_covariance_matrix <-  (array(c(mean_vector(ncp=c(1,0),d=decision),mean_vector(ncp=c(0,1),d=decision)),c(length(mean_vector(ncp=c(1,0),d=decision)),2))  %*% prior.covariance.matrix %*% t(array(c(mean_vector(ncp=c(1,0),d=decision),mean_vector(ncp=c(0,1),d=decision)),c(length(mean_vector(ncp=c(1,0),d=decision)),2)))) +  covariance_matrix[[decision]]
    return(list(modified_mean_vector,modified_covariance_matrix))
  }

  number_reference_rectangles <- number_decisions

  ## Below set the FWER constraints and the discretization of the decision and rejection regions

  #set widths of large square outside of which multiple testing procedure rejects no null hypotheses (corresponds to w in text)
  w1 <- 6
  w2 <- 6
  # dimension of small squares used in discretization
  tau <- discretization.parameter[1]
  tau_mtp <- discretization.parameter[2]
  # Settings for integration region and precision in computing lower bound to original problem using dual solution to discretized problem
  max_eval_iters <- 100000
  w1_unconst <- 5
  w2_unconst <- 5

  if(is.null(ncp.list)){
    # list of pairs of non-centrality parameters in G_{tau,w}
    ncp.list <- list()
    for(z in seq(-9,9,length=max(1,floor(18*discretization.parameter[3])))) {ncp.list <- c(ncp.list,list(c(0,z)))}
    for(z in seq(-9,9,length=max(1,floor(18*discretization.parameter[3])))) {ncp.list <- c(ncp.list,list(c(z,0)))}
    for(z in seq(-9,9,length=max(1,floor(18*discretization.parameter[3])))) {ncp.list <- c(ncp.list,list(c(z,-(rho1/rho2)*z)))}
    ncp.list <- c(ncp.list,list(c(0,0)))
    ncp.list <- unique(ncp.list)
  }

  constraints_per_A1_file <- ceiling(length(ncp.list)/200) # Set number of familywise Type I error constraints to encode per file written

  # construct list of rectangles in set R
  ## List of rectangles defining decision boundaries
  ## Each rectangle is encoded by c(lower left (x,y) coordinates, upper right (x,y) coordinates)

  if(!is.null(list.of.rectangles.dec)){
    number_preset_decision_rectangles <- 0
    for(r1_counter in 1:(length(list.of.rectangles.dec))){if(list.of.rectangles.dec[[r1_counter]]$preset_decision>0){number_preset_decision_rectangles <- number_preset_decision_rectangles+1}}
  } else{
    list.of.rectangles.dec <- list()

    number_preset_decision_rectangles <- 0

    for(x in c(seq(-w1,-3-tau,by=tau),seq(3,w1-tau,by=tau)))
    {
      for(y in seq(-w2,w2-tau,by=tau))
      {
        list.of.rectangles.dec<- c(list.of.rectangles.dec,list(list(lower_boundaries=c(x,y),upper_boundaries=c(x+tau,y+tau),allowed_decisions=decisions,preset_decision=0)))
      }
    }

    for(x in seq(-3,3-tau,by=tau))
    {
      for(y in c(seq(-w1,-3-tau,by=tau),seq(3,w1-tau,by=tau)))
      {
        list.of.rectangles.dec<- c(list.of.rectangles.dec,list(list(lower_boundaries=c(x,y),upper_boundaries=c(x+tau,y+tau),allowed_decisions=decisions,preset_decision=0)))
      }
    }

    for(x in seq(-3,3-tau/2,by=tau/2))
    {
      for(y in seq(-3,3-tau/2,by=tau/2))
      {
        list.of.rectangles.dec<- c(list.of.rectangles.dec,list(list(lower_boundaries=c(x,y),upper_boundaries=c(x+tau/2,y+tau/2),allowed_decisions=decisions,preset_decision=0)))
      }
    }

    for(d in decisions){list.of.rectangles.dec <- c(list.of.rectangles.dec,list(list(lower_boundaries=c(0,0),upper_boundaries=c(0,0),allowed_decisions=d,preset_decision=0)))}# these are reference rectangles

    ## Set upper neighbors
    counter_for_r <- 1
    for(counter_for_r in 1:(length(list.of.rectangles.dec)-number_reference_rectangles)){
      r <- list.of.rectangles.dec[[counter_for_r]]
      count_value <- 1
      #while(count_value <= length(list.of.rectangles.dec) && (!(r$upper_boundaries[2]== list.of.rectangles.dec[[count_value]]$lower_boundaries[2] && r$lower_boundaries[1]== list.of.rectangles.dec[[count_value]]$lower_boundaries[1] && r$upper_boundaries[1]== list.of.rectangles.dec[[count_value]]$upper_boundaries[1]))){count_value <- count_value +1}
      while(count_value <= length(list.of.rectangles.dec) && (!(abs(r$upper_boundaries[2] - list.of.rectangles.dec[[count_value]]$lower_boundaries[2])<10^(-10) && abs(r$lower_boundaries[1] - list.of.rectangles.dec[[count_value]]$lower_boundaries[1])<10^(-10) && abs(r$upper_boundaries[1] - list.of.rectangles.dec[[count_value]]$upper_boundaries[1])<10^(-10)))){count_value <- count_value +1}
      if(count_value <= length(list.of.rectangles.dec)){list.of.rectangles.dec[[counter_for_r]]$upper_neighbor <- count_value}
    }
    #save(list.of.rectangles.dec,file=paste("list.of.rectangles.dec",LP.iteration,".rdata",sep=""))
    #save(ncp.list,file=paste("ncp.list",LP.iteration,".rdata",sep=""))
  }

  ## Set multiple testing procedure rectangle partition

  stage_2_rectangle_offset_value <- 0
  list.of.rectangles.mtp1 <- list()
  for(x in seq(-w1,w1,by=tau_mtp))
  {
    #Skip lower left quadrant
    for(y in seq(ifelse(x<0,0,-w2),w2,by=tau_mtp))
    {
      list.of.rectangles.mtp1<- c(list.of.rectangles.mtp1,list(list(lower_boundaries=c(x,y),upper_boundaries=c(x+tau_mtp,y+tau_mtp),allowed_actions=actions,stage_2_rectangle_offset=stage_2_rectangle_offset_value)))
      stage_2_rectangle_offset_value <- stage_2_rectangle_offset_value + length(actions)
    }
  }
  ## Only one large rectangle for lower left quadrant
  list.of.rectangles.mtp1<- c(list.of.rectangles.mtp1,list(list(lower_boundaries=c(-Inf,-Inf),upper_boundaries=c(0,0),allowed_actions=actions,stage_2_rectangle_offset=stage_2_rectangle_offset_value)))
  stage_2_rectangle_offset_value <- stage_2_rectangle_offset_value + length(actions)

  number_rectangles_by_actions_for_decision_type1  <- stage_2_rectangle_offset_value

  ## Set right neighbors
  counter_for_rprime <- 1
  for(rprime in list.of.rectangles.mtp1)
  {
    count_value <- 1
    #while(count_value <= length(list.of.rectangles.mtp1) && (!(rprime$upper_boundaries[1]== list.of.rectangles.mtp1[[count_value]]$lower_boundaries[1] && rprime$lower_boundaries[2]== list.of.rectangles.mtp1[[count_value]]$lower_boundaries[2] && rprime$upper_boundaries[2]== list.of.rectangles.mtp1[[count_value]]$upper_boundaries[2]))){count_value <- count_value +1}
    while(count_value <= length(list.of.rectangles.mtp1) && (!(abs(rprime$upper_boundaries[1] - list.of.rectangles.mtp1[[count_value]]$lower_boundaries[1])<10^(-10) && abs(rprime$lower_boundaries[2] - list.of.rectangles.mtp1[[count_value]]$lower_boundaries[2])<10^(-10) && abs(rprime$upper_boundaries[2] - list.of.rectangles.mtp1[[count_value]]$upper_boundaries[2])<10^(-10)))){count_value <- count_value +1}
    if(count_value <= length(list.of.rectangles.mtp1)){list.of.rectangles.mtp1[[counter_for_rprime]]$right_neighbor <- count_value}
    counter_for_rprime <- counter_for_rprime +1
  }
  ## Set upper neighbors
  counter_for_rprime <- 1
  for(rprime in list.of.rectangles.mtp1)
  {
    count_value <- 1
    #while(count_value <= length(list.of.rectangles.mtp1) && (!(rprime$upper_boundaries[2]== list.of.rectangles.mtp1[[count_value]]$lower_boundaries[2] && rprime$lower_boundaries[1]== list.of.rectangles.mtp1[[count_value]]$lower_boundaries[1] && rprime$upper_boundaries[1]== list.of.rectangles.mtp1[[count_value]]$upper_boundaries[1]))){count_value <- count_value +1}
    while(count_value <= length(list.of.rectangles.mtp1) && (!(abs(rprime$upper_boundaries[2] - list.of.rectangles.mtp1[[count_value]]$lower_boundaries[2])<10^(-10) && abs(rprime$lower_boundaries[1]- list.of.rectangles.mtp1[[count_value]]$lower_boundaries[1])<10^(-10) && abs(rprime$upper_boundaries[1] - list.of.rectangles.mtp1[[count_value]]$upper_boundaries[1])<10^(-10)))){count_value <- count_value +1}
    if(count_value <= length(list.of.rectangles.mtp1)){list.of.rectangles.mtp1[[counter_for_rprime]]$upper_neighbor <- count_value}
    counter_for_rprime <- counter_for_rprime +1
  }

  ##Set stage 2 rectangle sets for each d_type:
  #Set stage 2 rectangles sets to be the same for d_type 1, 3, and 4. (Allow different for d_type 2)
  list.of.rectangles.mtp2 <- list.of.rectangles.mtp3 <- list.of.rectangles.mtp4 <- list.of.rectangles.mtp1;
  number_rectangles_by_actions_for_decision_type2 <- number_rectangles_by_actions_for_decision_type3 <- number_rectangles_by_actions_for_decision_type4 <- number_rectangles_by_actions_for_decision_type1
  list.of.rectangles.mtp <- list()
  number_rectangles_by_actions_for_decision <- c()
  for(d in decisions){
    list.of.rectangles.mtp <- c(list.of.rectangles.mtp,list(switch(d_type[d],list.of.rectangles.mtp1,list.of.rectangles.mtp2,list.of.rectangles.mtp3,list.of.rectangles.mtp4)));
    number_rectangles_by_actions_for_decision <- c(number_rectangles_by_actions_for_decision,switch(d_type[d],number_rectangles_by_actions_for_decision_type1,number_rectangles_by_actions_for_decision_type2,number_rectangles_by_actions_for_decision_type3,number_rectangles_by_actions_for_decision_type4))
  }
  #Can replace above loop if desired to have tailored rectangle sets for each possible enrollment choice.
  #save(list.of.rectangles.dec,file="list_of_rectangles.rdata")

  # compute number of variables in linear program
  number_of_variables <- 0
  current_rectangle <-1
  for(r in list.of.rectangles.dec){
    if(r$preset_decision == 0){# we do not encode variables corresponding to decision region rectangles that are preset, i.e, where preset_decision==1
      list.of.rectangles.dec[[current_rectangle]]$stage_1_rectangle_offset <- number_of_variables
      for(d in decisions){
        if(sum(d==r$allowed_decisions)>0){
          list.of.rectangles.dec[[current_rectangle]]$stage_1_rectangle_and_decision_offset[[d]] <- number_of_variables
          number_of_variables <- number_of_variables+number_rectangles_by_actions_for_decision[d]
        } else {list.of.rectangles.dec[[current_rectangle]]$stage_1_rectangle_and_decision_offset[[d]] <- NA}
      }
    }
    current_rectangle <- current_rectangle+1
  }

  variable_location <- function(r,d,rprime,action){return(r$stage_1_rectangle_and_decision_offset[d]+rprime$stage_2_rectangle_offset+which(rprime$allowed_actions==action))}

  #compute number_equality_constraints_part2
  number_equality_constraints_part2 <- 0
  for(r in list.of.rectangles.dec){
    if(r$preset_decision==0){
      for(d in r$allowed_decisions){
        number_equality_constraints_part2<-number_equality_constraints_part2+(length(list.of.rectangles.mtp[[d]])-1)
      }
    }
  }

  #print("number of variables")
  #print(number_of_variables)
  #print("number of familywise Type I error constraints")
  #print(length(ncp.list))
  number_equality_constraints_part1 <- length(list.of.rectangles.dec)-number_preset_decision_rectangles
  write(number_equality_constraints_part1,file=paste("number_equality_constraints_of_first_type.txt"))
  #print("number of equality constraints of first type")
  #print(number_equality_constraints_part1)
  #print("number of equality constraints of second type")
  #print(number_equality_constraints_part2)
  write(number_equality_constraints_part2,file =paste("number_equality_constraints_of_second_type.txt"))
  write(length(ncp.list),file =paste("number_A1_constraints.txt"))
  write(ceiling(length(ncp.list)/constraints_per_A1_file),file =paste("number_A1_files.txt"))
  power.constraints.matrix <- power.constraints
  power.constraints <- as.vector(power.constraints)
  save(power.constraints,file="power_constraints.rdata")
  save(list.of.rectangles.mtp,file=paste("list.of.rectangles.mtp",LP.iteration,".rdata",sep=""))

  ## Includes constraints the restrict multiple testing procedure not depend on stage 1 statistics (given cumulative statistics Z^C and decision d)
  ## Implements unequal rectangle sizes, with setting of rectangles to specific values
  ## Generate Discretized LP based on settings in file

  generate_LP <- function(task_id,...){
    max_task_id_for_computing_FWER_constraints <- ceiling(length(ncp.list)/constraints_per_A1_file)
    #print("max_task_id_for_computing_FWER_constraints")
    #print(max_task_id_for_computing_FWER_constraints)

    if(task_id <= max_task_id_for_computing_FWER_constraints){
      ## Construct Familywise Type I error constraints
      constraint_list <- c()
      counter <- 1
      max_error_prob <- 0

      for(ncp in ncp.list[((task_id-1)*constraints_per_A1_file+1):min(length(ncp.list),((task_id)*constraints_per_A1_file))])
      {
        # construct vectors for storing reallocated probabilities for preset decision rectangles
        for(d in decisions){
          for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
            list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- rep(0,length(list.of.rectangles.mtp[[d]][[rprime_counter]]$allowed_actions))
          }
        }

        rvec <- c()
        ## Construct constraint that P_{ncp}(Type I error) \leq \alpha
        true_nulls_violated_by_action_set <- map_from_P_to_type_I_error_indicator_over_set_of_actions(ncp)
        for(r in list.of.rectangles.dec[1:(length(list.of.rectangles.dec)-number_reference_rectangles)]){
          for(d in r$allowed_decisions){
            if(d_type[d]==1){
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## Get P_{ncp_vec}(Z \in rectangle r)
                prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=c(r$lower_boundaries,rprime$lower_boundaries),upper=c(r$upper_boundaries,rprime$upper_boundaries),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                if(r$preset_decision==0){rvec <- c(rvec,prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions])}else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions]
                }
              }
            } else if(d_type[d]==2) {
              ## Get P_{ncp_vec}(Z \in rectangle r)
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                intersection_rectangle_lower_boundaries <- pmax(r$lower_boundaries,rprime$lower_boundaries)
                intersection_rectangle_upper_boundaries <- pmin(r$upper_boundaries,rprime$upper_boundaries)
                if(sum(intersection_rectangle_lower_boundaries < intersection_rectangle_upper_boundaries)==2){ #case where intersection rectangle non-empty
                  prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=intersection_rectangle_lower_boundaries,upper=intersection_rectangle_upper_boundaries,algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){rvec <- c(rvec,prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions])}else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions]
                }
              }
            } else if(d_type[d]==3) {
              r_lower_boundary_z_2 <- r$lower_boundaries[2]
              r_upper_boundary_z_2 <- r$upper_boundaries[2]
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                effective_z_2_lower_boundary <- max(c(r_lower_boundary_z_2,rprime$lower_boundaries[2]))
                effective_z_2_upper_boundary <- min(c(r_upper_boundary_z_2,rprime$upper_boundaries[2]))
                if(effective_z_2_lower_boundary < effective_z_2_upper_boundary){ # case where rectangle non-empty
                  ## Get P_{ncp_vec}(Z \in rectangle r)
                  prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=c(r$lower_boundaries[1],effective_z_2_lower_boundary,rprime$lower_boundaries[1]),upper=c(r$upper_boundaries[1],effective_z_2_upper_boundary,rprime$upper_boundaries[1]),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){rvec <- c(rvec,prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions])}else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions]
                }
              }
            } else if(d_type[d]==4) { #only subpopulation 2 enrolled further
              r_lower_boundary_z_1 <- r$lower_boundaries[1]
              r_upper_boundary_z_1 <- r$upper_boundaries[1]
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                effective_z_1_lower_boundary <- max(c(r_lower_boundary_z_1,rprime$lower_boundaries[1]))
                effective_z_1_upper_boundary <- min(c(r_upper_boundary_z_1,rprime$upper_boundaries[1]))
                if(effective_z_1_lower_boundary < effective_z_1_upper_boundary){ # case where rectangle non-empty
                  ## Get P_{ncp_vec}(Z \in rectangle r)
                  prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=c(effective_z_1_lower_boundary,r$lower_boundaries[2],rprime$lower_boundaries[2]),upper=c(effective_z_1_upper_boundary,r$upper_boundaries[2],rprime$upper_boundaries[2]),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){rvec <- c(rvec,prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions])}else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*true_nulls_violated_by_action_set[rprime$allowed_actions]
                }
              }
            }
          }
        }
        #Handle reference rectangle contributions
        for(r in list.of.rectangles.dec[(length(list.of.rectangles.dec)+1-number_reference_rectangles):length(list.of.rectangles.dec)]){
          for(d in r$allowed_decisions){
            for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
              rvec <- c(rvec,list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles)
            }
          }
        }

        constraint_list <- rbind(constraint_list,rvec)
        #print(counter)
        #	if(floor(counter/10) == counter/10) {print(paste(counter,"out of ",length(ncp.list),"familywise Type I error constraints generated"))}
        #print(sum(constraint_list[counter,]))
        counter <- counter + 1
      }

      # print estimated maximum error in computing bivariate normal probabilities using mvtnorm::pmvnorm:
      #print("Max Error in Multivariate Normal Computation")
      #print(max_error_prob); save(max_error_prob,file=paste("max_error_prob",task_id,".rdata",sep=""));

      # record components of discretized linear program
      save(constraint_list,file=paste("A1",task_id,".rdata",sep=""))

      if(task_id==1 && dim(constraint_list)[2]!=number_of_variables){print("error in construction of constraints");write(1,file="error_flag")} else{write(dim(constraint_list)[2],file =paste("number_variables.txt"))}

    } else if(task_id==max_task_id_for_computing_FWER_constraints+1){# construct objective function vector


      objective_function_vector <- rep(0,number_of_variables)
      counter <- 1
      for(ncp in prior_mean_support)
      {
        # construct vectors for storing reallocated probabilities for preset decision rectangles
        for(d in decisions){
          for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
            list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- rep(0,length(list.of.rectangles.mtp[[d]][[rprime_counter]]$allowed_actions))
          }
        }

        contribution_to_risk <- c()
        for(r in list.of.rectangles.dec[1:(length(list.of.rectangles.dec)-number_reference_rectangles)]){
          for(d in r$allowed_decisions){
            joint_distribution_parameters <- modified_joint_distribution(prior_component_index=counter,decision=d)
            modified_mean_vector <- joint_distribution_parameters[[1]]
            modified_covariance_matrix <- joint_distribution_parameters[[2]]
            if(d_type[d]==1){
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## Get P_{ncp_vec}(Z \in rectangle r)
                prob_r <- mvtnorm::pmvnorm(mean=modified_mean_vector,sigma=modified_covariance_matrix,lower=c(r$lower_boundaries,rprime$lower_boundaries),upper=c(r$upper_boundaries,rprime$upper_boundaries),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                if(r$preset_decision==0){contribution_to_risk <- c(contribution_to_risk,prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions])}else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions]}
              }
            } else if(d_type[d]==2) {
              ## Get P_{ncp_vec}(Z \in rectangle r)
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                intersection_rectangle_lower_boundaries <- pmax(r$lower_boundaries,rprime$lower_boundaries)
                intersection_rectangle_upper_boundaries <- pmin(r$upper_boundaries,rprime$upper_boundaries)
                if(sum(intersection_rectangle_lower_boundaries < intersection_rectangle_upper_boundaries)==2){ #case where intersection rectangle non-empty
                  prob_r <- mvtnorm::pmvnorm(mean=modified_mean_vector,sigma=modified_covariance_matrix,lower=intersection_rectangle_lower_boundaries,upper=intersection_rectangle_upper_boundaries,algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){contribution_to_risk <- c(contribution_to_risk,prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions])}else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions]}
              }
            } else if(d_type[d]==3) {
              r_lower_boundary_z_2 <- r$lower_boundaries[2]
              r_upper_boundary_z_2 <- r$upper_boundaries[2]
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                effective_z_2_lower_boundary <- max(c(r_lower_boundary_z_2,rprime$lower_boundaries[2]))
                effective_z_2_upper_boundary <- min(c(r_upper_boundary_z_2,rprime$upper_boundaries[2]))
                if(effective_z_2_lower_boundary < effective_z_2_upper_boundary){ # case where rectangle non-empty
                  ## Get P_{ncp_vec}(Z \in rectangle r)
                  prob_r <- mvtnorm::pmvnorm(mean=modified_mean_vector,sigma=modified_covariance_matrix,lower=c(r$lower_boundaries[1],effective_z_2_lower_boundary,rprime$lower_boundaries[1]),upper=c(r$upper_boundaries[1],effective_z_2_upper_boundary,rprime$upper_boundaries[1]),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){contribution_to_risk <- c(contribution_to_risk,prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions])}else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions]}
              }
            } else if(d_type[d]==4) { #only subpopulation 2 enrolled further
              r_lower_boundary_z_1 <- r$lower_boundaries[1]
              r_upper_boundary_z_1 <- r$upper_boundaries[1]
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                effective_z_1_lower_boundary <- max(c(r_lower_boundary_z_1,rprime$lower_boundaries[1]))
                effective_z_1_upper_boundary <- min(c(r_upper_boundary_z_1,rprime$upper_boundaries[1]))
                if(effective_z_1_lower_boundary < effective_z_1_upper_boundary){ # case where rectangle non-empty
                  ## Get P_{ncp_vec}(Z \in rectangle r)
                  prob_r <- mvtnorm::pmvnorm(mean=modified_mean_vector,sigma=modified_covariance_matrix,lower=c(effective_z_1_lower_boundary,r$lower_boundaries[2],rprime$lower_boundaries[2]),upper=c(effective_z_1_upper_boundary,r$upper_boundaries[2],rprime$upper_boundaries[2]),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){contribution_to_risk <- c(contribution_to_risk,prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions])}else{list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles + r$preset_decision_value[d]*prob_r*prior_weights[counter]*loss_function_value[d,rprime$allowed_actions]}
              }
            }
          }
        }

        #Handle reference rectangle contributions
        for(r in list.of.rectangles.dec[(length(list.of.rectangles.dec)+1-number_reference_rectangles):length(list.of.rectangles.dec)]){
          for(d in r$allowed_decisions){
            for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
              contribution_to_risk <- c(contribution_to_risk,list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles)
            }
          }
        }

        objective_function_vector <- objective_function_vector + contribution_to_risk
        counter <- counter + 1
      }

      if(length(objective_function_vector) !=number_of_variables){print("error in construction of constraints");write(2,file="error_flag")} else{save(objective_function_vector,file=paste("c.rdata"))}

      # print estimated maximum error in computing bivariate normal probabilities using mvtnorm::pmvnorm:
      #print("Max Error in Multivariate Normal Computation")
      #print(max_error_prob); save(max_error_prob,file=paste("max_error_prob",task_id,".rdata",sep=""));


    } else if(task_id==max_task_id_for_computing_FWER_constraints+2){
      ##
      ## Construct Sparse Constraints corresponding to (26) and (27) in paper
      ##

      # Generate matrix A^2:
      # Format: (row,column,value)
      # First generate equality constraints representing (24)
      equality_constraints_part1 <- array(0,c(length(list.of.rectangles.dec)*number_decisions*number_actions,3))
      counter <- 1
      constraint_number <- 1
      for(r in list.of.rectangles.dec){
        if(r$preset_decision==0){
          for(d in r$allowed_decisions){
            # use first rectangle in list.of.rectangles.mtp[[d]] as rprime_d in constraint
            rprime <- list.of.rectangles.mtp[[d]][[1]]
            for(action in rprime$allowed_actions){
              equality_constraints_part1[counter,] <- c(constraint_number,variable_location(r,d,rprime,action),1)
              counter <- counter+1
            }
          }
          constraint_number <- constraint_number+1
        }
      }
      equality_constraints_part1 <- equality_constraints_part1[1:(counter-1),]

      # Next generate equality constraints representing (25)
      total_number_stage2_rectangles <- 0
      for(d in decisions){total_number_stage2_rectangles <- total_number_stage2_rectangles + length(list.of.rectangles.mtp[[d]])}

      equality_constraints_part2 <- array(0,c(length(list.of.rectangles.dec)*total_number_stage2_rectangles*2*number_actions,3))
      counter <- 1
      for(r in list.of.rectangles.dec){
        if(r$preset_decision==0){
          for(d in r$allowed_decisions){
            # use first rectangle in list.of.rectangles.mtp[[d]] as rprime_d in constraint
            rprime_1 <- list.of.rectangles.mtp[[d]][[1]]
            for(rprime_2 in list.of.rectangles.mtp[[d]][2:length(list.of.rectangles.mtp[[d]])]){
              for(action in rprime_1$allowed_actions){
                equality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime_1,action),1)
                counter <- counter+1
              }
              for(action in rprime_2$allowed_actions){
                equality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime_2,action),-1)
                counter <- counter+1
              }
              constraint_number <- constraint_number+1
            }
          }
        }
      }

      equality_constraints_part2 <- equality_constraints_part2[1:(counter-1),]

      equality_constraints <- rbind(equality_constraints_part1,equality_constraints_part2)

      # Check FWER constraints have correct number of variables
      if((constraint_number-1)!=(number_equality_constraints_part1+number_equality_constraints_part2)){print("error in construction of constraints");write(3,file="error_flag")} else if(max(equality_constraints[,2]) > number_of_variables){print("ERROR in Generating Equality Constraints: max variable number exceeded");write(4,file="error_flag")} else{
        save(equality_constraints,file=paste("A2.rdata")) #note: these are in sparse matrix format
      }
    } else if(task_id==max_task_id_for_computing_FWER_constraints+3){ #Power constraint (uses vector indicator_contribute_to_combined_pop_power)

      power_constraint_matrix_H01 <- c()
      power_constraint_matrix_H02 <- c()
      power_constraint_matrix_H0C <- c()

      power_constraint_counter <- 1
      for(ncp in power_constraint_list){
        power_constraint_vector_H01 <- power_constraint_vector_H02 <- power_constraint_vector_H0C <- c();
        #    indicator_contribute_to_power <- power_constraint_null_hyp_contribution_list[[power_constraint_counter]]
        # construct vectors for storing reallocated probabilities for preset decision rectangles
        for(d in decisions){
          for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
            list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 <- rep(0,length(list.of.rectangles.mtp[[d]][[rprime_counter]]$allowed_actions));
            list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 <- rep(0,length(list.of.rectangles.mtp[[d]][[rprime_counter]]$allowed_actions));
            list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C <- rep(0,length(list.of.rectangles.mtp[[d]][[rprime_counter]]$allowed_actions));
          }
        }
        for(r in list.of.rectangles.dec[1:(length(list.of.rectangles.dec)-number_reference_rectangles)]){
          for(d in r$allowed_decisions){
            if(d_type[d]==1){
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## Get P_{ncp_vec}(Z \in rectangle r)
                prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=c(r$lower_boundaries,rprime$lower_boundaries),upper=c(r$upper_boundaries,rprime$upper_boundaries),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000)); if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                if(r$preset_decision==0){
                  power_constraint_vector_H01 <- c(power_constraint_vector_H01,prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions]);
                  power_constraint_vector_H02 <- c(power_constraint_vector_H02,prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions]);
                  power_constraint_vector_H0C <- c(power_constraint_vector_H0C,prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions]);
                }else{
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions];
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions];
                  list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions];
                }
              }
            } else if(d_type[d]==2) {
              ## Get P_{ncp_vec}(Z \in rectangle r)
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                intersection_rectangle_lower_boundaries <- pmax(r$lower_boundaries,rprime$lower_boundaries)
                intersection_rectangle_upper_boundaries <- pmin(r$upper_boundaries,rprime$upper_boundaries)
                if(sum(intersection_rectangle_lower_boundaries < intersection_rectangle_upper_boundaries)==2){ #case where intersection rectangle non-empty
                  prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=intersection_rectangle_lower_boundaries,upper=intersection_rectangle_upper_boundaries,algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){
                  power_constraint_vector_H01 <- c(power_constraint_vector_H01,prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions]);
                  power_constraint_vector_H02 <- c(power_constraint_vector_H02,prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions]);
                  power_constraint_vector_H0C <- c(power_constraint_vector_H0C,prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions]);} else{
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions];
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions];
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions];
                  }
              }
            } else if(d_type[d]==3) {
              r_lower_boundary_z_2 <- r$lower_boundaries[2]
              r_upper_boundary_z_2 <- r$upper_boundaries[2]
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                effective_z_2_lower_boundary <- max(c(r_lower_boundary_z_2,rprime$lower_boundaries[2]))
                effective_z_2_upper_boundary <- min(c(r_upper_boundary_z_2,rprime$upper_boundaries[2]))
                if(effective_z_2_lower_boundary < effective_z_2_upper_boundary){ # case where rectangle non-empty
                  ## Get P_{ncp_vec}(Z \in rectangle r)
                  prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=c(r$lower_boundaries[1],effective_z_2_lower_boundary,rprime$lower_boundaries[1]),upper=c(r$upper_boundaries[1],effective_z_2_upper_boundary,rprime$upper_boundaries[1]),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){
                  power_constraint_vector_H01 <- c(power_constraint_vector_H01,prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions]);
                  power_constraint_vector_H02 <- c(power_constraint_vector_H02,prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions]);
                  power_constraint_vector_H0C <- c(power_constraint_vector_H0C,prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions]);} else{
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions];
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions];
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions];
                  }
              }
            } else if(d_type[d]==4) { #only subpopulation 2 enrolled further
              r_lower_boundary_z_1 <- r$lower_boundaries[1]
              r_upper_boundary_z_1 <- r$upper_boundaries[1]
              for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
                rprime <- list.of.rectangles.mtp[[d]][[rprime_counter]]
                ## compute overlap in rectangles in terms of z_2:
                effective_z_1_lower_boundary <- max(c(r_lower_boundary_z_1,rprime$lower_boundaries[1]))
                effective_z_1_upper_boundary <- min(c(r_upper_boundary_z_1,rprime$upper_boundaries[1]))
                if(effective_z_1_lower_boundary < effective_z_1_upper_boundary){ # case where rectangle non-empty
                  ## Get P_{ncp_vec}(Z \in rectangle r)
                  prob_r <- mvtnorm::pmvnorm(mean=mean_vector(ncp,d),sigma=covariance_matrix[[d]],lower=c(effective_z_1_lower_boundary,r$lower_boundaries[2],rprime$lower_boundaries[2]),upper=c(effective_z_1_upper_boundary,r$upper_boundaries[2],rprime$upper_boundaries[2]),algorithm=mvtnorm::GenzBretz(abseps = 0.000000001,maxpts=100000))
                  if(attr(prob_r,"error") > max_error_prob){max_error_prob <- attr(prob_r,"error")}
                } else {# case where rectangle empty
                  prob_r <- 0
                }
                if(r$preset_decision==0){
                  power_constraint_vector_H01 <- c(power_constraint_vector_H01,prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions]);
                  power_constraint_vector_H02 <- c(power_constraint_vector_H02,prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions]);
                  power_constraint_vector_H0C <- c(power_constraint_vector_H0C,prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions]);} else{
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H01_power[rprime$allowed_actions];
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02 + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H02_power[rprime$allowed_actions];
                    list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C <- list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C + r$preset_decision_value[d]*prob_r*indicator_contribute_to_H0C_power[rprime$allowed_actions];
                  }
              }
            }
          }
        }
        #Handle reference rectangle contributions
        for(r in list.of.rectangles.dec[(length(list.of.rectangles.dec)+1-number_reference_rectangles):length(list.of.rectangles.dec)]){
          for(d in r$allowed_decisions){
            for(rprime_counter in 1:length(list.of.rectangles.mtp[[d]])){
              power_constraint_vector_H01 <- c(power_constraint_vector_H01,list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H01);
              power_constraint_vector_H02 <- c(power_constraint_vector_H02,list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H02);
              power_constraint_vector_H0C <- c(power_constraint_vector_H0C,list.of.rectangles.mtp[[d]][[rprime_counter]]$reallocated_probability_from_preset_decision_rectangles_H0C)
            }
          }
        }
        power_constraint_matrix_H01 <- rbind(power_constraint_matrix_H01,power_constraint_vector_H01);
        power_constraint_matrix_H02 <- rbind(power_constraint_matrix_H02,power_constraint_vector_H02);
        power_constraint_matrix_H0C <- rbind(power_constraint_matrix_H0C,power_constraint_vector_H0C);
        power_constraint_counter <- power_constraint_counter+1
      }
      save(power_constraint_matrix_H01,power_constraint_matrix_H02,power_constraint_matrix_H0C,file=paste("A3.rdata"))

      # print estimated maximum error in computing bivariate normal probabilities using mvtnorm::pmvnorm:
      #print("Max Error in Multivariate Normal Computation")
      #print(max_error_prob); save(max_error_prob,file=paste("max_error_prob",task_id,".rdata",sep=""));

    } else if(task_id==max_task_id_for_computing_FWER_constraints+4){
      # Generate sparse inequality constraints (32) and (33) in Section 4.1 that restrict multiple testing procedure not depend on stage 1 statistics (given cumulative statistics Z^C and decision d)

      additional_inequality_constraints_part1 <- array(0,c(length(list.of.rectangles.dec)*sum(number_rectangles_by_actions_for_decision)*(2+number_actions*(number_decisions-1)),3))
      constraint_number <- 1
      counter <- 1
      for(r in list.of.rectangles.dec[1:(length(list.of.rectangles.dec)-number_reference_rectangles)]){
        if(r$preset_decision==0){
          for(d in r$allowed_decisions){
            ## first set r_reference to be reference rectangle corresponding to r
            r_reference<-list.of.rectangles.dec[[length(list.of.rectangles.dec)-number_reference_rectangles+d]]
            for(rprime in list.of.rectangles.mtp[[d]]){
              for(action in rprime$allowed_actions){
                # Constraint (33):
                additional_inequality_constraints_part1[counter,] <- c(constraint_number,variable_location(r_reference,d,rprime,action),1)
                counter <- counter +1
                additional_inequality_constraints_part1[counter,] <- c(constraint_number,variable_location(r,d,rprime,action),-1)
                counter <- counter +1
                r_tilde_position <- r$position_offset
                for(d_tilde in r$allowed_decisions){
                  if(d_tilde != d){
                    rprime_tilde <- list.of.rectangles.mtp[[d_tilde]][[1]]
                    for(action in rprime_tilde$allowed_actions){
                      additional_inequality_constraints_part1[counter,] <- c(constraint_number,variable_location(r,d_tilde,rprime_tilde,action),-1)
                      counter <- counter +1
                    }
                  }
                }
                constraint_number <- constraint_number+1
              }
            }
          }

        }
      }

      if(counter>1){
        # previous output in case nothing generated: {additional_inequality_constraints_part1 <- c(); save(additional_inequality_constraints_part1,file=paste("Inequality_Constraints_to_Restrict_MTP_to_Sufficient_Statistics.rdata"))}

        additional_inequality_constraints_part1 <- additional_inequality_constraints_part1[1:(counter-1),]
        if(max(additional_inequality_constraints_part1[,2]) > number_of_variables){print("ERROR in Generating Inequality_Constraints_to_Restrict_MTP_to_Sufficient_Statistics");write(5,file="error_flag")} else{
          save(additional_inequality_constraints_part1,file=paste("Inequality_Constraints_to_Restrict_MTP_to_Sufficient_Statistics.rdata"))}
      }

    } else if(task_id==max_task_id_for_computing_FWER_constraints+5){                                        # Generate sparse inequality constraints (34) and (35) in Section 4.1 that represent monotonicity constraints in the multiple testing procedure

      additional_inequality_constraints_part2 <- array(0,c(sum(number_rectangles_by_actions_for_decision)*2*number_actions*4,3))
      constraint_number <- 1
      counter <- 1
      for(r in list.of.rectangles.dec[(length(list.of.rectangles.dec)-number_reference_rectangles+1):length(list.of.rectangles.dec)]){## only need to set for reference rectangles
        for(d in r$allowed_decisions){
          for(rprime in list.of.rectangles.mtp[[d]]){
            if((sum(rprime$allowed_actions==2)>0 || sum(rprime$allowed_actions==5)>0 || sum(rprime$allowed_actions==7)>0) && !is.null(rprime$right_neighbor)){ # if possitlbe to reject H01 and exists a right neighbor
              for(action in c(2,5,7)){
                if(sum(rprime$allowed_actions==action)>0){
                  additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime,action),1)
                  counter <- counter +1
                }
              }
              rprime_right_neighbor <-  list.of.rectangles.mtp[[d]][[rprime$right_neighbor]]
              for(action in c(2,5,7)){
                if(sum(rprime_right_neighbor$allowed_actions==action)>0){
                  additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime_right_neighbor,action),-1)
                  counter <- counter +1
                }
              }
              constraint_number <- constraint_number+1
            }
            if((sum(rprime$allowed_actions==3)>0 || sum(rprime$allowed_actions==6)>0 || sum(rprime$allowed_actions==7)>0) && !is.null(rprime$upper_neighbor)){ # if possitlbe to reject H02 and exists a upper neighbor
              for(action in c(3,6,7)){
                if(sum(rprime$allowed_actions==action)>0){
                  additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime,action),1)
                  counter <- counter +1
                }
              }
              rprime_upper_neighbor <-  list.of.rectangles.mtp[[d]][[rprime$upper_neighbor]]
              for(action in c(3,6,7)){
                if(sum(rprime_upper_neighbor$allowed_actions==action)>0){
                  additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime_upper_neighbor,action),-1)
                  counter <- counter +1
                }
              }
              constraint_number <- constraint_number+1
            }

            if((sum(rprime$allowed_actions==4)>0 || sum(rprime$allowed_actions==5)>0 || sum(rprime$allowed_actions==6)>0 || sum(rprime$allowed_actions==7)>0) && (!is.null(rprime$right_neighbor) || !!is.null(rprime$upper_neighbor))){# if possible to reject H0C and exists a right or upper neighbor
              if(!is.null(rprime$right_neighbor)){
                for(action in c(4,5,6,7)){
                  if(sum(rprime$allowed_actions==action)>0){
                    additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime,action),1)
                    counter <- counter +1
                  }
                }
                rprime_right_neighbor <-  list.of.rectangles.mtp[[d]][[rprime$right_neighbor]]
                for(action in c(4,5,6,7)){
                  if(sum(rprime_right_neighbor$allowed_actions==action)>0){
                    additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime_right_neighbor,action),-1)
                    counter <- counter +1
                  }
                }
                constraint_number <- constraint_number+1
              }

              if(!is.null(rprime$upper_neighbor)){
                for(action in c(4,5,6,7)){
                  if(sum(rprime$allowed_actions==action)>0){
                    additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime,action),1)
                    counter <- counter +1
                  }
                }
                rprime_upper_neighbor <-  list.of.rectangles.mtp[[d]][[rprime$upper_neighbor]]
                for(action in c(4,5,6,7)){
                  if(sum(rprime_upper_neighbor$allowed_actions==action)>0){
                    additional_inequality_constraints_part2[counter,] <- c(constraint_number,variable_location(r,d,rprime_upper_neighbor,action),-1)
                    counter <- counter +1
                  }
                }
                constraint_number <- constraint_number+1
              }
            }
          }
        }
      }
      additional_inequality_constraints_part2 <- additional_inequality_constraints_part2[1:(counter-1),]
      if(max(additional_inequality_constraints_part2[,2]) > number_of_variables){print("ERROR in Generating Inequality_Constraints_to_set_monotonicity_in_hypotheses_rejected");write(6,file="error_flag")} else{
        save(additional_inequality_constraints_part2,file=paste("Inequality_Constraints_to_set_monotonicity_in_hypotheses_rejected.rdata"))}
    }
  }

  #
  # Generate Linear program in parallel
  #

  number_jobs <- ceiling(length(ncp.list)/constraints_per_A1_file)+6
  if(type.of.LP.solver != "test_version"){
    parallel::mclapply(c((number_jobs-5):number_jobs,1:(number_jobs-6)),generate_LP,mc.cores=number.cores)} # order of jobs puts computation of power constraints and objective function first since they take longer to compute

  if(type.of.LP.solver=="matlab" || type.of.LP.solver=="cplex"){

    # Convert linear program to matlab format

    R.matlab::writeMat("alphaValue.mat",alphaValue=total.alpha)

    number_A1_files <- scan("number_A1_files.txt")
    A1 = numeric(0)
    for(i in 1:number_A1_files){
      tmp = load(paste("A1",i,".rdata",sep=""))
      tmp = constraint_list
      R.matlab::writeMat(paste("A1",i,".mat",sep=""),tmp=tmp)
    }

    tmp = load("A2.rdata")
    tmp = equality_constraints
    R.matlab::writeMat("A2.mat",A2 =tmp)

    tmp = load("A3.rdata")
    A3  = rbind(power_constraint_matrix_H01, power_constraint_matrix_H02,power_constraint_matrix_H0C)
    R.matlab::writeMat("A3.mat",A3=A3)

    tmp = load("power_constraints.rdata")
    a3  = power.constraints
    R.matlab::writeMat("a3.mat",a3=a3)

    try((tmp = load("Inequality_Constraints_to_Restrict_MTP_to_Sufficient_Statistics.rdata")),silent=TRUE)
    if(any(ls()=="additional_inequality_constraints_part1")){
      tmp = additional_inequality_constraints_part1
      col = read.table("number_variables.txt")
      col = col$V1
      R.matlab::writeMat("A4.mat",A4=tmp)
      rm(additional_inequality_constraints_part1)
      R.matlab::writeMat("a4status.mat",a4status=1)
    } else {R.matlab::writeMat("a4status.mat",a4status=0)}

    tmp11 = read.table("number_equality_constraints_of_first_type.txt")
    tmp11 = tmp11$V1
    R.matlab::writeMat("a21.mat",a21 = tmp11)

    R.matlab::writeMat("iteration.mat",iteration = LP.iteration)

    tmp = load("Inequality_Constraints_to_set_monotonicity_in_hypotheses_rejected.rdata")
    tmp = additional_inequality_constraints_part2

    R.matlab::writeMat("A5.mat",A5=tmp)

    tmp = load("c.rdata")
    obj = objective_function_vector
    R.matlab::writeMat("cc.mat",cc = obj)

    # Solve linear program
    # Previously used call: system('matlab -nojvm -r "cplex_optimize_multiple_testing_procedure()" > output_LP_solver')

    package_name = "AdaptiveDesignOptimizerSparseLP";
    if(type.of.LP.solver=="cplex"){# Solve linear program by call to cplex via matlab
      path_to_file = system.file("cplex", "cplex_optimize_multiple_testing_procedure.m", package = package_name);
      function_to_call = "cplex_optimize_multiple_testing_procedure()";} else if(type.of.LP.solver=="matlab"){# Solve linear program by call to matlab
        path_to_file = system.file("matlab", "matlab_optimize_multiple_testing_procedure.m", package = package_name);
        function_to_call = "matlab_optimize_multiple_testing_procedure()";}
    matlab_add_path = dirname(path_to_file);
    if(!is.null(LP.solver.path)){
      matlabcode = c(
        paste0("addpath(genpath('", LP.solver.path, "'))"),
        paste0("addpath(genpath('", matlab_add_path, "'))"),
        function_to_call)} else {
          matlabcode = c(
            paste0("addpath(genpath('", matlab_add_path, "'))"),
            function_to_call)}
    output_matlab = capture.output(matlabr::run_matlab_code(matlabcode));
    # Extract results from linear program solver and examine whether feasible solution was found
    sln = R.matlab::readMat(paste("sln2M",LP.iteration,".mat",sep=""))}
  #system('matlab -nojvm -r "cplex_optimize_design()" > output_LP_solver')
  else if(type.of.LP.solver=="test_version"){
    package_name = "AdaptiveDesignOptimizerSparseLP";
    path_to_file = system.file("cplex", "cplex_optimize_multiple_testing_procedure.m", package = package_name);  matlab_add_path = dirname(path_to_file);
    function_to_call = "cplex_optimize_multiple_testing_procedure()";
    if(!is.null(LP.solver.path)){
      matlabcode = c(
        paste0("addpath(genpath('", LP.solver.path, "'))"),
        paste0("addpath(genpath('", matlab_add_path, "'))"),
        function_to_call)} else {
          matlabcode = c(
            paste0("addpath(genpath('", matlab_add_path, "'))"),
            function_to_call)}
    output_matlab = capture.output(matlabr::run_matlab_code(matlabcode));
    # Extract results from linear program solver and examine whether feasible solution was found
    sln = R.matlab::readMat(paste("sln2M",LP.iteration,".mat",sep=""))} else{print("Sorry, this function is only available for use with Matlab or CPLEX"); return(0);}

  save(sln,file=paste("sln2M",LP.iteration,".rdata",sep=""))
  input.parameters <- list(subpopulation.1.proportion,total.alpha,data.generating.distributions,stage.1.sample.sizes,stage.2.sample.sizes.per.enrollment.choice,objective.function.weights,power.constraints,type.of.LP.solver,discretization.parameter,number.cores,ncp.list,list.of.rectangles.dec,LP.iteration,prior.covariance.matrix,round.each.multiple.testing.procedure.rectangle.to.integer,plots.to.round.simply,rounding.threshold.H01,rounding.threshold.H02,rounding.threshold.H0C,power.constraint.tolerance,LP.solver.path);
  names(input.parameters) <- list("subpopulation.1.proportion","total.alpha","data.generating.distributions","stage.1.sample.sizes","stage.2.sample.sizes.per.enrollment.choice","objective.function.weights","power.constraints","type.of.LP.solver","discretization.parameter","number.cores","ncp.list","list.of.rectangles.dec","LP.iteration","prior.covariance.matrix","round.each.multiple.testing.procedure.rectangle.to.integer","plots.to.round.simply","rounding.threshold.H01","rounding.threshold.H02","rounding.threshold.H0C","power.constraint.tolerance","LP.solver.path");
  optimized.policy <- extract_solution(list.of.rectangles.dec,decisions,list.of.rectangles.mtp,actions,sln);
  save(optimized.policy,input.parameters,list.of.rectangles.dec,list.of.rectangles.mtp,ncp.list,sln,file=paste("optimized.design",LP.iteration,".rdata",sep=""))
  print(paste("Adaptive Design Optimization Completed. Optimal design is stored in the file: optimized_design",LP.iteration,".rdata",sep=""))

  if(((type.of.LP.solver=="matlab" || type.of.LP.solver=="cplex" || type.of.LP.solver=="test_version") && (sln$status==1 || sln$status==5 )) || (type.of.LP.solver=="gurobi" && sln$status == "OPTIMAL")){
    print(paste("Feasible Solution was Found"))
    print("Fraction of solution components with integral value solutions")
    print(round(sum(sln$z>1-1e-10 | sln$z<10e-10)/length(sln$z),3))
  } else {print("Problem was Infeasible"); print("Linear program exit status"); print(sln$status);
    print("Please consider modifying the problem inputs, e.g., by relaxing the power constraints or by increasing the sample size, and submitting a new problem. Thank you for using this trial design optimizer.")
    stop();}

  if(round.each.multiple.testing.procedure.rectangle.to.integer){## If Final iteration, round solution and save; only does this if decision rule was rounded and set to be deterministic

    z_solution <- sln$z

    postscript(paste("decision_rule.eps"),height=8,horizontal=FALSE,onefile=FALSE,width=8)
    par(mfrow=c(1,1),las=1)
    par(mar=c(7,6.5,5.6,2.1))
    plot(0,type="n",xaxt="n",xlim=c(-3,3),ylim=c(-3,3),main="Decision Regions for Stage 2 Enrollment",xlab=expression(paste(Z[1]^1)),ylab=expression(paste(Z[2]^1)),cex.lab=2, cex.axis=2, cex.main=2, cex.sub=2)
    axis(1,at=seq(-3,3,by=1),labels=(-3:3),cex.axis=2)
    axis(2,at=seq(-3,3,by=1),labels=(-3:3),cex.axis=2)
    par(las=0)
    for(r in list.of.rectangles.dec){
      tau <- 0
      rect(max(r$lower_boundaries[1]-tau,-10),max(r$lower_boundaries[2]-tau,-10),min(r$upper_boundaries[1]+tau,10),min(r$upper_boundaries[2]+tau,10),col=r$allowed_decisions, border=NA)
      #print(r$allowed_decisions)
    }
    decision_list <- c()
    for(d in decisions){decision_list <- c(decision_list,list(as.vector(stage.2.sample.sizes.per.enrollment.choice[d,])))}
    legend("bottomleft",legend=decision_list,title = "Stage 2 sample size per subpop.",pch=rep(15,length(decisions)),col=decisions,cex=1,bg="white")
    dev.off()

    ## Loop to create rounded version of multiple testing procedure
    load("A3.rdata") # load power constraints
    z_rounded <- z_solution
    z_integral_components <- rep(0,length(z_solution))

    round_up_fractional_H0C_extra_step <- FALSE
    if(round_up_fractional_H0C_extra_step){
      while(any(power_constraint_matrix_H0C %*% z_integral_components < power.constraints.matrix[,3]-power.constraint.tolerance))
      {
        #print(power_constraint_matrix_H0C[4,] %*% z_integral_components)
        for(d_plot in decisions){
          r_reference_index <- length(list.of.rectangles.dec) - number_reference_rectangles + d_plot
          r_reference <- list.of.rectangles.dec[[r_reference_index]]
          for(d in r_reference$allowed_decisions){
            for(rprime in list.of.rectangles.mtp[[d]]){
              variable_start_position <- variable_location(r_reference,d,rprime,rprime$allowed_actions[1])
              variable_end_position <- variable_location(r_reference,d,rprime,rprime$allowed_actions[length(rprime$allowed_actions)])
              action_indicator <- z_rounded[variable_start_position:variable_end_position];
              if(sum(action_indicator[c(2,5,7)])>rounding.threshold.H01){H01_reject <- 1}else{H01_reject <- 0}
              if(sum(action_indicator[c(3,6,7)])>rounding.threshold.H02){H02_reject <- 1}else{H02_reject <- 0}
              if(any(d_plot == plots.to.round.simply)){# Round using input thresholds, irrespective of neighbor rectangles
                if((H01_reject && H02_reject) || sum(action_indicator[c(4,5,6,7)])>1-1e-10){H0C_reject <- 1}else{H0C_reject <- 0}
                deterministic_action <- ifelse((!H01_reject) && (!H02_reject) && (!H0C_reject),1,
                                               ifelse((H01_reject) && (!H02_reject) && (!H0C_reject),2,
                                                      ifelse((!H01_reject) && (H02_reject) && (!H0C_reject),3,
                                                             ifelse((!H01_reject) && (!H02_reject) && (H0C_reject),4,
                                                                    ifelse((H01_reject) && (!H02_reject) && (H0C_reject),5,
                                                                           ifelse((!H01_reject) && (H02_reject) && (H0C_reject),6,
                                                                                  ifelse((H01_reject) && (H02_reject) && (H0C_reject),7,8)))))));
                z_integral_components[variable_start_position:variable_end_position] <- z_rounded[variable_start_position:variable_end_position] <- rep(0,variable_end_position-variable_start_position+1);
                z_integral_components[variable_start_position+deterministic_action - 1] <- z_rounded[variable_start_position+deterministic_action - 1] <- 1;
              } else{
                if(sum(action_indicator>1-1e-10 | action_indicator<10e-10)==7){
                  deterministic_action <- which(action_indicator==max(action_indicator));
                  z_integral_components[variable_start_position:variable_end_position] <- z_rounded[variable_start_position:variable_end_position] <- rep(0,variable_end_position-variable_start_position+1);
                  z_integral_components[variable_start_position+deterministic_action - 1] <- z_rounded[variable_start_position+deterministic_action - 1] <- 1;} else{
                    # need to consider rounding for H0C
                    # If if fractional H0C component above threhold, and if upper neighbor and right neighbor reject H0C, then reject H0C
                    # Get upper neighbor (if any) and check if rejects H0C
                    if(!is.null(rprime$upper_neighbor)){
                      rprime_upper_neighbor <- list.of.rectangles.mtp[[d]][[rprime$upper_neighbor]]
                      upper_neighbor_start_position <- variable_location(r_reference,d,rprime_upper_neighbor,rprime_upper_neighbor$allowed_actions[1]);
                      upper_neighbor_end_position <- variable_location(r_reference,d,rprime_upper_neighbor,rprime_upper_neighbor$allowed_actions[length(rprime_upper_neighbor$allowed_actions)]);
                      upper_neighbor_action_indicator <- z_rounded[upper_neighbor_start_position:upper_neighbor_end_position];
                    }
                    # Get right neighbor (if any) and check if rejects H0C
                    if(!is.null(rprime$right_neighbor)){
                      rprime_right_neighbor <- list.of.rectangles.mtp[[d]][[rprime$right_neighbor]]
                      right_neighbor_start_position <- variable_location(r_reference,d,rprime_right_neighbor,rprime_right_neighbor$allowed_actions[1]);
                      right_neighbor_end_position <- variable_location(r_reference,d,rprime_right_neighbor,rprime_right_neighbor$allowed_actions[length(rprime_right_neighbor$allowed_actions)]);
                      right_neighbor_action_indicator <- z_rounded[right_neighbor_start_position:right_neighbor_end_position];
                    }

                    if((H01_reject && H02_reject) || sum(action_indicator[c(4,5,6,7)])>1-1e-10 || (sum(action_indicator[c(4,5,6,7)])>rounding.threshold.H0C && (is.null(rprime$upper_neighbor) || sum(upper_neighbor_action_indicator[c(4,5,6,7)])>1-1e-10) && (is.null(rprime$right_neighbor) || sum(right_neighbor_action_indicator[c(4,5,6,7)])>1-1e-10))){# round H0C to 1 and permanently fix rounding
                      H0C_reject <- 1
                      deterministic_action <- ifelse((!H01_reject) && (!H02_reject) && (!H0C_reject),1,
                                                     ifelse((H01_reject) && (!H02_reject) && (!H0C_reject),2,
                                                            ifelse((!H01_reject) && (H02_reject) && (!H0C_reject),3,
                                                                   ifelse((!H01_reject) && (!H02_reject) && (H0C_reject),4,
                                                                          ifelse((H01_reject) && (!H02_reject) && (H0C_reject),5,
                                                                                 ifelse((!H01_reject) && (H02_reject) && (H0C_reject),6,
                                                                                        ifelse((H01_reject) && (H02_reject) && (H0C_reject),7,8)))))));
                      #print(d); print(rprime); if(!is.null(rprime$right_neighbor)){print(rprime$right_neighbor)};if(!is.null(rprime$upper_neighbor)){print(rprime$upper_neighbor)}; print(action_indicator);  print(deterministic_action);
                      z_integral_components[variable_start_position:variable_end_position] <- z_rounded[variable_start_position:variable_end_position] <- rep(0,variable_end_position-variable_start_position+1);
                      z_integral_components[variable_start_position+deterministic_action - 1] <- z_rounded[variable_start_position+deterministic_action - 1] <- 1;
                      if(!any(power_constraint_matrix_H0C %*% z_integral_components < power.constraints.matrix[,3]-power.constraint.tolerance)){break;}
                      #print(power_constraint_matrix_H0C[4,] %*% z_integral_components)
                      #browser()
                    }
                  }
              }}}}
        #rounding.threshold.H0C <- 0.4
      }
    }
    # Only used for decisions d of d_type[d]==3 or ==4, and encodes maximum possible z_statistic value in x-coordinate for d_type[d]==3 and in y-coordinate for d_type[d]==4. (This can be computed for the case where the decision regions are set in advance, since d_type[d]==3 involves no update to x-coordinate z-statistic, and analogously for y-coordinate under d_type[d]==4.)
    max_d_type_value <- rep(-Inf,length(decisions))

    for(r in list.of.rectangles.dec[1:(length(list.of.rectangles.dec)-number_reference_rectangles)]){   if(r$preset_decision>0){
      probability_vector <- r$preset_decision_value;
    } else {
      for(d in decisions){
        rprime <- list.of.rectangles.mtp[[d]][[1]]
        variable_start_position <- variable_location(r,d,rprime,1);
        variable_end_position <- variable_location(r,d,rprime,length(actions));
        probability_vector[d] <- sum(z_rounded[variable_start_position:variable_end_position]);
      }
    }
      for(d in decisions){
        if(d_type[d]==3 && probability_vector[d]>1e-10){
          max_d_type_value[d] <- max(max_d_type_value[d],r$upper_boundaries[2])} else
            if(d_type[d]==4 && probability_vector[d]>1e-10){
              max_d_type_value[d] <- max(max_d_type_value[d],r$upper_boundaries[1])}
      }
    }
    #print(max_d_type_value);

    for(d_plot in decisions){
      postscript(paste("rejection_regions_",d_plot,".eps",sep=""),height=8,horizontal=FALSE,onefile=FALSE,width=8)
      par(mar=c(7.5,6.5,5.6,2.1))
      plot(0,type="n",xaxt="n",yaxt="n",xlim=c(-3,3),ylim=c(-3,3),main=paste("Rejection Regions at End of Stage 2 \n Under 1st Stage Decision ",d_plot,sep=""),xlab=expression(paste(Z[1]^F)),ylab=expression(paste(Z[2]^F)),cex.lab=2, cex.axis=2, cex.main=2, cex.sub=2)
      axis(1,at=seq(-3,3,by=1),labels=-3:3,cex.axis=2)
      axis(2,at=seq(-3,3,by=1),labels=-3:3,cex.axis=2)

      r_reference_index <- length(list.of.rectangles.dec) - number_reference_rectangles + d_plot
      r_reference <- list.of.rectangles.dec[[r_reference_index]]
      for(d in r_reference$allowed_decisions){
        for(rprime in list.of.rectangles.mtp[[d]]){
          variable_start_position <- variable_location(r_reference,d,rprime,rprime$allowed_actions[1])
          variable_end_position <- variable_location(r_reference,d,rprime,rprime$allowed_actions[length(rprime$allowed_actions)])
          action_indicator <-z_rounded[variable_start_position:variable_end_position]

          if(sum(action_indicator>1-1e-10 | action_indicator<10e-10)==7){
            deterministic_action <- col_value <- which(action_indicator==max(action_indicator))} else{
              if(sum(action_indicator[c(2,5,7)])>rounding.threshold.H01){H01_reject <- 1}else{H01_reject <- 0}
              if(sum(action_indicator[c(3,6,7)])>rounding.threshold.H02){H02_reject <- 1}else{H02_reject <- 0}
              if((H01_reject && H02_reject) || sum(action_indicator[c(4,5,6,7)])>rounding.threshold.H0C){H0C_reject <- 1}else{H0C_reject <- 0};
              #if(H0C_reject && !(sum(z_integral_components[variable_start_position+3,variable_start_position+6]>rounding.threshold.H0C))){browser()}
              deterministic_action <- col_value <- ifelse((!H01_reject) && (!H02_reject) && (!H0C_reject),1,
                                                          ifelse((H01_reject) && (!H02_reject) && (!H0C_reject),2,
                                                                 ifelse((!H01_reject) && (H02_reject) && (!H0C_reject),3,
                                                                        ifelse((!H01_reject) && (!H02_reject) && (H0C_reject),4,
                                                                               ifelse((H01_reject) && (!H02_reject) && (H0C_reject),5,
                                                                                      ifelse((!H01_reject) && (H02_reject) && (H0C_reject),6,
                                                                                             ifelse((H01_reject) && (H02_reject) && (H0C_reject),7,8)))))))
            }
          z_integral_components[variable_start_position:variable_end_position] <- z_rounded[variable_start_position:variable_end_position] <- rep(0,variable_end_position-variable_start_position+1);
          z_integral_components[variable_start_position+deterministic_action - 1] <- z_rounded[variable_start_position+deterministic_action - 1] <- 1;

          if(d_type[d_plot]==2){ #if decision is to stop after stage 1
            for(r in list.of.rectangles.dec[1:(length(list.of.rectangles.dec)-number_reference_rectangles)]){      ## compute overlap in rectangles between r and rprime:
              intersection_rectangle_lower_boundaries <- pmax(r$lower_boundaries,rprime$lower_boundaries)
              intersection_rectangle_upper_boundaries <- pmin(r$upper_boundaries,rprime$upper_boundaries)
              if(sum(intersection_rectangle_lower_boundaries < intersection_rectangle_upper_boundaries)==2){ #if they intersect, check for positive probability of d=d_plot at r:
                if(r$preset_decision>0){
                  probability_of_d_plot <- r$preset_decision_value[d_plot];
                } else {
                  variable_start_position <- variable_location(r,d_plot,rprime,1);
                  variable_end_position <- variable_location(r,d_plot,rprime,length(actions));
                  probability_of_d_plot <- sum(sln$z[variable_start_position:variable_end_position]);
                }
                if(probability_of_d_plot > 1e-10){ # if positive probability, then plot rectangle
                  rect(max(rprime$lower_boundaries[1]-tau,-10),max(rprime$lower_boundaries[2]-tau,-10),min(rprime$upper_boundaries[1]+tau,10),min(rprime$upper_boundaries[2]+tau,10),col=col_value-1,border=NA)}
              }}} else if(d_type[d_plot]==3 && rprime$lower_boundaries[2]< max_d_type_value[d_plot]){
                rect(max(rprime$lower_boundaries[1]-tau,-10),max(rprime$lower_boundaries[2]-tau,-10),min(rprime$upper_boundaries[1]+tau,10),min(rprime$upper_boundaries[2]+tau,10),col=col_value-1,border=NA)

              } else if(d_type[d_plot]==4 && rprime$lower_boundaries[1]<max_d_type_value[d_plot]){
                rect(max(rprime$lower_boundaries[1]-tau,-10),max(rprime$lower_boundaries[2]-tau,-10),min(rprime$upper_boundaries[1]+tau,10),min(rprime$upper_boundaries[2]+tau,10),col=col_value-1,border=NA)
              } else if(d_type[d_plot]==1){
                rect(max(rprime$lower_boundaries[1]-tau,-10),max(rprime$lower_boundaries[2]-tau,-10),min(rprime$upper_boundaries[1]+tau,10),min(rprime$upper_boundaries[2]+tau,10),col=col_value-1,border=NA)
              }
        }
      }
      legend("bottomleft",legend=c("none", "H01", "H02", "H0C", "H01 and H0C", "H02 and H0C", "all"),title = "Reject Null Hypotheses:",pch=c(15,15,15,15,15,15,15),col=actions-1,cex=1,bg="white")
      dev.off()
    }

    save(z_rounded,file="z_rounded.rdata")

    max_FWER <- 0
    for(task_id in 1:(ceiling(length(ncp.list)/constraints_per_A1_file))){
      load(file=paste("A1",task_id,".rdata",sep=""))
      #print(ncp.list[[which((constraint_list %*% sln$z) ==max(constraint_list %*% sln$z))]])
      fwer_candidates <- constraint_list %*% z_rounded
      if(max(fwer_candidates)>max_FWER){
        max_FWER <- max(max_FWER,max(constraint_list %*% z_rounded))
        #print(constraint_list %*% z_rounded)
        #print(ncp.list[(1+(counter-1)*6):(counter*6)])
      }
    }
    print("Maximum familywise Type I error rate among Type I error constraints")
    print(round(max_FWER,3));

    load("A3.rdata")
    print("User defined power constraints (desired power); each row corresponds to a data generating distribution; each column corresponds to H01, H02, H0C desired power, respectively.")
    power.requirement.matrix <- cbind(data.generating.distributions,power.constraints.matrix)
    rownames(power.requirement.matrix) <- paste("Scenario",1:dim(data.generating.distributions)[1])
    print(round(power.requirement.matrix,3))
    print("Probability of rejecting each null hypothesis (last 3 columns) under each data generating distribution (row)")
    rejection.probabilities <- cbind(power_constraint_matrix_H01 %*% z_rounded,power_constraint_matrix_H02 %*% z_rounded,power_constraint_matrix_H0C %*% z_rounded)
    colnames(rejection.probabilities) <- c("H01","H02","H0C")
    rejection_probability_matrix <- cbind(data.generating.distributions,rejection.probabilities);
    rownames(rejection_probability_matrix) <- paste("Scenario",1:dim(data.generating.distributions)[1])
    print(round(rejection_probability_matrix,3))
  }

  # Clean up files used to specify LP
  if(cleanup.temporary.files){
    system('rm A*.rdata')
    system('rm c.rdata')
    system('rm number_variables.txt')
    system('rm Inequality_Constraints_to_Restrict_MTP_to_Sufficient_Statistics.rdata')
    system('rm Inequality_Constraints_to_set_monotonicity_in_hypotheses_rejected.rdata')
    system('rm number_equality_constraints_of_first_type.txt')
    system('rm number_equality_constraints_of_second_type.txt')
    system('rm number_A1_constraints.txt')
    system('rm number_A1_files.txt')
    system('rm power_constraints.rdata')
    system('rm sln*.rdata')
    if(type.of.LP.solver=="matlab" || type.of.LP.solver=="cplex"){
      system('rm A*.mat')
      system('rm a*.mat')
      system('rm cc.mat')
      system('rm list.of.rectangles.mtp*.rdata')
      system('rm iteration.mat')
      system('rm output_LP_solver')
      system('rm sln2M*.mat')
      system('rm max_error_prob*')
    }
  }

}
mrosenblum/AdaptiveDesignOptimizerSparseLP documentation built on Feb. 13, 2020, 12:59 p.m.