All analyses

Things we can do without access to IPD

Adjusting p-value for decision at hand

# Marginal logistic models for CLD and mortality, cached to avoid refitting
glms <- cache_rds({
  lst(
    cld = glm(cld ~ trt, family = binomial, data = dat),
    mort = glm(mort ~ trt, family = binomial, data = dat)
  )
}, dir = cache, file = "marginal_glms")

map(glms, summary)
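If we want the marginal treatment effects on the odds-ratio scale rather than raw model summaries, one convenient option is broom (an assumption here; the analysis above does not require it):

# Odds ratios with 95% CIs for each marginal model
purrr::map_dfr(glms, broom::tidy, exponentiate = TRUE, conf.int = TRUE, .id = "outcome")
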
conf_vector <- seq(from = 0.95, to = 0.7, length.out = 10)

power <- FALSE
i <- 0

while (!power) {

  i <- i + 1

  pwr <- epiR::epi.sscc(OR = 0.7, # Corresponds to a risk difference of ~8% at a 36% baseline risk
                        p0 = 0.36,
                        n = 1350,
                        conf.level = conf_vector[[i]],
                        power = NA)

  power <- pwr$power >= 0.9
}

# Alpha at the first confidence level that achieves at least 90% power
new_alpha <- 1 - conf_vector[[i]]

Before we get into more direct decision-theory approaches, one simple way we can make decisions a little more local is to re-evaluate our values of alpha and beta. Let's use CLD as a rough proxy for risk of disability and say that locally we are willing to risk more false positives in order to reduce the chance of missing a true effect. Instead of the default assumption that a false positive is four times worse than a false negative, let's say that in this case the potential harm from the intervention is such that we're willing to accept a two-to-one trade-off. If we fix the sample size and assume we wouldn't want to miss a risk difference of 8% (OR of 0.7), then this equates to a p-value of r round(new_alpha, 2).
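As a quick check on the correspondence between an OR of 0.7 and an 8% risk difference at a 36% baseline risk:

p_ctrl   <- 0.36                          # baseline (control) risk
odds_trt <- p_ctrl / (1 - p_ctrl) * 0.7   # treated odds under OR = 0.7
p_trt    <- odds_trt / (1 + odds_trt)     # treated risk, ~0.28
p_ctrl - p_trt                            # risk difference, ~0.08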

This equates to a decision rule of the form: if I can be confident in the direction of the effect, where alpha and beta are chosen to weigh the risks of false positives against false negatives, then I act "as if" there is (or is not) a difference.

MCDA from marginals

As a first step we need marginal probabilities, with uncertainty, for each outcome we'd like to include in the MCDA. An easy way to get these is to fit a GLM for each outcome and then sample predicted probabilities from the variance-covariance matrix of its coefficients.
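mcda_prep() handles this step inside the package. For readers without access to it, a minimal sketch of the sampling idea, assuming 0/1 outcome columns and a 0/1 treatment indicator trt (the function name and defaults below are illustrative, not the package API), might look like this:

# Illustrative sketch: draw predicted probabilities for one binary outcome by
# sampling coefficients from the GLM's variance-covariance matrix
sample_marginal_probs <- function(outcome, data, n_draws = 1000) {
  fit <- glm(reformulate("trt", response = outcome), family = binomial, data = data)
  # Approximate the coefficient sampling distribution with a multivariate normal
  draws <- MASS::mvrnorm(n_draws, mu = coef(fit), Sigma = vcov(fit))
  # Linear predictors for control (trt = 0) and treatment (trt = 1), inverted through the logit
  X <- cbind(1, c(0, 1))
  probs <- stats::plogis(draws %*% t(X))
  colnames(probs) <- c("control", "treatment")
  probs
}

# e.g. head(sample_marginal_probs("cld", pres_dat))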

# Outcomes to include in the marginal MCDA
outs <- purrr::set_names(c("mort", "sev_ivh", "sepsis", "cld", "nec"))

# Marginal probabilities (with uncertainty) for each outcome
mcda_marg_dat <- mcda_prep(outs, data = pres_dat)

# Run the MCDA across the five outcomes and cache the result
mcda_marg <- cache_rds({
  lst(dat = mcda_marg_dat,
      mcda = do_mcda(5, mcda_marg_dat))
}, dir = cache, file = "marg_mcda")

Approaches that we need IPD for

Ordinal model

An alternative approach might be to arrange the outcomes from best to worst and then run the analysis with an ordinal model, interpreting the treatment effect as a kind of improvement on a quasi-utility scale.

# Outcome ordering used to construct the ordinal endpoint
ord <- c("sepsis", "nec", "cld", "sev_ivh", "mort")



# Construct the ordinal outcome on the original and synthetic datasets
dat_ord_orig <- make_ord(data = pres_dat, name = "ord1", order = ord)
dat_ord_synth <- make_ord(data = pres_dat_syn, name = "ord1", order = ord)
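make_ord() is a package helper; a rough sketch of the underlying idea, assuming each outcome is a 0/1 column and the ordering runs from best to worst (the package's implementation may differ), is:

# Illustrative sketch: collapse binary outcomes into a single ordered factor
# taking each patient's worst observed outcome (0 = none)
make_ord_sketch <- function(data, name, order) {
  worst <- apply(data[order], 1, function(x) {
    hits <- which(x == 1)
    if (length(hits) == 0) 0L else max(hits)
  })
  data[[name]] <- factor(worst, levels = 0:length(order),
                         labels = c("none", order), ordered = TRUE)
  data
}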


# Proportional-odds models of the ordinal outcome on treatment, cached
ord_models <- cache_rds({
  lst(
    orig = rms::orm(ord1 ~ trt, data = dat_ord_orig),
    synth = rms::orm(ord1 ~ trt, data = dat_ord_synth)
  )
}, dir = cache, file = "ord_models")


ord_models
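To read the treatment effect as an odds ratio, one quick conversion is sketched below (this assumes trt enters as a numeric 0/1 indicator so the coefficient is named "trt"; with a factor the name will differ):

# Treatment log-odds ratio from the proportional-odds model, expressed as an OR
# with an approximate 95% CI
b  <- coef(ord_models$orig)["trt"]
se <- sqrt(vcov(ord_models$orig)["trt", "trt"])
exp(c(OR = unname(b), lower = unname(b) - 1.96 * se, upper = unname(b) + 1.96 * se))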

MCDA with conditional

A drawback of the ordinal model is that it doesn't let us specify how much worse one outcome is than another. By leveraging our synthetic dataset we can extend the marginal MCDA so that combinations of outcomes count as worse than any single outcome alone. This could be extended further with non-linear partial utilities or other considerations.
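To make "combinations being worse" concrete, one way to picture it is a utility-loss score in which losses from separate outcomes accumulate. The weights and helper below are purely illustrative and are not the values used by do_mcda():

# Illustrative utility losses per outcome (0 = no loss, 1 = as bad as death)
weights <- c(mort = 1.00, sepsis = 0.30, nec = 0.40, cld_ivh = 0.60)

utility_loss <- function(events, w = weights) {
  # events: named 0/1 vector for one patient; losses add but are capped at 1
  min(1, sum(w[names(events)] * events))
}

utility_loss(c(sepsis = 1, cld_ivh = 1))  # 0.9: worse than either outcome alone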

# Outcomes for the conditional MCDA (cld_ivh appears to combine CLD and severe IVH)
outs_prep <- c("mort", "sepsis", "nec", "cld_ivh")

mcda_comb_list <- cache_rds({

  # Same seed for the original and synthetic runs so their draws are comparable
  set.seed(124)
  mcda_comb_orig_dat <- mcda_prep(outs_prep, data = pres_dat)
  mcda_comb_orig <- do_mcda(4, mcda_comb_orig_dat)

  set.seed(124)
  mcda_comb_syn_dat <- mcda_prep(outs_prep, data = pres_dat_syn)
  mcda_comb_syn <- do_mcda(4, mcda_comb_syn_dat)

  # comb_* elements hold the synthetic-data results
  lst(orig_dat = mcda_comb_orig_dat,
      orig_mcda = mcda_comb_orig,
      comb_dat = mcda_comb_syn_dat,
      comb_mcda = mcda_comb_syn)
}, dir = cache, file = "comb_mcdas")

