Fitting Example Using dfddm

req_suggested_packages <- c("reshape2", "ggplot2")
pcheck <- lapply(req_suggested_packages, requireNamespace, 
                 quietly = TRUE)
if (any(!unlist(pcheck))) {
   message("Required package(s) for this vignette are not available/installed and code will not be executed.")
   knitr::opts_chunk$set(eval = FALSE)
}
knitr::opts_chunk$set(
  collapse = TRUE,
  error = TRUE,
  comment = "#>"
)
op <- options(width = 100, digits = 4)

Function dfddm evaluates the density function (or probability density function, PDF) for the Ratcliff diffusion decision model (DDM) using different methods for approximating the full PDF, which contains an infinite sum. An overview of the mathematical details of the different approximations is provided in the Math Vignette. An empirical validation of the implemented methods is provided in the Validity Vignette. Timing benchmarks for the present methods and comparison with existing methods are provided in the Benchmark Vignette.

Our implementation of the DDM has the following parameters: $a \in (0, \infty)$ (threshold separation), $v \in (-\infty, \infty)$ (drift rate), $t_0 \in [0, \infty)$ (non-decision time/response time constant), $w \in (0, 1)$ (relative starting point), $sv \in (0, \infty)$ (inter-trial-variability of drift), and $\sigma \in (0, \infty)$ (diffusion coefficient of the underlying Wiener Process).
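
For a first look at the interface, the density of a single response can be evaluated directly; the parameter values here are arbitrary and purely illustrative:

library("fddm")
# density of an "upper" response at rt = 1.5 under arbitrary example parameters
dfddm(rt = 1.5, response = "upper", a = 1, v = 1, t0 = 0.3, w = 0.5, sv = 0.5)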

Introduction {#intro}


This vignette contains two examples of how to use fddm, in particular the dfddm function, to fit the DDM to real-world data. We will load a dataset that is included in the fddm package and fit the Ratcliff DDM to the response time data it contains. We first show a simple fitting procedure that estimates the DDM parameters for a single individual in the study, and then a more involved procedure that estimates the DDM parameters for every individual in the study. After running this more involved optimization, we provide a rudimentary analysis of the fitted parameter estimates, grouping them by the expertise of the study's participants.

Example Fitting {#ex}


In this example, we will fit the DDM to the med_dec data that comes with fddm. This dataset contains the accuracy condition reported in @trueblood_impact_2018, which investigates medical decision making among medical professionals (pathologists) and novices (i.e., undergraduate students). The task of participants was to judge whether pictures of blood cells show cancerous cells (i.e., blast cells) or non-cancerous cells (i.e., non-blast cells). The dataset contains 200 decisions per participant, based on pictures of 100 true cancerous cells and pictures of 100 true non-cancerous cells. We load the fddm package, read the data, and remove any invalid responses from the data.

library("fddm")
data(med_dec, package = "fddm")
med_dec <- med_dec[which(med_dec[["rt"]] >= 0), ]
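
A glance at the first few rows shows the structure of the data:

head(med_dec)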

Log-likelihood Function {#ex-ll-fun}

Our approach will be a straightforward maximum likelihood estimation (MLE). Since we will be using the optimization function nlminb, we must write an objective function for it to optimize. By default nlminb finds the minimum of the objective function rather than the maximum, so we will simply negate our likelihood function. In addition, we follow the common practice of optimizing the log-likelihood, which tends to be more numerically stable while sharing its optima (negated, in our case) with the likelihood function itself.

We are going to fit the parameters $v$, $a$, $t_0$, $w$, and $sv$; however, we want to fit two distinct drift rates, one for the upper boundary ($v_u$) and one for the lower boundary ($v_\ell$). In order to make this distinction, we require as input the truthful classification of each decision (i.e., the correct response for each entry). Note that our log-likelihood function requires the vectors of response times, responses, and truthful classifications to be of equal length.

As we are using the optimization function nlminb, the first argument to our log-likelihood function needs to be a vector of the initial values of the six parameters that are being optimized: $v_u$, $v_\ell$, $a$, $t_0$, $w$, and $sv$. The rest of the arguments will be the other necessary inputs to dfddm that are not optimized: the vector of response times, the vector of responses, the vector of the truthful classifications, and the allowable error tolerance for the density function (optional). Details on all of these inputs can be found in the dfddm documentation.

Upon being called, the log-likelihood function first constructs a vector of drift rates whose elements are set to $v_u$ or $v_\ell$ according to the truthful classification of each decision. This vector is then passed, together with the response times and responses, to a single call of dfddm to obtain the log-density of every observation. The log-likelihood function heavily penalizes any combination of parameters for which at least one log-density is $-\infty$ (equivalent to a regular density of $0$). Lastly, the function returns the negative of the sum of all the log-densities, that is, the negative log-likelihood.

ll_fun <- function(pars, rt, resp, truth, err_tol = 1e-6) {
  v <- numeric(length(rt))

  # the truth is "upper" so use vu
  v[truth == "upper"] <- pars[[1]]
  # the truth is "lower" so use vl
  v[truth == "lower"] <- pars[[2]]

  dens <- dfddm(rt = rt, response = resp, a = pars[[3]], v = v, t0 = pars[[4]],
                w = pars[[5]], sv = pars[[6]], err_tol = err_tol, log = TRUE)

  # heavily penalize parameter sets that yield any log-density of -Inf
  return( if (any(!is.finite(dens))) 1e6 else -sum(dens) )
}

Simple Fitting Routine {#ex-simple}

As an intermediate step, we will fit the DDM to only one participant from the med_dec data. We select the individual whose data we will use for fitting before preparing the data by defining upper and lower responses and the correct response bounds.

onep <- med_dec[ med_dec[["id"]] == "2" & med_dec[["group"]] == "experienced", ]
onep[["resp"]] <- ifelse(onep[["response"]] == "blast", "upper", "lower")
onep[["truth"]] <- ifelse(onep[["classification"]] == "blast", "upper", "lower")
str(onep)
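
As an optional sanity check, we can evaluate the log-likelihood function at the starting values used below; a finite value confirms that dfddm can evaluate the densities for these data:

# negative log-likelihood at the starting values c(vu, vl, a, t0, w, sv)
ll_fun(c(0, 0, 1, 0, 0.5, 0), rt = onep[["rt"]], resp = onep[["resp"]],
       truth = onep[["truth"]])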

We then pass the data and log-likelihood function with the necessary additional arguments to an optimization function. As we are using the optimization function nlminb for this example, we must input as the first argument the initial values of the DDM parameters that we want optimized. These are input in the order: $v_u$, $v_\ell$, $a$, $t_0$, $w$, and $sv$; we also need to define upper and lower bounds for each parameter. Fitting the DDM to this dataset is essentially instantaneous using this setup.

fit <- nlminb(c(0, 0, 1, 0, 0.5, 0), objective = ll_fun,
              rt = onep[["rt"]], resp = onep[["resp"]], truth = onep[["truth"]],
              # limits:   vu,   vl,   a,  t0, w,  sv
              lower = c(-Inf, -Inf, .01,   0, 0,   0),
              upper = c( Inf,  Inf, Inf, Inf, 1, Inf))
fit
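
The parameter estimates are returned in fit$par in the order of the starting values; labeling them is a small convenience step that makes the output easier to read:

# label the estimates in the order vu, vl, a, t0, w, sv
setNames(fit[["par"]], c("vu", "vl", "a", "t0", "w", "sv"))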

Fitting the Entire Dataset {#ex-data}

Here we will run a more rigorous fitting on the entire med_dec dataset to obtain parameter estimates for each participant in the study. To do this, we define a function to run the data fitting for us; we want it to output a dataframe containing the parameter estimates for each individual in the data. The inputs will be the dataset, the allowable error tolerance for the density function, how the "upper" response is presented in the dataset, and indices of the columns in the dataset containing: identification of the individuals in the dataset, the response times, the responses, and the truthful classifications.

After some data checking, the fitting function will extract the unique individuals from the dataset and run the parameter optimization on the responses and response times of each individual. The optimizations are initialized with random parameter values to reduce the risk of the optimizer settling in a local minimum rather than the global minimum. Moreover, the optimization will run 5 times for each individual, each time with a different set of random initial parameter values. The minimized value of the log-likelihood function will be compared across all 5 runs, and the smallest such value indicates the best fit. The parameter estimates, convergence code, and minimized value of the log-likelihood function produced by this best fit are saved for that individual.

rt_fit <- function(data, id_idx = NULL, rt_idx = NULL, response_idx = NULL,
                   truth_idx = NULL, response_upper = NULL, err_tol = 1e-6) {

  # Format data for fitting
  if (all(is.null(id_idx), is.null(rt_idx), is.null(response_idx),
      is.null(truth_idx), is.null(response_upper))) {
    df <- data # assume input data is already formatted
  } else {
    if (any(data[,rt_idx] < 0)) {
      stop("Input data contains negative response times; fit will not be run.")
    }
    if (any(is.na(data[,response_idx]))) {
      stop("Input data contains invalid responses (NA); fit will not be run.")
    }

    nr <- nrow(data)
    df <- data.frame(id = character(nr),
                     rt = double(nr),
                     response = character(nr),
                     truth = character(nr),
                     stringsAsFactors = FALSE)

    if (!is.null(id_idx)) { # relabel identification tags
      for (i in 1:length(id_idx)) {
        idi <- unique(data[,id_idx[i]])
        for (j in 1:length(idi)) {
          df[["id"]][data[,id_idx[i]] == idi[j]] <- paste(
            df[["id"]][data[,id_idx[i]] == idi[j]], idi[j], sep = " ")
        }
      }
      df[["id"]] <- trimws(df[["id"]], which = "left")
    }

    df[["rt"]] <- as.double(data[,rt_idx])

    df[["response"]] <- "lower"
    df[["response"]][data[,response_idx] == response_upper] <- "upper"

    df[["truth"]] <- "lower"
    df[["truth"]][data[,truth_idx] == response_upper] <- "upper"
  }

  # Preliminaries
  ids <- unique(df[["id"]])
  nids <- max(length(ids), 1) # if id_idx is null, all rows are treated as one individual
  ninit_vals <- 5

  # Initialize the output dataframe
  cnames <- c("ID", "Convergence", "Objective",
              "vu_fit", "vl_fit", "a_fit", "t0_fit", "w_fit", "sv_fit")
  out <- data.frame(matrix(ncol = length(cnames), nrow = nids))
  colnames(out) <- cnames
  temp <- data.frame(matrix(ncol = length(cnames)-1, nrow = ninit_vals))
  colnames(temp) <- cnames[-1]

  # Loop through each individual and starting values
  for (i in 1:nids) {
    out[["ID"]][i] <- ids[i]

    # extract data for id i
    dfi <- df[df[["id"]] == ids[i],]
    rti <- dfi[["rt"]]
    respi <- dfi[["response"]]
    truthi <- dfi[["truth"]]

    # starting value for t0 must be smaller than the smallest rt
    min_rti <- min(rti)

    # create initial values for this individual
    init_vals <- data.frame(vu = rnorm(n = ninit_vals, mean = 4, sd = 2),
                            vl = rnorm(n = ninit_vals, mean = -4, sd = 2),
                            a  = runif(n = ninit_vals, min = 0.5, max = 5),
                            t0 = runif(n = ninit_vals, min = 0, max = min_rti),
                            w  = runif(n = ninit_vals, min = 0, max = 1),
                            sv = runif(n = ninit_vals, min = 0, max = 5))

    # loop through all of the starting values
    for (j in 1:ninit_vals) {
      mres <- nlminb(init_vals[j, ], ll_fun,
                     rt = rti, resp = respi, truth = truthi,
                     err_tol = err_tol,
                     # limits:   vu,   vl,   a,  t0, w,  sv
                     lower = c(-Inf, -Inf, .01,   0, 0,   0),
                     upper = c( Inf,  Inf, Inf, Inf, 1, Inf))
      temp[["Convergence"]][j] <- mres[["convergence"]]
      temp[["Objective"]][j] <- mres[["objective"]]
      temp[j, -c(1, 2)] <- mres[["par"]]
    }

    # determine best fit for the individual
    min_idx <- which.min(temp[["Objective"]])
    out[i, -1] <- temp[min_idx,]
  }
  return(out)
}

We load the dataset, remove any invalid rows from the dataset, and run the fitting; the dataframe of the fitting results is output below.

data(med_dec, package = "fddm")
med_dec <- med_dec[which(med_dec[["rt"]] >= 0),]
fit <- rt_fit(med_dec, id_idx = c(2,1), rt_idx = 8, response_idx = 7,
              truth_idx = 5, response_upper = "blast")
fit
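
Since nlminb reports a convergence code of 0 on success, a quick tabulation serves as an optional check that the optimizer converged for every participant:

# nlminb convergence code: 0 indicates successful convergence
table(fit[["Convergence"]])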

Rudimentary Analysis {#ex-data-ana}

To show some basic results of our fitting, we will plot the fitted values of $v_u$ and $v_\ell$ grouped by the experience level of the participant to demonstrate how these parameters differ among novices, inexperienced professionals, and experienced professionals.

library("reshape2")
library("ggplot2")

fitp <- data.frame(fit[, c(1, 4, 5)]) # make a copy to manipulate for plotting
colnames(fitp)[-1] <- c("vu", "vl")

for (i in seq_len(nrow(fitp))) { # one row per participant
  first <- substr(fitp[["ID"]][i], 1, 1)
  if (first == "n") {
    fitp[["ID"]][i] <- "novice"
  } else if (first == "i") {
    fitp[["ID"]][i] <- "inexperienced"
  } else {
    fitp[["ID"]][i] <- "experienced"
  }
}

fitp <- melt(fitp, id.vars = "ID", measure.vars = c("vu", "vl"),
             variable.name = "vuvl", value.name = "estimate")

ggplot(fitp, aes(x = factor(ID, levels = c("novice", "inexperienced", "experienced")),
                 y = estimate,
                 color = factor(vuvl, levels = c("vu", "vl")))) +
  geom_point(alpha = 0.4, size = 4) +
  labs(title = "Parameter Estimates for vu and vl",
       x = "Experience Level", y = "Parameter Estimate",
       color = "Drift Rate") +
  theme_bw() +
  theme(panel.border = element_blank(),
        plot.title = element_text(size = 23),
        plot.subtitle = element_text(size = 16),
        axis.text.x = element_text(size = 16),
        axis.text.y = element_text(size = 16),
        axis.title.x = element_text(size = 20,
                                    margin = margin(10, 5, 5, 5, "pt")),
        axis.title.y = element_text(size = 20),
        legend.title = element_text(size = 20),
        legend.text = element_text(size = 16))

Before we begin analysis of this plot, note that the drift rate corresponding to the upper threshold should always be positive, and the drift rate corresponding to the lower threshold should always be negative. A few of the novices' fitted values violate this convention, suggesting that those participants consistently responded incorrectly to the stimuli. In contrast, both the inexperienced and experienced participants show a clean division of drift rates around zero.
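
This observation can be checked directly on the plotting data; the following optional snippet counts, by experience level, the fitted drift rates that violate the expected sign convention:

# count sign-convention violations: vu should be positive, vl negative
viol <- (fitp[["vuvl"]] == "vu" & fitp[["estimate"]] < 0) |
        (fitp[["vuvl"]] == "vl" & fitp[["estimate"]] > 0)
table(fitp[["ID"]][viol])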

In addition, we notice that the more experienced participants tend to have fitted drift rates that are larger in absolute value. A more extreme drift rate indicates that the participant extracts and processes information from the stimulus more efficiently than a milder drift rate would. The overall pattern is that the novices are on average the worst at receiving information, the experienced professionals are the best, and the inexperienced professionals fall somewhere in the middle. This pattern indicates that experienced professionals are indeed better at this task than untrained undergraduate students!
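
The same pattern can be summarized numerically, for instance by averaging the absolute fitted drift rates within each experience level; this is a rough summary that ignores participant-level uncertainty:

# mean absolute drift rate by experience level and boundary
aggregate(abs(estimate) ~ ID + vuvl, data = fitp, FUN = mean)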


R Session Info {.unlisted .unnumbered}

sessionInfo()
options(op)  # reset options

References


