
Effect size graph

Another way to think about the results of a single-case study is in terms of an estimate of effect size. Effect sizes are quantitative measures of the magnitude of treatment effects, reflecting the degree of change in an outcome for a given case as a result of intervention. A wide variety of effect size metrics have been proposed for use with single-case designs, as described in detail below. For any particular study, an effect size can only be estimated (as opposed to observed with certainty) because the outcome measurements are influenced by chance fluctuations. However, since the simulator can produce hypothetical data from many identical repetitions of a study, it is possible to consider the sampling distribution of the effect size estimates. The sampling distribution summarizes the range of possible effect size estimates that could be observed in the study, given the specified behavioral parameters, study design, and measurement procedures.

The Effect sizes tab in the lower pane of the simulator displays the sampling distribution of an effect size, given the assumptions specified in the input boxes of the upper pane. Initially, no graph is displayed. Begin by examining and modifying the options in the left-hand panel, which are described further below, then hit the Simulate! button to produce a graph of the estimated sampling distribution of the effect size estimate. This graph is a density plot, or smoothed histogram, which is a common way of representing a sampling distribution. A separate density plot is displayed for each case in the study. For a given case, the horizontal axis of the graph corresponds to the range of possible values of the effect size estimate, and the vertical height of the density corresponds to the relative frequency with which a given value of the estimate is obtained. For example, if the height of the density at an effect size of 80 is twice the height at an effect size of 45, then you are twice as likely to observe an effect size estimate around 80 as around 45. Relatedly, the area under the density curve over a given range is proportional to the probability of obtaining an effect size estimate in that range.
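To make the idea of a simulated sampling distribution concrete, here is a minimal sketch, in Python rather than the simulator's own R code, of the general recipe: simulate many hypothetical studies under fixed assumptions, compute an effect size estimate from each, and treat the resulting collection of estimates as an approximation to the sampling distribution. The normal outcome model and all parameter values below are invented for illustration and are not ARPsimulator's behavioral model.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def one_study_effect_size():
    """Simulate one hypothetical AB study and return a raw effect size
    (difference in phase means). The normal outcome model and parameter
    values are arbitrary illustrations, not the simulator's model."""
    baseline = [random.gauss(40, 8) for _ in range(5)]   # 5 baseline sessions
    treatment = [random.gauss(70, 8) for _ in range(5)]  # 5 treatment sessions
    return statistics.mean(treatment) - statistics.mean(baseline)

# Many replications of the same study -> approximate sampling distribution
estimates = [one_study_effect_size() for _ in range(1000)]
print(statistics.mean(estimates))   # centers near the true change of 30
print(statistics.stdev(estimates))  # spread reflects chance fluctuation
```

A density plot of `estimates` would look like the smoothed histogram that the Effect sizes tab displays for a single case.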

Options

The Effect sizes tab has four further options:

  1. The effect size measure that is calculated for each case. Currently, seven different effect size options are available. See below for more details.
  2. To calculate the non-overlap measures of effect size, one must first specify the direction of improvement, meaning whether an increase or a decrease in the outcome is desirable. For example, a treatment targeting problem behavior would normally be intended to decrease the outcome, whereas a treatment targeting social initiations would be intended to increase it.
  3. The number of samples per case controls how many hypothetical studies will be simulated in order to estimate the sampling distribution of the effect size for each case. The default is 100. Increasing the number of samples per case will produce a more accurate estimate of the sampling distribution, at a cost of increased computation time.
  4. The show average check-box controls whether dashed lines are displayed to indicate the average effect size estimate for each case.
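The trade-off described in option 3 can be illustrated with a quick Monte Carlo check (a Python sketch under the same invented outcome model as above, not the simulator's code): the run-to-run variability of any summary of the estimated sampling distribution, such as the average effect size, shrinks roughly with the square root of the number of samples per case.

```python
import random
import statistics

random.seed(2)

def one_effect_size():
    """One simulated study's raw effect size (illustrative normal model)."""
    baseline = [random.gauss(40, 8) for _ in range(5)]
    treatment = [random.gauss(70, 8) for _ in range(5)]
    return statistics.mean(treatment) - statistics.mean(baseline)

def average_over(samples_per_case):
    """Average effect size estimate based on samples_per_case simulations."""
    return statistics.mean(one_effect_size() for _ in range(samples_per_case))

# How much does the Monte Carlo average wobble across repeated runs?
wobble_100 = statistics.stdev(average_over(100) for _ in range(100))
wobble_1000 = statistics.stdev(average_over(1000) for _ in range(100))
print(wobble_100, wobble_1000)  # second is roughly sqrt(10) times smaller
```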

Effect size measures

Seven different effect size measures are currently implemented, including many of the non-overlap measures as well as the within-case standardized mean difference statistic. These measures were selected because they are commonly used as effect sizes for single-case designs. Other effect size measures that account for time trends and auto-correlation are excluded because the basic model embedded in the ARPsimulator assumes behavior that is stable within phases (no time trends) and measurements that are independent across sessions. The seven included effect sizes are defined as follows. For the non-overlap measures, the definitions assume that an increase in the outcome represents an improvement; when a decrease represents improvement, the same definitions apply with the direction reversed.
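To give a flavor of the two families of measures, here are simple Python versions of one non-overlap measure, the non-overlap of all pairs (NAP), and a within-case standardized mean difference. These follow common conventions from the single-case literature and are not taken from the simulator's source; the simulator's exact definitions may differ in details (for example, the choice of denominator for the standardized mean difference).

```python
import statistics

def nap(baseline, treatment, increase_is_improvement=True):
    """Non-overlap of All Pairs: the proportion of all (baseline, treatment)
    session pairs in which the treatment observation is improved over the
    baseline observation, counting ties as half (common convention)."""
    if not increase_is_improvement:
        baseline = [-x for x in baseline]
        treatment = [-x for x in treatment]
    wins = sum(
        1.0 if t > b else 0.5 if t == b else 0.0
        for b in baseline for t in treatment
    )
    return wins / (len(baseline) * len(treatment))

def within_case_smd(baseline, treatment):
    """Within-case standardized mean difference: the change in phase means,
    scaled here by the baseline standard deviation (one common choice)."""
    change = statistics.mean(treatment) - statistics.mean(baseline)
    return change / statistics.stdev(baseline)

# Toy data where a decrease in the outcome is the desired improvement
baseline = [12, 10, 14, 11, 13]
treatment = [5, 7, 4, 6, 8]
print(nap(baseline, treatment, increase_is_improvement=False))  # 1.0: no overlap
print(round(within_case_smd(baseline, treatment), 2))           # -3.79
```

Note how the `increase_is_improvement` argument plays the same role as the direction-of-improvement option described above: the non-overlap calculation is meaningless until that direction is fixed.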




ARPobservation documentation built on Aug. 25, 2023, 5:19 p.m.