Run a Monte Carlo simulation given a data.frame of conditions and simulation functions

Description

This function runs a Monte Carlo simulation study given a set of predefined simulation functions, design conditions, and number of replications. Results can be saved as temporary files in case of interruptions and may be restored by re-running runSimulation, provided that the respective temp file can be found in the working directory. runSimulation supports parallel and cluster computing, global and local debugging, error handling (including fail-safe stopping when functions fail too often, even across nodes), and tracking of error and warning messages. For convenience, all functions available in the R workspace are exported across all computational nodes so that they are more easily accessible (however, other R objects are not, and therefore must be passed to the fixed_objects input to become available across nodes). For a didactic presentation of the package refer to Sigal and Chalmers (in press).

Usage

runSimulation(design, replications, generate, analyse, summarise,
  fixed_objects = NULL, packages = NULL, filename = "SimDesign-results",
  save = FALSE, save_results = FALSE, save_seeds = FALSE,
  load_seed = NULL, seed = NULL, parallel = FALSE,
  ncores = parallel::detectCores(), cl = NULL, MPI = FALSE,
  max_errors = 50, as.factor = TRUE, save_generate_data = FALSE,
  save_details = list(), edit = "none", progress = FALSE,
  verbose = TRUE)

## S3 method for class 'SimDesign'
print(x, drop.extras = FALSE, drop.design = FALSE,
  format.time = TRUE, ...)

## S3 method for class 'SimDesign'
head(x, ...)

## S3 method for class 'SimDesign'
tail(x, ...)

## S3 method for class 'SimDesign'
summary(object, ...)

Arguments

design

a data.frame object containing the Monte Carlo simulation conditions to be studied, where each row represents a unique condition

replications

number of replications to perform per condition (i.e., each row in design). Must be greater than 0

generate

user-defined data and parameter generating function. See Generate for details

analyse

user-defined computation function which acts on the data generated from Generate. See Analyse for details

summarise

optional (but recommended) user-defined summary function to be used after all the replications have completed within each design condition. Omitting this function will return a list of matrices (or a single matrix, if only one row in design is supplied) or more general objects (such as lists) containing the results returned from Analyse. Omitting this function is only recommended for didactic purposes because it leaves out a large amount of information (e.g., try-errors, warning messages, etc.) and generally is not as flexible internally. See the save_results option for a better alternative to storing the Generate-Analyse results

fixed_objects

(optional) an object (usually a list) containing additional user-defined objects that should remain fixed across conditions. This is useful when including long fixed vectors/matrices of population parameters, data that should be used across all conditions and replications (e.g., including a fixed design matrix for linear regression), or simply can be used to control constant global elements such as sample size

packages

a character vector of external packages to be used during the simulation (e.g., c('MASS', 'mvtnorm', 'simsem') ). Use this input when parallel = TRUE or MPI = TRUE to use non-standard functions from additional packages, otherwise the functions must be made available by using explicit library or require calls within the provided simulation functions. Alternatively, functions can be called explicitly without attaching the package with :: (e.g., mvtnorm::rmvnorm())

filename

(optional) the name of the .rds file in which to save the final simulation results when save = TRUE. When NULL, the final simulation object is not saved to the drive. As well, if a file with the same name already exists in the working directory at the time of saving then a new file name will be generated instead and a warning will be thrown; this helps avoid accidentally overwriting existing files. Default is 'SimDesign-results'

save

logical; save the simulation state and final results to the hard-drive? This is useful for simulations which require an extended amount of time. When TRUE, a temp file will be created in the working directory which allows the simulation state to be saved and recovered (in case of power outages, crashes, etc). To recover your simulation at the last known location, simply re-run the same code you used to initially define the simulation and the object will automatically be detected and read in. Upon completion, and if filename is not NULL, the final results will also be saved to the working directory. Default is FALSE

save_results

logical; save the results returned from Analyse to external .rds files located in the defined save_results_dirname directory/folder? Use this if you would like to keep track of the individual parameters returned from the analyses. Each saved object will contain a list of three elements containing the condition (row from design), results (as a list or matrix), and try-errors. When TRUE, temporary files will also be saved to the working directory (in the same way as when save = TRUE) to better track the state of the simulation. See SimResults for an example of how to read these .rds files back into R after the simulation is complete. Default is FALSE

save_seeds

logical; save the .Random.seed states prior to performing each replication into plain text files located in the defined save_seeds_dirname directory/folder? Use this if you would like to keep track of the simulation state within each replication and design condition. Primarily, this is useful for completely replicating any cell in the simulation if need be, especially when tracking down hard-to-find errors and bugs. As well, see the load_seed input to load a given .Random.seed to exactly replicate the generated data and analysis state (handy for debugging). When TRUE, temporary files will also be saved to the working directory (in the same way as when save = TRUE) to better track the state of the simulation. Default is FALSE

load_seed

a character object indicating which file to load from when the .Random.seed states have been saved (after a run with save_seeds = TRUE). E.g., load_seed = 'design-row-2/seed-1' will load the first seed in the second row of the design input. Note that it is important NOT to modify the design input object, otherwise the path may not point to the correct saved location. Default is NULL

seed

a vector of integers to be used for reproducibility. The length of the vector must be equal to the number of rows in design. This argument calls set.seed or clusterSetRNGStream for each condition, respectively, but will not be used when MPI = TRUE. Default is NULL, indicating that no seed is set for each condition
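For instance, a minimal sketch of supplying one seed per design row (the design object is hypothetical, and the runSimulation call is shown commented since it requires the user-defined Generate/Analyse/Summarise functions):

```r
# one integer seed per row of the design (here, a hypothetical 3-row design)
Design <- data.frame(N = c(10, 20, 30))
seeds <- 1:nrow(Design)       # length must equal nrow(Design)
# runSimulation(design=Design, replications=1000, seed=seeds,
#               generate=Generate, analyse=Analyse, summarise=Summarise)
```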

parallel

logical; use parallel processing from the parallel package over each unique condition?

ncores

number of cores to be used in parallel execution. Default uses all available

cl

cluster object defined by makeCluster used to run code in parallel. If NULL and parallel = TRUE, a local cluster object will be defined which selects the maximum number of cores available, and the cluster will be stopped when the simulation is complete. Note that supplying a cl object will automatically set the parallel argument to TRUE

MPI

logical; use the foreach package in a form usable by MPI to run the simulation in parallel on a cluster? Default is FALSE

max_errors

the simulation will terminate when more than this number of consecutive errors are thrown in any given condition. The purpose of this is to indicate that something problematic is likely going wrong in the generate-analyse phases and should be inspected. Default is 50

as.factor

logical; coerce the input design elements into factors when the simulation is complete? If the column inputs are numeric then these will be treated as ordered factors. Default is TRUE

save_generate_data

logical; save the data returned from Generate to external .rds files located in the defined save_generate_data_dirname directory/folder? It is generally recommended to leave this argument as FALSE because saving datasets will often consume a large amount of disk space, and by and large saving data is not required or recommended for simulations. A more space-friendly alternative is available via the save_seeds flag. When TRUE, temporary files will also be saved to the working directory (in the same way as when save = TRUE) to better track the state of the simulation. Default is FALSE

save_details

a list pertaining to information about how and where files should be saved when save, save_results, or save_generate_data are triggered.

safe

logical; should safe saving be performed? When TRUE, files will never be overwritten accidentally, and where appropriate the program will either stop or generate new files with unique names. Default is TRUE

compname

name of the computer running the simulation. Normally this doesn't need to be modified, but in the event that a node breaks down while running a simulation, the results from the temp files may be resumed on another computer by changing the name of that node to match the broken computer's. Default is the result of evaluating unname(Sys.info()['nodename'])

tmpfilename

the name of the temporary .rds file created when any of the save flags are used. This file will be read in if it is found in the working directory, and the simulation will continue from the last point at which this file was saved (useful in case of power outages or broken nodes). Finally, this file will be deleted when the simulation is complete. Default is the system name (compname) appended to 'SIMDESIGN-TEMPFILE_'

save_results_dirname

a string indicating the name of the folder to save result objects to when save_results = TRUE. If a directory/folder does not exist in the current working directory then a unique one will be created automatically. Default is 'SimDesign-results_' with the associated compname appended

save_seeds_dirname

a string indicating the name of the folder to save .Random.seed objects to when save_seeds = TRUE. If a directory/folder does not exist in the current working directory then one will be created automatically. Default is 'SimDesign-seeds_' with the associated compname appended

save_generate_data_dirname

a string indicating the name of the folder to save data objects to when save_generate_data = TRUE. If a directory/folder does not exist in the current working directory then one will be created automatically. Within this folder nested directories will be created associated with each row in design. Default is 'SimDesign-generate-data_' with the compname appended

edit

a string indicating where to initiate a browser() call for editing and debugging. General options are 'none' (default) and 'all', which are used to disable debugging and to debug all the user defined functions, respectively. Specific options include: 'generate' to edit the data simulation function, 'analyse' to edit the computational function, and 'summarise' to edit the aggregation function.

Alternatively, users may place browser calls within the respective functions for debugging at specific lines (note: parallel computation flags will automatically be disabled when a browser() is detected)

progress

logical; display a progress bar for each simulation condition? This is useful when simulation conditions take a long time to run. Uses the pbapply package to display the progress. Default is FALSE

verbose

logical; print messages to the R console? Default is TRUE

x

SimDesign object returned from runSimulation

drop.extras

logical; don't print information about warnings, errors, simulation time, and replications? Default is FALSE

drop.design

logical; don't include information about the (potentially factorized) simulation design? This may be useful if you wish to cbind() the original design data.frame to the simulation results instead of using the auto-factorized version. Default is FALSE

format.time

logical; format SIM_TIME into a day/hour/min/sec character vector? Default is TRUE

...

additional arguments

object

SimDesign object returned from runSimulation

Details

The strategy for organizing the Monte Carlo simulation work-flow is to

1)

Define a suitable design data.frame object containing fixed conditional information about the Monte Carlo simulations. This is often expedited by using the expand.grid function, and if necessary using the subset function to remove redundant or non-applicable rows
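As a minimal base-R sketch of this step (the factor names and the dropped condition are hypothetical):

```r
# fully crossed 2 x 3 grid of conditions, one row per unique condition
Design <- expand.grid(sample_size = c(50, 100),
                      distribution = c('norm', 'chi', 't'))
# drop a non-applicable condition (e.g., suppose 't' is not studied at N = 50)
Design <- subset(Design, !(sample_size == 50 & distribution == 't'))
nrow(Design)   # 5 remaining conditions
```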

2)

Define the three step functions to generate the data (Generate), analyse the generated data by computing the respective parameter estimates, detection rates, etc (Analyse), and finally summarise the results across the total number of replications (Summarise). Note that these functions can be automatically generated by using the SimFunctions function.

3)

Pass the above objects to the runSimulation function, and declare the number of replications to perform with the replications input. This function will accept a design data.frame object and will return a suitable data.frame object with the simulation results

4)

Analyze the output from runSimulation, possibly using ANOVA techniques (SimAnova) and generating suitable plots and tables

For a skeleton version of the work-flow, which is often useful when initially defining a simulation, see SimFunctions. This function will write template simulation code to one/two files so that modifying the required functions and objects can begin immediately with minimal error. This means that you can focus on your Monte Carlo simulation immediately rather than worrying about the administrative code-work required to organize the simulation work-flow.

Additional information for each condition is also contained in the data.frame object returned by runSimulation: REPLICATIONS to indicate the number of Monte Carlo replications, SIM_TIME to indicate how long (in seconds) it took to complete all the Monte Carlo replications for each respective design condition, SEED if the seed argument was used, columns containing the number of replications which had to be re-run due to errors (where the error messages form the column names, prefixed with an ERROR: string), and columns containing the number of warnings, prefixed with a WARNING: string.

Additional examples, presentation files, and tutorials can be found on the package wiki located at https://github.com/philchalmers/SimDesign/wiki.

Value

a data.frame (also of class 'SimDesign') with the original design conditions in the left-most columns, simulation results and ERROR/WARNING's (if applicable) in the middle columns, and additional information (such as REPLICATIONS, SIM_TIME, and SEED) in the right-most columns.

Saving data, results, seeds, and the simulation state

To conserve RAM, temporary objects (such as data generated across conditions and replications) are discarded; however, these can be saved to the hard-disk by passing the appropriate flags. For longer simulations it is recommended to use save = TRUE to temporarily save the simulation state, and to use the save_results flag to write the analysis results to the hard-disk.

The generated data can be saved by passing save_generate_data = TRUE; however, it is often more memory efficient to use the save_seeds option to save only R's .Random.seed state (still allowing for complete reproducibility); individual .Random.seed terms may also be read in with the load_seed input to reproduce the exact simulation state at any given replication. Finally, providing a vector of seeds is also possible to ensure that each simulation condition is completely reproducible under the single/multi-core method selected.

Finally, when the Monte Carlo simulation is complete it is recommended to write the results to a hard-drive for safe keeping, particularly with the save and filename arguments (for reasons that are more obvious in the parallel computation descriptions below). Using the filename argument (along with save = TRUE) is much safer than using something like saveRDS directly because files will never accidentally be overwritten; instead, a new file name will be created when a conflict arises. This type of safety is prevalent in many aspects of the package and helps to avoid many unrecoverable (yet surprisingly common) mistakes.

Resuming temporary results

In the event of a computer crash, power outage, etc., if save = TRUE was used then the original code used to execute runSimulation() need only be re-run to resume the simulation. The saved temp file will be read into the function automatically, and the simulation will continue on the condition where it left off before the simulation state was terminated.

A note on parallel computing

When running simulations in parallel (either with parallel = TRUE or MPI = TRUE) R objects defined in the global environment will generally not be visible across nodes. Hence, you may see errors such as Error: object 'something' not found if you try to use an object that is defined in the workspace but is not passed to runSimulation. To avoid this type of error, simply pass additional objects to the fixed_objects input (usually it's convenient to supply a named list of these objects). Fortunately, however, custom functions defined in the global environment are exported across nodes automatically. This makes it convenient when writing code because custom functions will always be available across nodes if they are visible in the R workspace. As well, note the packages input to declare packages which must be loaded via library() in order to make specific non-standard R functions available across nodes.

Cluster computing

SimDesign code may be released to a computing system which supports parallel cluster computations using the industry standard Message Passing Interface (MPI) form. This simply requires that the computers be setup using the usual MPI requirements (typically, running some flavor of Linux, have password-less open-SSH access, IP addresses have been added to the /etc/hosts file or ~/.ssh/config, etc). More generally though, these resources are widely available through professional organizations dedicated to super-computing.

To setup the R code for an MPI cluster one need only add the argument MPI = TRUE, wrap the appropriate MPI directives around runSimulation, and submit the files using the suitable BASH commands to execute the mpirun tool. For example,

library(doMPI)
cl <- startMPIcluster()
registerDoMPI(cl)
runSimulation(design=Design, replications=1000, save=TRUE, filename='mysimulation',
              generate=Generate, analyse=Analyse, summarise=Summarise, MPI=TRUE)
closeCluster(cl)
mpi.quit()

The necessary SimDesign files must be uploaded to the dedicated master node so that a BASH call to mpirun can be used to distribute the work across slaves. For instance, if the following BASH command is run on the master node then 16 processes will be summoned (1 master, 15 slaves) across the computers named localhost, slave1, and slave2 in the ssh config file.

mpirun -np 16 -H localhost,slave1,slave2 R --slave -f simulation.R

Network computing

If you have access to a set of computers which can be linked via secure-shell (ssh) on the same LAN network then Network computing (a.k.a., a Beowulf cluster) may be a viable and useful option. This approach is similar to the MPI approach except that it offers more localized control and requires more hands-on administrative access to the master and slave nodes. The setup generally requires that the master node has SimDesign installed and the slave/master nodes have all the required R packages pre-installed (Unix utilities such as dsh are very useful for this purpose). Finally, the master node must have ssh access to the slave nodes, each slave node must have ssh access to the master node, and a cluster object (cl) from the parallel package must be defined on the master node.

Setup for network computing is generally more straightforward and controlled than the setup for MPI jobs in that it only requires the specification of a) the respective IP addresses within a defined R script, and b) the user name, if different from the master node's user name (otherwise, only a) is required). However, on Linux I have found it is also important to include relevant information about the host names and IP addresses in the /etc/hosts file on the master and slave nodes, and to ensure that the selected port (passed to makeCluster) on the master node is not hindered by a firewall.

As an example, using the following code the master node (primary) will spawn 7 slaves and 1 master, while a separate computer on the network with the associated IP address will spawn an additional 6 slaves. Information will be collected on the master node, which is also where the files and objects will be saved using the save inputs (if requested).

library(parallel)
primary <- '192.168.2.1'
IPs <- list(list(host=primary, user='myname', ncore=8),
            list(host='192.168.2.2', user='myname', ncore=6))
spec <- lapply(IPs, function(IP) rep(list(list(host=IP$host, user=IP$user)), IP$ncore))
spec <- unlist(spec, recursive=FALSE)
cl <- makeCluster(master=primary, spec=spec)
Final <- runSimulation(..., cl=cl)
stopCluster(cl)

The object cl is passed to runSimulation on the master node and the computations are distributed across the respective IP addresses. Finally, it's usually good practice to use stopCluster(cl) when all the simulations are said and done to release the communication between the computers, which is what the above code shows.

Alternatively, if you have provided suitable names for each respective slave node, as well as the master, then you can define the cl object using these instead (rather than supplying the IP addresses in your R script). This requires that the master node has itself and all the slave nodes defined in the /etc/hosts and ~/.ssh/config files, while the slave nodes require themselves and the master node in the same files (only 2 IP addresses required on each slave). Following this setup, and assuming the user name is the same across all nodes, the cl object could instead be defined with

library(parallel)
primary <- 'master'
IPs <- list(list(host=primary, ncore=8), list(host='slave', ncore=6))
spec <- lapply(IPs, function(IP) rep(list(list(host=IP$host)), IP$ncore))
spec <- unlist(spec, recursive=FALSE)
cl <- makeCluster(master=primary, spec=spec)
Final <- runSimulation(..., cl=cl)
stopCluster(cl)

Or, even more succinctly if all communication elements required are identical to the master node,

library(parallel)
primary <- 'master'
spec <- c(rep(primary, 8), rep('slave', 6))
cl <- makeCluster(master=primary, spec=spec)
Final <- runSimulation(..., cl=cl)
stopCluster(cl)

Poor man's cluster computing for independent nodes

In the event that you do not have access to a Beowulf-type cluster (described in the section on "Network Computing") but have multiple personal computers then the simulation code can be manually distributed across each independent computer instead. This simply requires passing a smaller value to the replications argument on each computer and later aggregating the results using the aggregate_simulations function.

For instance, if you have two computers available on different networks and want a total of 500 replications, you could pass replications = 300 to one computer and replications = 200 to the other, along with a filename argument (or simply save the final objects as .rds files manually after runSimulation() has finished). This will create two distinct .rds files which can be combined later with the aggregate_simulations function. The benefit of this approach over MPI or a Beowulf cluster is that the computers need not be linked on the same network. Should the need arise, the temporary simulation results can also be migrated to another computer in case of a complete hardware failure by moving the saved temp files to another node, modifying the suitable compname input to save_details (or, if the filename and tmpfilename were modified, matching those files accordingly), and resuming the simulation as normal.
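A sketch of this two-computer workflow, following the commented "not run" convention used in the Examples below (the file names are hypothetical, and the exact argument names of aggregate_simulations should be checked against its help page):

```r
## on computer 1
# runSimulation(design=Design, replications=300, save=TRUE, filename='sim-pc1',
#               generate=Generate, analyse=Analyse, summarise=Summarise)
## on computer 2
# runSimulation(design=Design, replications=200, save=TRUE, filename='sim-pc2',
#               generate=Generate, analyse=Analyse, summarise=Summarise)
## after copying both saved .rds files into one directory, combine them
# Final <- aggregate_simulations(files = c('sim-pc1.rds', 'sim-pc2.rds'))
```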

Note that this is also a useful tactic if the MPI or Network computing options require you to submit smaller jobs due to time and resource constraints, where fewer replications/nodes should be requested. After all the jobs are completed and saved to their respective files, aggregate_simulations can then collapse the files as if the simulations were run all at once. Hence, SimDesign makes submitting smaller jobs to super-computing resources considerably less error prone than managing a number of smaller jobs manually.

References

Sigal, M. J., & Chalmers, R. P. (in press). Play it again: Teaching statistics with Monte Carlo simulation. Journal of Statistics Education.

See Also

Generate, Analyse, Summarise, SimFunctions, SimClean, SimAnova, SimResults, aggregate_simulations, Attach

Examples

#-------------------------------------------------------------------------------
# Example 1: Sampling distribution of mean

# This example demonstrates some of the simpler uses of SimDesign,
# particularly for classroom settings. The only factor varied in this simulation
# is sample size.

# skeleton functions to be saved and edited
SimFunctions()

#### Step 1 --- Define your conditions under study and create design data.frame

Design <- data.frame(N = c(10, 20, 30))

#~~~~~~~~~~~~~~~~~~~~~~~~
#### Step 2 --- Define generate, analyse, and summarise functions

# help(Generate)
Generate <- function(condition, fixed_objects = NULL){
    dat <- with(condition, rnorm(N, 10, 5)) # N draws with mean 10 and SD 5
    dat
}

# help(Analyse)
Analyse <- function(condition, dat, fixed_objects = NULL){
    ret <- mean(dat) # mean of the sample data vector
    ret
}

# help(Summarise)
Summarise <- function(condition, results, fixed_objects = NULL){
    ret <- c(mu=mean(results), SE=sd(results)) # mean and SD summary of the sample means
    ret
}


#~~~~~~~~~~~~~~~~~~~~~~~~
#### Step 3 --- Collect results by looping over the rows in design

# run the simulation
Final <- runSimulation(design=Design, replications=1000,
                       generate=Generate, analyse=Analyse, summarise=Summarise)
Final


#~~~~~~~~~~~~~~~~~~~~~~~~
#### Extras
# compare SE estimates to the true SEs from the formula sigma/sqrt(N)
5 / sqrt(Design$N)

# To store the results from the analyse function either
#   a) omit a definition of summarise(), or
#   b) pass save_results = TRUE to runSimulation() and read the results in with SimResults()

# e.g., the a) approach
results <- runSimulation(design=Design, replications=1000,
                       generate=Generate, analyse=Analyse)
str(results)
head(results[[1]])

# or b) approach
Final <- runSimulation(design=Design, replications=1000, save_results=TRUE,
                       generate=Generate, analyse=Analyse, summarise=Summarise)
results <- SimResults(Final)
str(results)
head(results[[1]]$results)

# remove the saved results from the hard-drive if you no longer want them
SimClean(results = TRUE)




#-------------------------------------------------------------------------------
# Example 2: t-test and Welch test when varying sample size, group sizes, and SDs

# skeleton functions to be saved and edited
SimFunctions()

## Not run: 
# in real-world simulations it's often better/easier to save
# these functions directly to your hard-drive with
SimFunctions('my-simulation')

## End(Not run)

#### Step 1 --- Define your conditions under study and create design data.frame

Design <- expand.grid(sample_size = c(30, 60, 90, 120),
                      group_size_ratio = c(1, 4, 8),
                      standard_deviation_ratio = c(.5, 1, 2))
dim(Design)
head(Design)

#~~~~~~~~~~~~~~~~~~~~~~~~
#### Step 2 --- Define generate, analyse, and summarise functions

Generate <- function(condition, fixed_objects = NULL){
    N <- condition$sample_size      # alternatively, could use Attach() to make objects available
    grs <- condition$group_size_ratio
    sd <- condition$standard_deviation_ratio
    if(grs < 1){
        N2 <- N / (1/grs + 1)
        N1 <- N - N2
    } else {
        N1 <- N / (grs + 1)
        N2 <- N - N1
    }
    group1 <- rnorm(N1)
    group2 <- rnorm(N2, sd=sd)
    dat <- data.frame(group = c(rep('g1', N1), rep('g2', N2)), DV = c(group1, group2))
    dat
}

Analyse <- function(condition, dat, fixed_objects = NULL){
    welch <- t.test(DV ~ group, dat)
    ind <- t.test(DV ~ group, dat, var.equal=TRUE)

    # In this function the p values for the t-tests are returned,
    #  and make sure to name each element, for future reference
    ret <- c(welch = welch$p.value, independent = ind$p.value)
    ret
}

Summarise <- function(condition, results, fixed_objects = NULL){
    #find results of interest here (e.g., alpha < .1, .05, .01)
    ret <- EDR(results, alpha = .05)
    ret
}


#~~~~~~~~~~~~~~~~~~~~~~~~
#### Step 3 --- Collect results by looping over the rows in design

# first, test to see if it works
Final <- runSimulation(design=Design, replications=5,
                       generate=Generate, analyse=Analyse, summarise=Summarise)
head(Final)

## Not run: 
# complete run with 1000 replications per condition
Final <- runSimulation(design=Design, replications=1000, parallel=TRUE,
                       generate=Generate, analyse=Analyse, summarise=Summarise)
head(Final, digits = 3)
View(Final)

## save final results to a file upon completion (not run)
runSimulation(design=Design, replications=1000, parallel=TRUE, save=TRUE, filename = 'mysim',
              generate=Generate, analyse=Analyse, summarise=Summarise)



## Debug the generate function. See ?browser for help on debugging
##   Type help to see available commands (e.g., n, c, where, ...),
##   ls() to see what has been defined, and type Q to quit the debugger
runSimulation(design=Design, replications=1000,
              generate=Generate, analyse=Analyse, summarise=Summarise,
              parallel=TRUE, edit='generate')

## Alternatively, place a browser() within the desired function line to
##   jump to a specific location
Summarise <- function(condition, results, fixed_objects = NULL){
    #find results of interest here (e.g., alpha < .1, .05, .01)
    ret <- EDR(results, alpha = .05)
    browser()
    ret
}

runSimulation(design=Design, replications=1000,
              generate=Generate, analyse=Analyse, summarise=Summarise,
              parallel=TRUE)




## EXTRA: To run the simulation on a MPI cluster, use the following setup on each node (not run)
# library(doMPI)
# cl <- startMPIcluster()
# registerDoMPI(cl)
# Final <- runSimulation(design=Design, replications=1000, MPI=TRUE, save=TRUE,
#                        generate=Generate, analyse=Analyse, summarise=Summarise)
# saveRDS(Final, 'mysim.rds')
# closeCluster(cl)
# mpi.quit()


## Similarly, run simulation on a network linked via ssh
##  (two way ssh key-paired connection must be possible between master and slave nodes)
##
## define IP addresses, including primary IP
# primary <- '192.168.2.20'
# IPs <- list(
#     list(host=primary, user='phil', ncore=8),
#     list(host='192.168.2.17', user='phil', ncore=8)
# )
# spec <- lapply(IPs, function(IP)
#                    rep(list(list(host=IP$host, user=IP$user)), IP$ncore))
# spec <- unlist(spec, recursive=FALSE)
#
# cl <- parallel::makeCluster(type='PSOCK', master=primary, spec=spec)
# Final <- runSimulation(design=Design, replications=1000, parallel = TRUE, save=TRUE,
#                        generate=Generate, analyse=Analyse, summarise=Summarise, cl=cl)

#~~~~~~~~~~~~~~~~~~~~~~~~
###### Post-analysis: Analyze the results via functions like lm() or SimAnova(), and create
###### tables(dplyr) or plots (ggplot2) to help visualize the results.
###### This is where you get to be a data analyst!

library(dplyr)
Final2 <- tbl_df(Final)
Final2 %>% summarise(mean(welch), mean(independent))
Final2 %>% group_by(standard_deviation_ratio, group_size_ratio) %>%
   summarise(mean(welch), mean(independent))

# quick ANOVA analysis method with all two-way interactions
SimAnova( ~ (sample_size + group_size_ratio + standard_deviation_ratio)^2, Final)

# or more specific anovas
SimAnova(independent ~ (group_size_ratio + standard_deviation_ratio)^2,
    Final)

# make some plots
library(ggplot2)
library(reshape2)
welch_ind <- Final[,c('group_size_ratio', "standard_deviation_ratio",
    "welch", "independent")]
dd <- melt(welch_ind, id.vars = names(welch_ind)[1:2])

ggplot(dd, aes(factor(group_size_ratio), value)) +
    geom_abline(intercept=0.05, slope=0, col = 'red') +
    geom_abline(intercept=0.075, slope=0, col = 'red', linetype='dotted') +
    geom_abline(intercept=0.025, slope=0, col = 'red', linetype='dotted') +
    geom_boxplot() + facet_wrap(~variable)

ggplot(dd, aes(factor(group_size_ratio), value, fill = factor(standard_deviation_ratio))) +
    geom_abline(intercept=0.05, slope=0, col = 'red') +
    geom_abline(intercept=0.075, slope=0, col = 'red', linetype='dotted') +
    geom_abline(intercept=0.025, slope=0, col = 'red', linetype='dotted') +
    geom_boxplot() + facet_grid(variable~standard_deviation_ratio) +
    theme(legend.position = 'none')


## End(Not run)