inst/troubleshooting.md

# Common Questions

To estimate how long warpgrouping will take, run `group.warpgroup` on a small random subset of peak groups and extrapolate:

```r
# Copy the xcmsSet and keep only 10 randomly chosen peak groups.
xs2 = xs
xs2@groupidx = sample(xs2@groupidx, 10)

# Time the warpgrouping of the subset.
start = Sys.time()
group.warpgroup(xs2, ...)
end = Sys.time()

paste("Subset took", format(end - start))
paste("Full dataset will take roughly", length(xs@groupidx) / length(xs2@groupidx), "times as long to process.")
```
Peaks with missing values can be filled with the mean of the other peaks in their group, after which the group statistics should be refreshed. Note that `which(..., arr.ind = TRUE)` returns row/column indices into the subsetted matrix, so the column index must be mapped back to a column name:

```r
value.cols = c("mz", "mzmin", "mzmax", "rt", "rtmin", "rtmax", "into")
missing.values = which(is.na(xs@peaks[, value.cols]), arr.ind = TRUE)

for (i in seq_len(nrow(missing.values))) {
  row = missing.values[i, "row"]
  col = value.cols[missing.values[i, "col"]]

  # All peaks belonging to the same group as the peak with the missing value.
  ps = xs@groupidx[[xs@peaks[row, "new.gidx"]]]

  # Fill the missing value with the group mean for that column.
  xs@peaks[row, col] = mean(xs@peaks[ps, col], na.rm = TRUE)
}

xs = warpgroup::refreshGroups(xs)
```

# Parallel Computing

Warpgroup supports parallel computation via the foreach framework. Register any foreach-compatible backend and warpgroup will automatically use it to distribute the job: doParallel works well for local cores, and doRedis for distributing the job across multiple machines. Run code like the following examples before calling `group.warpgroup`.

## doParallel

```r
library(doParallel)

# Use all but one core so the machine stays responsive.
cl = makeCluster(detectCores() - 1)
registerDoParallel(cl)
```
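
Once a backend is registered, a quick way to confirm that foreach can actually reach the workers is a trivial `%dopar%` job. This is a minimal sketch using the standard foreach/doParallel API; the cluster setup repeats the snippet above (with a fixed worker count) so the example is self-contained:

```r
library(doParallel)

# Spin up a small local cluster and register it with foreach.
cl = makeCluster(2)
registerDoParallel(cl)

# getDoParWorkers() reports how many workers foreach loops,
# including warpgroup's, will fan out across.
getDoParWorkers()  # 2

# A trivial parallel job confirms the workers respond.
res = foreach(i = 1:4, .combine = c) %dopar% i^2
res  # 1 4 9 16

# Release the workers when processing is finished.
stopCluster(cl)
```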

## doRedis

### Master node

```r
library(doRedis)
registerDoRedis("worklist.name", "IP.OF.REDIS.SERVER")
```

### Processing node(s)

```r
library(doRedis)

# n: the number of workers to start on this machine
startLocalWorkers(n, "worklist.name", "IP.OF.REDIS.SERVER")
```

# Thank You

Thank you to the following for beta testing and bug report assistance:

- Kevin Cho
- Robert Sander Jansen



nathaniel-mahieu/warpgroup documentation built on May 23, 2019, 12:19 p.m.