```r
library("rco")
library("microbenchmark")
library("ggplot2")

autoplot.microbenchmark <- function(obj) {
  levels(obj$expr) <- paste0("Expr_", seq_along(levels(obj$expr)))
  microbenchmark:::autoplot.microbenchmark(obj)
}

speed_up <- function(obj) {
  levels(obj$expr) <- paste0("Expr_", seq_along(levels(obj$expr)))
  obj <- as.data.frame(obj)
  summaries <- do.call(rbind, by(obj$time, obj$expr, summary))
  res <- c()
  for (i in seq_len(nrow(summaries) - 1) + 1) {
    res <- rbind(res, summaries[1, ] / summaries[i, ])
  }
  rownames(res) <- levels(obj$expr)[-1]
  return(res)
}
```
Constant folding is an optimization technique that eliminates expressions that calculate a value that can already be determined before code execution. These are typically calculations that only reference constant values or expressions that reference variables whose values are constant.
For example, consider the statement:
```r
i <- 320 * 200 * 32
```
Most compilers would not actually generate two multiply instructions. Instead, they identify constructs like these and substitute the computed value (in this case, 2048000), so the code is replaced by:
```r
i <- 2048000
```
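The transformation above can be sketched in plain R (a toy illustration of the idea, not rco's actual implementation):

```r
# Toy constant folding: evaluate the purely-constant right-hand side once,
# then splice the computed value back into the expression.
expr <- quote(i <- 320 * 200 * 32)
expr[[3]] <- eval(expr[[3]])  # the RHS references only constants, so fold it
deparse(expr)
#> [1] "i <- 2048000"
```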
A simple example would be having to convert the unit of many time samples from hours to milliseconds:

```r
milliseconds <- 1000 * 60 * 60 * hours
```
```r
code <- paste(
  "hours_vector <- runif(1000, 0, 24)",
  "ms_vector <- numeric(1000)",
  "# of course it would be much more efficient to do vectorized operations xP",
  "for (i in seq_along(hours_vector)) {",
  "  ms_vector[i] <- 1000 * 60 * 60 * hours_vector[i]",
  "}",
  sep = "\n"
)
cat(code)
```
Then, the automatically optimized code would be:
```r
opt_code <- opt_constant_folding(list(code))
cat(opt_code$codes[[1]])
```
Then, we can measure the execution time of each version, and the speed-up:
```r
bmark_res <- microbenchmark(
  { eval(parse(text = code)) },
  { eval(parse(text = opt_code$codes[[1]])) }
)
autoplot(bmark_res)
speed_up(bmark_res)
```
Specifically, `opt_constant_folding` will fold expressions made up solely of operators and constants whose tokens, as parsed by the `utils::getParseData` function, are:
```r
rco:::ops
rco:::constants
rco:::precedence_ops
```
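As a quick added illustration, `utils::getParseData` lets us see which token types a purely-constant expression is made of:

```r
# Parse a constant expression and list the terminal token types it contains.
pd <- utils::getParseData(parse(text = "1000 * 60 * 60", keep.source = TRUE))
unique(pd$token[pd$terminal])  # only numeric constants and the `*` operator
```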
When folding, we can run into floating-point precision issues. For instance, consider the following code:
```r
x <- 1 / (2 + 1)
y <- 1 / (2 + 1)
z <- 1 / (2 + 1)
x + y + z == 1
```
If we fold it, we would have:
```r
code <- paste(
  "x <- 1/(2+1)",
  "y <- 1/(2+1)",
  "z <- 1/(2+1)",
  "x + y + z == 1",
  sep = "\n"
)
opt_code <- opt_constant_folding(list(code), fold_floats = TRUE)$codes[[1]]
cat(opt_code)
```
However, this code is not equivalent due to precision:
```r
eval(parse(text = code))
```

```r
eval(parse(text = opt_code))
```
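One way to see where the precision goes (an added aside; not necessarily how `opt_constant_folding` folds internally): writing a computed double back as decimal source text with a limited number of digits, then re-parsing it, can yield a different double:

```r
x <- 1 / 3
deparse(x)  # the value rendered with 15 significant digits
identical(x, as.numeric(deparse(x)))  # FALSE: the round trip loses bits
```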
In this case, we can use the `fold_floats` parameter. If set to `FALSE`, the optimizer will fold every expression except those that would lose precision:
```r
opt_code <- opt_constant_folding(list(code), fold_floats = FALSE)$codes[[1]]
cat(opt_code)
```
Consider this example, where folding a sub-expression causes precision loss, but since the loss does not affect the overall result, the expression can still be folded:
```r
opt_code <- opt_constant_folding(list(paste(
  "x <- 1/(2+1)",      # will not fold it because we lose precision
  "y <- 1/(2+1) > 3",  # however, folded or not, it is not > 3, so folds it
  sep = "\n"
)), fold_floats = FALSE)$codes[[1]]
cat(opt_code)
```
For example, `0 * x` could be folded to `0`. However, this could change the code's semantics: if `x` does not exist, the folded code would not throw an error, while the original code would.
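A small added demonstration of that semantic difference, evaluating in a fresh environment where `x` is undefined:

```r
e <- new.env(parent = baseenv())  # environment with no `x` defined
# The original expression fails because `x` cannot be found:
tryCatch(eval(quote(0 * x), e), error = function(err) conditionMessage(err))
#> [1] "object 'x' not found"
# The folded version evaluates without ever touching `x`:
eval(quote(0), e)
#> [1] 0
```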
The R parser is left-associative, so `x + 10 + 200` is parsed as `((x + 10) + 200)`. Therefore, this expression is not folded to `x + 210`.
If we took operator associativity into account, we could rewrite `x + 10 + 200` as `10 + 200 + x`, and then fold it to `210 + x`.
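The left-associative parse can be inspected directly (an added illustration):

```r
# quote() captures the unevaluated parse tree of the expression.
e <- quote(x + 10 + 200)
e[[2]]  # left operand of the outer `+`: the sub-expression x + 10
e[[3]]  # right operand of the outer `+`: the constant 200
```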