```r
knitr::opts_chunk$set(collapse = TRUE, comment = "#>")
library(Lab6AdvancedR)
```
Question: How much time does it take to run the brute-force algorithm for n = 16 objects?
Answer: 0.5 s
Question: How much time does it take to run the dynamic-programming algorithm for n = 500 objects?
Answer: 13 s
Question: How much time does it take to run the greedy algorithm for n = 1000000 objects?
Answer: 0.2 s
Question: What performance gain could you get by trying to improve your code?
Answer:
The brute-force algorithm was optimized by replacing a for loop with R's built-in, vectorized sum() function. The running time for the n = 16 case dropped from 1.8 s to 0.5 s.
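A minimal sketch of this change (the object names `x` and `chosen` are placeholders, not the package's actual variables): the per-element accumulation loop over a candidate subset is replaced by a single vectorized sum() call.

```r
# Hypothetical inner step of the brute-force search: total weight of the
# items selected by a logical mask `chosen`.
x <- data.frame(w = c(2, 5, 3), v = c(10, 30, 15))
chosen <- c(TRUE, FALSE, TRUE)

# Loop version (slower): accumulate element by element in R
total_w <- 0
for (i in seq_len(nrow(x))) {
  if (chosen[i]) total_w <- total_w + x$w[i]
}

# Vectorized version (faster): sum() runs the loop in compiled code
total_w_vec <- sum(x$w[chosen])

stopifnot(total_w == total_w_vec)
```

The same substitution applies to the value sums, so each of the 2^n candidate subsets is scored without an interpreted loop.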
The dynamic-programming algorithm was optimized by removing a redundant if condition. The running time for the n = 500 case dropped from 15 s to 13 s.
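The package's actual table-fill code is not shown here, but the kind of branch removal described above can be sketched as follows: instead of testing "does item i fit?" inside the inner loop, the previous row is copied once and the loop is restricted to the capacities where the item fits, so the test runs once per item rather than once per cell.

```r
# Hypothetical 0/1 knapsack DP sketch: w = weights, v = values, W = capacity.
knapsack_dynamic_sketch <- function(w, v, W) {
  n <- length(w)
  m <- matrix(0, nrow = n + 1, ncol = W + 1)  # m[i+1, j+1]: best value, items 1..i, capacity j
  for (i in seq_len(n)) {
    m[i + 1, ] <- m[i, ]            # default: item i not taken (no per-cell if needed)
    if (w[i] <= W) {                # test once per item, not once per capacity
      for (j in w[i]:W) {           # only capacities where item i actually fits
        m[i + 1, j + 1] <- max(m[i, j + 1],               # skip item i
                               m[i, j + 1 - w[i]] + v[i]) # take item i
      }
    }
  }
  m[n + 1, W + 1]
}

knapsack_dynamic_sketch(c(1, 3, 4), c(15, 20, 30), 4)  # -> 35 (items 1 and 2)
```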
The greedy algorithm was optimized by assigning new columns directly to the data.frame cx instead of first creating them as separate vectors and then attaching them with cbind(). The running time for the n = 1000000 case dropped from 1 s to 0.2 s.
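The two column-creation styles can be contrasted as below (the column names and the value-to-weight ratio column are illustrative placeholders, not necessarily the columns the package computes):

```r
cx <- data.frame(w = c(2, 5, 3), v = c(10, 30, 15))

# Slower pattern: build a vector, then bind it on as a new data frame
ratio <- cx$v / cx$w
cx_slow <- cbind(cx, ratio = ratio)   # copies the whole data frame

# Faster pattern: assign the column directly
cx$ratio <- cx$v / cx$w

stopifnot(identical(cx_slow$ratio, cx$ratio))
```

For a million-row data frame the cbind() call materialises a full copy, which is the cost the direct assignment avoids.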
Using profvis, several bottlenecks were identified. In particular, in the function that gathers data from the server, the main time-consuming steps are the connection to the server and the conversion of the retrieved JSON into a list. Since the data of interest change only once per hour, the decision was made to cache the data locally and to refresh them on a user request only after more than one hour has passed.
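A sketch of that caching pattern, assuming a hypothetical fetch_from_server() for the slow server call (the real function and cache file name are not shown in this vignette):

```r
# Return cached data if it is younger than max_age seconds; otherwise
# hit the server and refresh the local copy.
get_data_cached <- function(cache_file = "server_data.rds",
                            max_age = 3600,
                            fetch = fetch_from_server) {
  if (file.exists(cache_file)) {
    age <- as.numeric(Sys.time()) - as.numeric(file.mtime(cache_file))
    if (age < max_age) {
      return(readRDS(cache_file))  # cache still fresh: skip the server call
    }
  }
  data <- fetch()                  # slow path: HTTP request + JSON -> list
  saveRDS(data, cache_file)        # refresh the local copy
  data
}
```

With this wrapper, repeated user requests within the hour pay only the cost of readRDS() instead of the connection and JSON-parsing steps that profvis flagged.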