Part of the analytic development process is ensuring that the algorithms you develop will be reproducible and portable for an end-user. Often an end-user's machine is not configured in the same way as the developer's, leading to unexpected side effects or errors. To ensure that your methods are reproducible and portable, your analytic data products will be reviewed on multiple 'new' machines representing those used by potential end-users. In lieu of a final exam, each student will be tasked to review two analytic projects developed by students in the course. Using the information presented in the GitHub repository README file and any documentation files provided with the analytic, the reviewer will assess the analytic in these areas:
- Accuracy
- Compilation
- Ease of use
Reviewers will capture their findings by (1) generating an issue on the analytic project's GitHub page for each finding and (2) presenting all findings in a final report (an .Rmd file rendered with the html_document output format). Both the report and the GitHub issues should include a brief discussion of each problem along with one or more reproducible examples.
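A minimal skeleton for the review report, assuming standard R Markdown conventions (the title, section heading, and placeholder code below are illustrative, not required wording; the `error=TRUE` chunk option lets the document knit even when the example code fails):

````
---
title: "Analytic Review: <project name>"
author: "<reviewer name>"
output: html_document
---

## Finding 1: <short description>

Brief discussion of the problem, matching the GitHub issue filed
against the project.

```{r, error=TRUE}
# Minimal reproducible example demonstrating the finding
library(projectname)  # hypothetical package name
# code that triggers the problem goes here
```
````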
After reviewing the analytic, the reviewer will assign it one of the following grades. Note: I will perform a second review of all projects to ensure that grades are assigned fairly. If my final grade differs greatly from the grade provided by the reviewer, it will be taken as an indication of poor engagement by the reviewer.
- Outstanding – ready to publish/deploy (50 pts)
- Excellent – very little rework required (45 pts)
- Satisfactory – extensive rework required (40 pts)
- Unsatisfactory – complete restart required (35 pts)
```r
# Define function to randomly assign N students as reviewers.
# Each student (base) receives two distinct reviewers (rev1, rev2),
# neither of whom is the student themselves. Permutations are redrawn
# until both constraints hold, which avoids the dead ends and last-slot
# self-assignments a one-pass greedy fill can produce.
# Requires n.choices >= 3.
reviews <- function(n.choices = 13) {
  base <- seq_len(as.integer(n.choices))
  repeat {
    rev1 <- sample(base)
    rev2 <- sample(base)
    if (all(rev1 != base) && all(rev2 != base) && all(rev1 != rev2)) break
  }
  data.frame(base = base, rev1 = rev1, rev2 = rev2)
}
```
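Because `reviews()` draws assignments at random, the table changes every time the document is knit. A short sketch, assuming you want the same draw on every run (the seed value here is arbitrary):

```r
set.seed(782)           # arbitrary seed; fixes the random draw
reviews(n.choices = 5)  # small example: returns a data frame with
                        # columns base, rev1, and rev2
```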
```r
students <- c('Uhorchak', 'Munson', 'Ramirez', 'Smith', 'Trigo',
              'Kalhoff', 'Butt', 'Elliott', 'Johnson', 'Sevier',
              'Small', 'Gallagher', 'Stuntz', 'Meyer')

git_url <- c('https://github.com/nuhorchak/RClean',
             'https://github.com/evan-l-munson/SAoTD',
             'https://github.com/RachelRamirez/NPS',
             'https://github.com/JSmith146/CoRpEx',
             'https://github.com/citation891/MCAC',
             'https://github.com/AFIT-R/instaExtract',
             'https://github.com/SpencerButt/IDPS-LAAD',
             'https://github.com/jtelliott/attrition-forecast',
             'https://github.com/slackliner33/Yahoo_DFS_Optimizer',
             'https://github.com/williamcsevier/TextML',
             'https://github.com/msmall318/Boots',
             'https://github.com/gallagherj2008/SWAT',
             '',
             'https://github.com/AFIT-R/MODA')
```
The table below lists each developer, the URL of their respective project, and the assigned reviewers.
```r
df <- reviews(length(students))

review_table <- data.frame(Developer = students,
                           Git_URL   = git_url,
                           Reviewer1 = students[df[, 2]],
                           Reviewer2 = students[df[, 3]])

knitr::kable(review_table,
             caption = 'List of developers, projects, and assigned reviewers for OPER782 (Winter 2018)')
```
The assigned reviewers are to:
1. Install the project using `devtools::install_github('devname/projectname')` (see the sketch after this list)
2. Review the documentation provided
3. Test out the project in accordance with the documentation
4. Write a one-page review of the project using the rubric provided above
5. Fork the project repository
6. Add your review to the `inst/` directory of the project
7. Submit a pull request to the project developer to have them include the review as part of their project
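A sketch of the first three steps for one assigned project, using the RClean repository from the table above as the example (any of the listed repositories would work the same way):

```r
# Install devtools if it is not already available
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}

# Install the assigned project directly from GitHub
devtools::install_github('nuhorchak/RClean')

# Load the package and browse its documentation before testing
library(RClean)
help(package = "RClean")
```

For the fork and pull-request steps, the GitHub web interface works fine; the `usethis` package's `create_from_github()` and `pr_init()`/`pr_push()` helpers are an alternative if you prefer to stay in R.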