```{r setup, include = FALSE}
suppressPackageStartupMessages({
  library(valtools)
  library(knitr)
  library(kableExtra)
  library(magrittr)
  library(devtools)
  library(drugdevelopR)
})

opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  eval = TRUE,
  echo = FALSE,
  results = "asis",
  message = FALSE,
  tidy = FALSE
)

options(
  knitr.kable.NA = '',
  knitr.duplicate.label = "allow"
)

all_sig <- vt_scrape_sig_table()
```

\newpage

# General introduction and validation plan {-}

In the planning of confirmatory studies, determining the sample size is essential, as it strongly influences the chances of achieving the study objective. Building on the work of Götte et al. [@goette2015], methods for optimal sample size calculation and go/no-go decision rules were developed for phase II/III drug development programs within a utility-based, Bayesian-frequentist framework. Preussler extended these methods in her dissertation “Integrated Planning of Pilot and Subsequent Confirmatory Study in Clinical Research - Finding Optimal Designs in a Utility-Based Framework” [@preussler2020]. Additionally, methods for multiple endpoints were implemented [@kieser2018]. To facilitate the practical application of these approaches, R Shiny applications for a basic setting as well as for three extensions were implemented. The extension of this project into a fully functional software product was funded by the German Research Foundation (DFG) within the project “Integrated Planning of Drug Development Programs - drugdevelopR”. The package was developed at the Institute of Medical Biometry (IMBI) in Heidelberg, Germany.

To ensure consistent scientific quality, to document that the package’s user requirements were met, and to provide evidence of the package’s validity to the general public, we supply the following human-readable validation report. Within this report, we present the results of a validation suite developed at IMBI specifically for this package. The validation suite comprises several benchmark scenarios of optimal drug development planning, as presented in published, quality-assured scientific work. The published results are compared with the results of the software. The validation plan aims for full coverage of all package methods.

The validation suite was programmed using the valtools R package [@hughes2021]. We closely followed the R package validation framework [@phuse2021] of the PHUSE Working Group on “Data Visualisation & Open Source Technology”.

To avoid confusion, it should be noted that the framework makes a clear distinction between testing (unit testing) and validating a package. Testing, on the one hand, means checking the program from the software developer’s perspective. The so-called unit tests usually cover small sections of the program, each checking one specific part of the software, e.g., a single function. With their help, the programmer ensures that the code works as intended. Unit tests aim for a code coverage close to 100 percent. Validation, on the other hand, means checking the program from the end user’s point of view. By checking larger parts of the software within one test case, the validation process provides evidence that the program will deliver the expected results in a production environment.

Both unit tests and a validation suite were implemented for the drugdevelopR package. However, this report only covers validation. Whenever we refer to test cases or test code within this document, we mean tests for validation and not unit tests.
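This distinction can be illustrated with a toy example in base R. The utility function below is purely hypothetical and not part of drugdevelopR; it only serves to contrast a unit-test-style check of an internal property with a validation-style check of a user-facing result against an external reference value.

```r
# Toy utility model (hypothetical, NOT drugdevelopR code): the benefit grows
# with the square root of the sample size n, the cost grows linearly.
utility <- function(n, benefit = 3, cost = 0.1) {
  benefit * sqrt(n) - cost * n
}

# Unit-test style: check one small internal property of the function.
stopifnot(utility(0) == 0)

# Validation style: check the user-facing optimum against a known reference.
# The analytic optimum is (benefit / (2 * cost))^2 = (3 / 0.2)^2 = 225.
n_grid <- 1:1000
n_opt <- n_grid[which.max(utility(n_grid))]
stopifnot(n_opt == 225)
```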

In the following, we supply a brief introduction to this validation framework. The framework comprises four distinct steps:

  1. Requirements: Programmers, subject matter experts and end users formulate clear, general expectations of the program’s functionality. (In our case, subject matter experts and end users coincide. Hence, the requirements were discussed between the programmer and two experts.) The requirements are readable by any expert without programming experience. Neither program code nor function names are defined within this part of the validation framework. For each requirement, risk assessments of the likelihood of errors and of the impact of possible errors are documented.
  2. Test cases: For each requirement, the programmer writes one or several test cases. These are concise plain-text descriptions of how to verify that the requirement has been met by the package. Each test case clearly specifies the program input, the names of the functions to be used, and the expected program output. However, no program code is supplied. Each test case usually covers a use case that could be expected in a real-life application of the program.
  3. Test code: Another programmer, who is not involved in the package code development, will then implement the test cases in R. The test code is clearly structured and thereby demonstrates that it follows the corresponding test case. The external programmer writes the test code solely using the description of the test cases and the package’s documentation, without deeper insight into the package’s code. This has the advantage that misunderstandings, poor documentation and other pitfalls will be discovered by an independent user before software release.
  4. Validation report: As the last step, the primary programmer generates a human-readable validation report. Within this report, requirements and test cases will be listed and the results of the test code are presented. Thus, the whole validation process and its outcome are available to anyone who wants to verify the package’s validity. After changes to the program, the validation report can be easily generated (possibly after adding additional test cases for new functionality).

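The four steps above map directly onto the valtools workflow. The following is a hypothetical sketch of how such a suite is scaffolded; the file names are illustrative, not the ones used for drugdevelopR, and the calls are guarded so that they only run in an interactive session with valtools installed.

```r
# Illustrative valtools scaffolding for the four validation steps.
# File names are hypothetical examples.
if (interactive() && requireNamespace("valtools", quietly = TRUE)) {
  valtools::vt_use_validation()                  # set up the validation infrastructure
  valtools::vt_use_req("01_requirement.md")      # step 1: write a requirement
  valtools::vt_use_test_case("01_test_case.md")  # step 2: write a test case
  valtools::vt_use_test_code("01_test_code.R")   # step 3: implement the test code
  valtools::vt_validate_report()                 # step 4: render the validation report
}
```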
# Release details {-}

## Package Information {-}

### Change Log {-}

```{r}
vt_scrape_change_log() %>% 
  vt_kable_change_log()
```

### Validation Environment {-}

```{r}
vt_scrape_val_env() %>% 
  vt_kable_val_env()
```

## Authors {-}

### Requirements {-}

```{r}
vt_scrape_requirement_editors() %>% 
  vt_kable_requirement_editors(latex_options = "HOLD_position")
```

### Test Case Authors {-}

```{r}
vt_scrape_test_case_editors() %>%
  vt_kable_test_case_editors(latex_options = "HOLD_position")
```

### Test Code Authors {-}

```{r}
vt_scrape_test_code_editors() %>%
  vt_kable_test_code_editors(latex_options = "HOLD_position",
                             longtable_clean_cut = FALSE)
```

## Traceability {-}

```{r}
vt_scrape_coverage_matrix() %>% 
  vt_kable_coverage_matrix(longtable_clean_cut = FALSE)
```

\clearpage

# Risk Assessment {-}

```{r}
vt_scrape_risk_assessment() %>% 
  vt_kable_risk_assessment()
```

\newpage

# User Requirement Specification {-}

In the following section, we will specify functionality that the end user can expect from the drugdevelopR package. We will use the following terms in the text, following the definitions from [@preussler2020]:

```{r}
child_files_req <- vt_get_child_files(validation_order = "requirements")
vt_file(vt_path(child_files_req), dynamic_referencing = FALSE)
```

# Test Cases {-}

```{r}
child_files_test <- vt_get_child_files(validation_order = "test_cases")
vt_file(vt_path(child_files_test), dynamic_referencing = FALSE)
```

# Test Results {-}

```{r}
# Currently, this code causes a LaTeX error in valtools. This can be manually
# fixed by installing a patched fork via
# devtools::install_github("LukasDSauer/valtools")
test_files <- c("01_BasicSettingTestCode.R",
                "02_BiasAdjustmentTestCode.R",
                "03_MultitrialTestCode.R",
                "04_MultiarmTestCode.R",
                "05_MultipleTestCode.R")
n_fail <- rep(0, length(test_files))
n_pass <- rep(0, length(test_files))
dur_hours <- rep(0, length(test_files))
# Run each test code file, print its output, and scrape pass/fail counts
# and run time from the captured report text.
for (i in seq_along(test_files)) {
  start_time <- Sys.time()
  body <- capture.output(vt_file(vt_path(file.path("test_code", test_files[i])),
                                 dynamic_referencing = FALSE))
  dur_hours[i] <- as.numeric(Sys.time() - start_time, units = "hours")
  n_fail[i] <- sum(grepl("\\{Fail\\}", body))
  n_pass[i] <- sum(grepl("\\{Pass\\}", body))
  cat(body, sep = "\n")
}

settings <- c("01 Basic", "02 Bias", "03 Multitrial", "04 Multiarm",
              "05 Multiple", "Total")
data.frame(Setting = settings,
           Failures = c(n_fail, sum(n_fail)),
           Passes = c(n_pass, sum(n_pass))) %>% 
  kable(caption = "Summary of failures and passes") %>% 
  kable_styling(latex_options = "HOLD_position")

in_hours <- function(x) {
  paste(signif(x, digits = 3), "hours")
}

data.frame(Setting = settings,
           Duration = in_hours(c(dur_hours, sum(dur_hours)))) %>% 
  kable(caption = "Duration of test code runs") %>% 
  kable_styling(latex_options = "HOLD_position")
```
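The pass and fail counts in the summary table are scraped from the captured report text by matching the literal markers `{Pass}` and `{Fail}`. A minimal standalone sketch of this mechanism on mock output lines:

```r
# Count pass/fail markers in mock captured test output (the real lines come
# from capture.output(vt_file(...)) above); the braces must be escaped in
# the regular expression.
mock_body <- c("Test 1.1 ... {Pass}",
               "Test 1.2 ... {Fail}",
               "Test 2.1 ... {Pass}")
n_pass <- sum(grepl("\\{Pass\\}", mock_body))  # 2
n_fail <- sum(grepl("\\{Fail\\}", mock_body))  # 1
```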

# References {-}



Sterniii3/drugdevelopR documentation built on Jan. 26, 2024, 6:17 a.m.