KANUTE - behaviour testing between software packages and versions

Example: Kadmon

Dave T. Gerrard, University of Manchester

Produced `r Sys.Date()`

Dave's website

# Table of contents

- [Overview](#overview)
- [Tests directory](#test_dir)
- [Comparing tests example](#Example)

## What is the working directory?

```r
getwd()
options(width=500)
```

[Back to top](#Top)


# <a name="overview"></a>Overview ##

This is a framework and piece of software to support:-
- checking if a specific behaviour of a piece of software is constant between versions.
- checking if the behaviour of one piece of software is identical to the supposedly equivalent behaviour in another piece of software.

It is __not__ intended for performance benchmarking or for comparing _real world_ noisy data.

The process has two parts:-

1. A set of unit tests. These can be written in any language but must produce a pre-defined output. Tests and Outputs are organised in a structured testing folder/directory. Users are free to develop individual tests over a long period of time. Reports can then be run on an ad-hoc basis, using all or just a selection of test results.
2. Software to parse the tests and their outputs, and to make comparisons between versions and between software packages. The current implementation is in R (see below) but could be ported to any language.



[Back to top](#Top)


# <a name="test_dir"></a>Tests directory ##

The [testing folder](../) will contain the following sub-folders (an example layout is sketched after the list):

- [inputs](../inputs/)      a collection of minimal input data, used by tests to produce outputs
- [tests](../tests/)        the individual test scripts, arranged by __software__
- [outputs](../outputs/)    the individual test results, arranged by __software__ then _version_
- [reports](../reports/)    an optional directory containing reports (like this one)    
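For orientation, a populated testing folder might look something like this. The test script and output file names are invented for illustration; only the folder roles come from the list above.

```
TestFolder/
├── inputs/                      # small, fixed input data shared by tests
├── tests/
│   ├── bowtie/
│   │   ├── bowtie.versions      # version IDs (and optionally executable paths)
│   │   └── basic_align.sh       # an individual test script
│   └── bowtie2/
│       ├── bowtie2.versions
│       └── basic_align.sh
├── outputs/
│   └── bowtie/
│       ├── 1.0.1/
│       │   └── basic_align.out
│       └── ...
└── reports/
```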


The software sub-folders within tests/ should each contain a file called SOFTWARE.versions listing the version IDs. This is also a good place to list the full path to the executable for each version; if paths are included, the file should be tab-delimited. Example: [bowtie.versions](../tests/bowtie/bowtie.versions)
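As an illustration, a tab-delimited versions file that also records executable paths might contain rows like the following. The version IDs, paths and column order here are invented; see the linked bowtie.versions for the real example.

```
1.0.1	/opt/bowtie/1.0.1/bowtie
1.1.2	/opt/bowtie/1.1.2/bowtie
```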

The individual test scripts should specify the name of the output file they generate. Currently, the KANUTE package can detect these in bash scripts if they are assigned to the `TEST_OUT_NAME` variable.
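Below is a sketch of what such a bash test script could look like. Everything in it is illustrative except the one convention described above: the output file name is assigned to `TEST_OUT_NAME` so that KANUTE can detect it. How the executable path reaches the script is an assumption made here.

```bash
#!/bin/bash
# Hypothetical KANUTE test script for bowtie (all paths and names are illustrative).
# KANUTE only needs the output file name assigned to TEST_OUT_NAME.

BOWTIE=/path/to/bowtie            # version under test; how this is supplied is an assumption
TEST_OUT_NAME=basic_align.out     # KANUTE parses this variable to find the result file

# Run the version under test on a small, fixed input and capture its output.
"$BOWTIE" ../../inputs/small_index ../../inputs/small_reads.fq > "$TEST_OUT_NAME"
```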



[Back to top](#Top)


# <a name="Example"></a>Comparing tests example ##

Running the tests is kept separate from comparing the results between different versions or software.

The test comparisons have initially been implemented in R and make use of the md5sum program.
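Conceptually, each test's result file is reduced to a checksum, so two runs agree exactly when their checksums match. A minimal illustration from the shell (the file paths are invented for this example; the R functions below do the equivalent work across the whole outputs/ tree):

```bash
# Checksums of the same test's output under two bowtie versions (illustrative paths).
# Identical md5 checksums mean the output files are byte-for-byte identical.
md5sum outputs/bowtie/1.0.0/basic_align.out \
       outputs/bowtie/1.0.1/basic_align.out
```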

## Starting the R session

The user begins by giving the directory where the tests are stored and defining a subset of software names and versions that they are interested in. The following workflow is in R.


```r

########### Reading testing results produced earlier (no execution, only comparison).
#
parent_dir <-  paste(Sys.getenv("SCRATCH"),"kanute", sep="/")
testingFolder <- paste(parent_dir, "TestFolder", sep="/")
testDirTop <- paste(testingFolder, "tests",sep="/")
outputsDirTop <- paste(testingFolder, "outputs",sep="/")
source(paste(parent_dir, "scripts/kanute_funcs.R", sep="/"))
#
#
softwares <- c("bowtie", "bowtie2")
```

## Scan for tests in the testing folder

There is a function scanTestDir() to go through the directory and pick out software names, versions and tests. This produces a list object.

```r
#
testing.list <- scanTestDir(softwares=softwares, testDirTop=testDirTop)
testing.list
#
```

## Compile md5sums for each test

Using the result of scanTestDir() the user can then get md5sum checksum values for every test that has a result file. Again, this produces another list.

```r
#
test.library <- compileTestResults(testing.list, outputsDirTop)
#test.library   # don't show this, it's a big hierarchical list of md5sums
#
#
```

## Compare results across versions

The above list has all the information needed to check whether any two tests produced the same output or not. The default behaviour of compareTests() is to check for concordance between versions within one piece of software.

```r
test.results <- compareTests(test.library)
test.results
#
```

## Compare results for some tests across software packages

An important feature of KANUTE is to test for differences in results between different software packages. This depends on the user having written tests that produce comparable results. The user must then also provide a table of mappings between specific tests performed on two different pieces of software. Here is an example file:

```r
## created a tab-delimited file called test.mappings at the top of the tests/ folder
## mapping.name software1 software1.testA software2   software2.testA TRUE
## the final column is TRUE if expect results to be the same, FALSE if not or NA if unknown.
## May need to be 1/0 instead of TRUE/FALSE
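## A row might look like this (tab-separated; the mapping and test names below are purely illustrative):
## basic_align	bowtie	basic_align	bowtie2	basic_align	TRUE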
#
test.mappings <- read.delim(paste(testDirTop, "test.mappings", sep="/"), header=F, stringsAsFactors=F)
names(test.mappings) <- c("mapping.name", "software1", "software1.test", "software2", "software2.test", "expectIdentical")
test.mappings
```

Then, giving this data.frame to compareTests() will make a comparison BETWEEN software packages.

```r
test.results <- compareTests(test.library, mapped.tests=test.mappings)
test.results
### specify the reference versions to be used in tests BETWEEN software
test.results <- compareTests(test.library, mapped.tests=test.mappings, ref.versions=list(bowtie="1.0.1", bowtie2="2.2.3"))
test.results
```

By default, if the mapped.tests parameter is given, tests within a single piece of software are suppressed. Here's how to turn them back on:

```r
# Here's how to turn it back on
test.results <- compareTests(test.library, mapped.tests=test.mappings, show.all.tests=TRUE)
# What have we got at the end of all that?
test.results
```

# TO DO

[Back to top](#Top)


# About this file

Produced from an R-markdown (Rmd) file: `../reports/example.Rmd`.

[Back to top](#Top)

Information about the R session:-

```r
sessionInfo()
```

