testNN: Train, validate and test artificial neural networks


View source: R/nnFunctions.R

Description

Fits multiple artificial neural networks to a dataset that is randomly split into three categories: training, validation and test. All networks are trained on the training data and may vary in the number of hidden layers. Classification thresholds are then chosen using the validation data, and the final neural network is selected, based on a fit statistic (precision, recall or the F1-score), using the test data.
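
The split into the three categories is random, in proportions given by prop. A minimal sketch of the idea (not the package's internal code), assuming dat behaves like a data frame of labelled particle statistics:

prop <- c(8, 1, 1)   # training : validation : test
groups <- sample(c("train", "validate", "test"), size = nrow(dat),
                 replace = TRUE, prob = prop / sum(prop))
trainDat    <- dat[groups == "train", ]
validateDat <- dat[groups == "validate", ]
testDat     <- dat[groups == "test", ]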

Usage

testNN(
  dat,
  stat = "F",
  maxH = 5,
  repetitions = 3,
  prop = c(8, 1, 1),
  predictors = NULL,
  pca = TRUE,
  thr = 0.95,
  ...
)

Arguments

dat

A previously constructed dataset obtained from manuallySelect.

stat

Fit statistic. May be "precision", "recall", or "F" for the harmonic mean of precision and recall.

maxH

Maximum number of hidden layers to test. Note that more layers require more time to fit.

repetitions

The number of repetitions for the neural network's training.

prop

The proportion or ratio for each category, given as c(training, validation, test).

predictors

Optional. A set of custom predictors for the neural network. Default uses all columns in dat.

pca

Logical. TRUE by default. Should the set of predictors be compressed to the most informative ones? That is, should a principal component analysis be conducted to select the axes that together explain at least a fraction thr (see below) of the variance in the full set of predictors? See the sketch after this argument list.

thr

Threshold for the proportion of variance to retain when pca = TRUE (see above).

...

Additional arguments, passed to neuralnet.
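
To illustrate what pca and thr control, here is a minimal sketch of selecting the principal component axes that together explain at least a fraction thr of the variance. The helper name pickAxes is hypothetical; trackdem's internal implementation may differ.

pickAxes <- function(x, thr = 0.95) {
  p <- prcomp(x, scale. = TRUE)               # principal component analysis
  cumvar <- cumsum(p$sdev^2) / sum(p$sdev^2)  # cumulative proportion of variance explained
  nAxes <- which(cumvar >= thr)[1]            # smallest number of axes reaching thr
  p$x[, seq_len(nAxes), drop = FALSE]         # compressed predictor matrix
}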

Details

The neural networks may be selected based on precision, recall or the F1-score (default). In binary classification, precision is the number of correct positive results divided by the number of all positive predictions. Recall is the number of correct positive results divided by the number of positive results that could have been returned if the algorithm were perfect. The F1 score (also called F-score or F-measure) is a statistical measure of accuracy that considers both precision and recall; it can be seen as a weighted average (the harmonic mean) of the two. Precision, recall and the F1 score are at best 1 and at worst 0.
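
As an illustration, the three fit statistics can be computed from a binary confusion matrix; here predicted and observed are assumed to be logical vectors of equal length:

TP <- sum(predicted & observed)     # true positives
FP <- sum(predicted & !observed)    # false positives
FN <- sum(!predicted & observed)    # false negatives
precision <- TP / (TP + FP)
recall    <- TP / (TP + FN)
F1        <- 2 * precision * recall / (precision + recall)  # harmonic mean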

Value

Returns the trained artificial neural network.

Author(s)

Marjolein Bruijning, Caspar A. Hallmann & Marco D. Visser

Examples

## Not run: 
dir.create("images")
## Create an image sequence with simulated trajectories
traj <- simulTrajec(path="images",
                    nframes=30,nIndividuals=20,domain='square',
                    h=0.01,rho=0.9,movingNoise=TRUE,
                    parsMoving = list(density=20, duration=10, size=1,
                                      speed = 10, colRange = c(0,1)),
                    sizes=runif(20,0.004,0.006))
## Load images
dir <- "images"
allFullImages <- loadImages(dirPictures=dir,nImages=1:30)
## Create a still background and subtract it from all images
stillBack <- createBackground(allFullImages,method="mean")
allImages <- subtractBackground(stillBack)
## Identify moving particles
partIden <- identifyParticles(allImages,threshold=-0.1,
                              pixelRange=c(3,400))
## Manually label particles in the frames containing the most detections
nframes <- 3
frames <- order(tapply(partIden$patchID,partIden$frame,length),
                decreasing=TRUE)[1:nframes]
mId <- manuallySelect(particles=partIden,frame=frames)
## Train, validate and test neural networks on the labelled data
finalNN <- testNN(dat=mId,repetitions=10,maxH=4,prop=c(6,2,2))
summary(finalNN)

## End(Not run)
