tapas_predict: TAPAS Prediction

Description Usage Arguments Value Examples

View source: R/tapas_predict.R

Description

This function takes a probability map for a single subject and predicts the subject-specific threshold to apply based on the TAPAS model generated from tapas_train(). The function will return a list of objects including the TAPAS predicted subject-specific threshold, the lesion mask produced from applying this threshold, and the lesion mask produced from using the group threshold.

Usage

tapas_predict(pmap, model, clamp = TRUE, k = 0, verbose = TRUE)

Arguments

pmap

A character file path to a probability map image or an object of class nifti.

model

The TAPAS model of class gam fit by tapas_train(). This model is used to make subject-specific threshold predictions.

clamp

A logical value set to TRUE by default. When TRUE, the clamped subject-specific threshold prediction is used rather than the raw prediction from the TAPAS model. Clamping only applies when the naive volume estimate falls outside the volumes at the 10th and 90th percentiles calculated from the training data, and it avoids extrapolation when that estimate lies in the tails of the TAPAS model. If FALSE, the unclamped TAPAS model prediction is used for segmentation.
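In other words, the prediction is bounded at the tails of the training volumes. The sketch below only illustrates the idea; volume_estimate, lower_volume, upper_volume, and the commented predict() call are hypothetical names and values, not objects created by the package.

# naive volume estimate for the new subject (hypothetical value)
volume_estimate = 12.5
# volumes at the 10th and 90th percentiles of the training data (hypothetical values)
lower_volume = 5.0
upper_volume = 20.0
# bound the volume so the fitted gam is never asked to extrapolate
clamped_volume = min(max(volume_estimate, lower_volume), upper_volume)
# the subject-specific threshold would then be predicted at the clamped volume, e.g.
# predict(tapas_model, newdata = data.frame(volume = clamped_volume))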

k

The minimum number of voxels for a cluster/component. Segmentation clusters with fewer than k voxels are removed from the mask, the volume estimation, and the Sørensen–Dice coefficient (DSC) calculation.
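For illustration, the cluster-size filter amounts to dropping connected components with fewer than k voxels. The sketch below assumes a connected-component labeling of the binary mask is already available as an integer array label_array (a hypothetical object; the package's internal implementation may differ).

k = 10
# tabulate the size of each labeled component (0 = background)
component_sizes = table(label_array[label_array > 0])
# keep only component labels with at least k voxels
keep_labels = as.integer(names(component_sizes)[component_sizes >= k])
# rebuild the binary mask from the surviving components
filtered_mask = array(as.numeric(label_array %in% keep_labels), dim = dim(label_array))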

verbose

A logical argument to print messages. Set to TRUE by default.

Value

A list containing the TAPAS predicted subject-specific threshold (subject_threshold), the lesion segmentation mask obtained using the TAPAS predicted subject-specific threshold (tapas_binary_mask), and the lesion segmentation mask obtained using the group threshold (group_binary_mask).

Examples

## Not run: 
# Data are provided in the rtapas package as arrays. Below we convert them to nifti objects.
# Before we can run the tapas_train() function we have to generate the training data.
library(oro.nifti)
# Create a list of gold standard manual segmentation
train_gold_standard_masks = list(gs1 = gs1,
                                 gs2 = gs2,
                                 gs3 = gs3,
                                 gs4 = gs4,
                                 gs5 = gs5,
                                 gs6 = gs6,
                                 gs7 = gs7,
                                 gs8 = gs8,
                                 gs9 = gs9,
                                 gs10 = gs10)
# Convert the gold standard masks to nifti objects
train_gold_standard_masks = lapply(train_gold_standard_masks, oro.nifti::nifti)

# Make a list of the training probability maps
train_probability_maps = list(pmap1 = pmap1,
                              pmap2 = pmap2,
                              pmap3 = pmap3,
                              pmap4 = pmap4,
                              pmap5 = pmap5,
                              pmap6 = pmap6,
                              pmap7 = pmap7,
                              pmap8 = pmap8,
                              pmap9 = pmap9,
                              pmap10 = pmap10)

# Convert the probability maps to nifti objects
train_probability_maps = lapply(train_probability_maps, oro.nifti::nifti)
# Make a list of the brain masks
train_brain_masks = list(brain_mask1 = brain_mask,
                         brain_mask2 = brain_mask,
                         brain_mask3 = brain_mask,
                         brain_mask4 = brain_mask,
                         brain_mask5 = brain_mask,
                         brain_mask6 = brain_mask,
                         brain_mask7 = brain_mask,
                         brain_mask8 = brain_mask,
                         brain_mask9 = brain_mask,
                         brain_mask10 = brain_mask)

# Convert the brain masks to nifti objects
train_brain_masks = lapply(train_brain_masks, oro.nifti::nifti)

# Specify training IDs
train_ids = paste0('subject_', seq_along(train_gold_standard_masks))

# The function below runs on 2 cores. Be sure your machine has 2 cores available or switch to 1.
# Run tapas_data_par function
# You can also use the tapas_data function to generate each subject's data individually
# (see the commented sketch after the tapas_data_par call below)
data = tapas_data_par(cores = 2,
                      thresholds = seq(from = 0, to = 1, by = 0.01),
                      pmap = train_probability_maps,
                      gold_standard = train_gold_standard_masks,
                      mask = train_brain_masks,
                      k = 0,
                      subject_id = train_ids,
                      ret = TRUE,
                      outfile = NULL,
                      verbose = TRUE)
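
# For a single subject you can call tapas_data() instead; the commented call below
# is only a sketch assuming it takes the same per-subject arguments as
# tapas_data_par() (see ?tapas_data for the exact signature)
# single_subject_data = tapas_data(thresholds = seq(from = 0, to = 1, by = 0.01),
#                                  pmap = train_probability_maps[[1]],
#                                  gold_standard = train_gold_standard_masks[[1]],
#                                  mask = train_brain_masks[[1]],
#                                  k = 0,
#                                  subject_id = train_ids[1],
#                                  verbose = TRUE)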

# We can now fit the TAPAS model with tapas_train() using the data from tapas_data_par
tapas_model = tapas_train(data = data,
                          dsc_cutoff = 0.03,
                          verbose = TRUE)

# Load a subject reserved for testing and convert to a nifti
# Probability map
pmap11 = oro.nifti::nifti(pmap11)
# Brain mask
brain_mask = oro.nifti::nifti(brain_mask)

# Use TAPAS to predict on a new subject
test_subject_prediction = tapas_predict(pmap = pmap11,
                                        model = tapas_model,
                                        clamp = TRUE,
                                        k = 0,
                                        verbose = TRUE)
# Show subject-specific TAPAS threshold
test_subject_prediction$subject_threshold
# Look at TAPAS binary segmentation from applying the TAPAS threshold
oro.nifti::image(test_subject_prediction$tapas_binary_mask)
# Look at the binary segmentation from applying the group threshold
oro.nifti::image(test_subject_prediction$group_binary_mask)
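# Optionally, write the TAPAS segmentation to disk (the file name is illustrative
# and this assumes the returned mask is a nifti object)
oro.nifti::writeNIfTI(test_subject_prediction$tapas_binary_mask, filename = "tapas_binary_mask")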

## End(Not run)
