segmentationRefinement.train: Segmentation refinement using corrective learning (training)

View source: R/segmentationRefinement.R

Description

A random forest implementation of the corrective learning wrapper introduced in Wang et al., NeuroImage 2011 (http://www.ncbi.nlm.nih.gov/pubmed/21237273). The training process builds two sets of models from the training data for each label in the initial segmentation.

Usage

segmentationRefinement.train(
  featureImages,
  truthLabelImages,
  segmentationImages,
  featureImageNames = c(),
  labelSet = c(),
  maximumNumberOfSamplesOrProportionPerClass = 1,
  dilationRadius = 2,
  neighborhoodRadius = 0,
  normalizeSamplesPerLabel = TRUE,
  useEntireLabeledRegion = TRUE
)

Arguments

featureImages

a list of lists of feature images. Each list of feature images corresponds to a single subject. Possibilities are outlined in the above-cited paper.

truthLabelImages

a list of "ground-truth" segmentation images, one for each set of feature images.

segmentationImages

a list of estimated segmentation images, one for each set of feature images.

featureImageNames

a vector of character strings naming the set of features. This parameter is optional but does help in investigating the relative importance of specific features.

labelSet

a vector specifying the labels of interest. If not specified, the full set is determined from the truthLabelImages.

maximumNumberOfSamplesOrProportionPerClass

specifies the maximum number of samples used to build the model for each element of the labelSet. If the value is <= 1, it is interpreted as a proportion of the total number of voxels for that label (see the sketch following this argument list).

dilationRadius

specifies the dilation radius (in voxels) for determining the ROI for each label using binary morphology. Alternatively, the user can specify a float distance value to employ an isotropic dilation based on physical distance; for this form, the distance value must be followed by the character string 'mm' (for millimeters), e.g., dilationRadius = '2.75mm' (see the sketch following this argument list).

neighborhoodRadius

specifies which voxel neighbors should be included in building the model. The user can specify a scalar or vector.

normalizeSamplesPerLabel

if TRUE, the samples from each ROI are normalized by the mean of the voxels in that ROI. Can also specify as a vector to normalize per feature image.

useEntireLabeledRegion

if TRUE, samples are taken from the full dilated ROI for each label. If FALSE, samples are taken only from the combined inner and outer boundary region determined by the neighborhoodRadius parameter. Can also specify as a vector to determine per label.
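
The following sketch illustrates the different forms the sampling and region arguments can take. All values are arbitrary, and the per-dimension reading of a neighborhoodRadius vector is an assumption; consult the source in R/segmentationRefinement.R for the authoritative behavior.

 # maximumNumberOfSamplesOrProportionPerClass: an absolute cap of 500
 # samples per label, versus a proportion (25%) of the voxels
 # available for each label.
 maximumNumberOfSamplesOrProportionPerClass <- 500
 maximumNumberOfSamplesOrProportionPerClass <- 0.25

 # dilationRadius: dilation by 2 voxels using binary morphology,
 # versus isotropic dilation by a physical distance (the 'mm' suffix
 # is required for the distance-based form).
 dilationRadius <- 2
 dilationRadius <- "2.75mm"

 # neighborhoodRadius: a scalar radius, or (assumed) one radius per
 # image dimension for a 3-D image.
 neighborhoodRadius <- 1
 neighborhoodRadius <- c( 1, 1, 0 )

 # normalizeSamplesPerLabel: one flag for all feature images, or one
 # flag per feature image (here, three features).
 normalizeSamplesPerLabel <- TRUE
 normalizeSamplesPerLabel <- c( TRUE, TRUE, FALSE )

 # useEntireLabeledRegion: one setting for all labels, or one setting
 # per label (here, a three-label set).
 useEntireLabeledRegion <- TRUE
 useEntireLabeledRegion <- c( TRUE, FALSE, TRUE )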

Value

A list with the models per label (LabelModels), the label set (LabelSet), and the feature image names (FeatureImageNames).
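
Given the return value from the training call in the Examples section below, the components can be accessed by name. This is a minimal sketch that assumes the Examples code has been run; the printed values are illustrative.

 names( segLearning )              # "LabelModels" "LabelSet" "FeatureImageNames"
 length( segLearning$LabelModels ) # one model set per label
 segLearning$LabelSet              # 1 2 3
 segLearning$FeatureImageNames     # "T1" "Gradient" "Laplacian"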

Author(s)

Tustison NJ

Examples

## Not run: 

 library( ANTsR )
 library( ggplot2 )

 imageIDs <- c( "r16", "r27", "r30", "r62", "r64", "r85" )

 # Perform a simple 3-tissue segmentation.  For convenience we use
 # the atropos segmentation to define the "ground-truth"
 # segmentations and k-means to define the segmentation we want to
 # "correct".  We collect feature images for each image.  The
 # gradient and Laplacian images chosen as features below are
 # selected simply for convenience.

 segmentationLabels <- c( 1, 2, 3 )

 featureImageNames <- c( 'T1', 'Gradient', 'Laplacian' )

 images <- list()
 kmeansSegs <- list()
 atroposSegs <- list()
 featureImages <- list()

 for( i in 1:length( imageIDs ) )
   {
   cat( "Processing image", imageIDs[i], "\n" )
   images[[i]] <- antsImageRead( getANTsRData( imageIDs[i] ) )
   mask <- getMask( images[[i]] )
   kmeansSegs[[i]] <- kmeansSegmentation( images[[i]],
     length( segmentationLabels ), mask, mrf = 0.0 )$segmentation
   atroposSegs[[i]] <- atropos( images[[i]], mask, i = "KMeans[3]",
     m = "[0.25,1x1]", c = "[5,0]" )$segmentation

   featureImageSetPerImage <- list()
   featureImageSetPerImage[[1]] <- images[[i]]
   featureImageSetPerImage[[2]] <- iMath( images[[i]], "Grad", 1.0 )
   featureImageSetPerImage[[3]] <- iMath( images[[i]], "Laplacian", 1.0 )
   featureImages[[i]] <- featureImageSetPerImage
   }

 # Perform training.  We train on images "r27", "r30", "r62", "r64",
 # and "r85" and test/predict on image "r16".

 cat( "\nTraining\n\n" )

 segLearning <- segmentationRefinement.train(
   featureImages = featureImages[2:6],
   truthLabelImages = atroposSegs[2:6],
   segmentationImages = kmeansSegs[2:6],
   featureImageNames = featureImageNames, labelSet = segmentationLabels,
   maximumNumberOfSamplesOrProportionPerClass = 100, dilationRadius = 1,
   normalizeSamplesPerLabel = TRUE, useEntireLabeledRegion = FALSE )

## End(Not run)
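
The trained models are intended for the companion prediction function, segmentationRefinement.predict, defined in the same source file. The following is a hypothetical sketch of the prediction step applied to the held-out image "r16"; the argument names are assumptions, so check args( segmentationRefinement.predict ) against your installed ANTsR version before use.

 ## Not run: 
 # Apply the trained per-label models to refine the k-means
 # segmentation of image "r16" (argument names assumed, not verified).
 refined <- segmentationRefinement.predict(
   segmentationImage = kmeansSegs[[1]],
   labelSet = segLearning$LabelSet,
   labelModels = segLearning$LabelModels,
   featureImages = featureImages[[1]],
   featureImageNames = segLearning$FeatureImageNames )
 ## End(Not run)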
