View source: R/segmentationRefinement.R
segmentationRefinement.train | R Documentation

Description

A random forest implementation of the corrective learning wrapper introduced in Wang et al., NeuroImage 2011 (http://www.ncbi.nlm.nih.gov/pubmed/21237273). The training process involves building two sets of models from the training data for each label in the initial segmentation data.

Usage
segmentationRefinement.train(
featureImages,
truthLabelImages,
segmentationImages,
featureImageNames = c(),
labelSet = c(),
maximumNumberOfSamplesOrProportionPerClass = 1,
dilationRadius = 2,
neighborhoodRadius = 0,
normalizeSamplesPerLabel = TRUE,
useEntireLabeledRegion = TRUE
)
Arguments

featureImages
    a list of lists of feature images. Each list of feature images corresponds to a single subject. Possibilities are outlined in the above-cited paper.

truthLabelImages
    a list of "ground-truth" segmentation images, one for each set of feature images.

segmentationImages
    a list of estimated segmentation images, one for each set of feature images.

featureImageNames
    a vector of character strings naming the set of features. This parameter is optional but helps in investigating the relative importance of specific features.

labelSet
    a vector specifying the labels of interest. If not specified, the full set is determined from the truthLabelImages.

maximumNumberOfSamplesOrProportionPerClass
    specifies the maximum number of samples used to build the model for each element of the labelSet. If <= 1, the value is interpreted as a proportion of the total number of voxels.

dilationRadius
    specifies the dilation radius for determining the ROI for each label using binary morphology. Alternatively, the user can specify a floating-point distance value, e.g., dilationRadius = '2.75mm', to employ an isotropic dilation based on physical distance. For the latter, the distance value must be followed by the character string 'mm' (for millimeters).

neighborhoodRadius
    specifies which voxel neighbors should be included in building the model. The user can specify a scalar or vector.

normalizeSamplesPerLabel
    if TRUE, the samples from each ROI are normalized by the mean of the voxels in that ROI. Can also be specified as a vector to normalize per feature image.
useEntireLabeledRegion
    if TRUE, samples are taken from the full dilated ROI for each label. If FALSE, samples are taken only from the combined inner and outer boundary region determined by the dilationRadius.
Value

a list with the models per label (LabelModels), the label set (LabelSet), and the feature image names (FeatureImageNames).
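As a sketch of how the returned list might be inspected, assuming a trained object named segLearning (as produced in the example below) and that each per-label model exposes random-forest variable importance; the exact structure of LabelModels may differ across ANTsR versions:

```r
# Hypothetical inspection of the list returned by segmentationRefinement.train.
cat("Labels modeled:", segLearning$LabelSet, "\n")
cat("Features used: ", segLearning$FeatureImageNames, "\n")

# Each label's model is assumed here to be a randomForest object; its
# variable importance suggests which feature images mattered most.
for (labelModel in segLearning$LabelModels) {
  print(randomForest::importance(labelModel))
}
```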
Author(s)

Tustison NJ
Examples

## Not run:

library(ANTsR)
library(ggplot2)
imageIDs <- c("r16", "r27", "r30", "r62", "r64", "r85")

# Perform simple 3-tissue segmentation. For convenience we are
# going to use atropos segmentation to define the "ground-truth"
# segmentations and the k-means to define the segmentation we
# want to "correct". We collect feature images for each image.
# The gradient and Laplacian images chosen below as feature
# images are simply selected for convenience.

segmentationLabels <- c(1, 2, 3)
featureImageNames <- c("T1", "Gradient", "Laplacian")

images <- list()
kmeansSegs <- list()
atroposSegs <- list()
featureImages <- list()
for (i in seq_along(imageIDs)) {
  cat("Processing image", imageIDs[i], "\n")
  images[[i]] <- antsImageRead(getANTsRData(imageIDs[i]))
  mask <- getMask(images[[i]])
  kmeansSegs[[i]] <- kmeansSegmentation(images[[i]],
    length(segmentationLabels), mask,
    mrf = 0.0
  )$segmentation
  atroposSegs[[i]] <- atropos(images[[i]], mask,
    i = "KMeans[3]",
    m = "[0.25,1x1]", c = "[5,0]"
  )$segmentation

  featureImageSetPerImage <- list()
  featureImageSetPerImage[[1]] <- images[[i]]
  featureImageSetPerImage[[2]] <- iMath(images[[i]], "Grad", 1.0)
  featureImageSetPerImage[[3]] <- iMath(images[[i]], "Laplacian", 1.0)
  featureImages[[i]] <- featureImageSetPerImage
}

# Perform training. We train on images "r27", "r30", "r62", "r64",
# "r85" and test/predict on image "r16".

cat("\nTraining\n\n")

segLearning <- segmentationRefinement.train(
  featureImages = featureImages[2:6],
  truthLabelImages = atroposSegs[2:6],
  segmentationImages = kmeansSegs[2:6],
  featureImageNames = featureImageNames, labelSet = segmentationLabels,
  maximumNumberOfSamplesOrProportionPerClass = 100, dilationRadius = 1,
  normalizeSamplesPerLabel = TRUE, useEntireLabeledRegion = FALSE
)
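The example stops after training; a hedged sketch of the prediction step on the held-out image "r16" follows, assuming ANTsR's companion function segmentationRefinement.predict with argument names mirroring the training call (both the function and its signature should be checked against your installed ANTsR version):

```r
# Apply the trained models to the held-out image "r16" (index 1).
# segmentationRefinement.predict and its argument names are assumed
# from the companion ANTsR function and may differ in your version.
cat("\nPrediction\n\n")
refinedSeg <- segmentationRefinement.predict(
  segmentationImage = kmeansSegs[[1]],
  labelSet = segLearning$LabelSet,
  labelModels = segLearning$LabelModels,
  featureImages = featureImages[[1]],
  featureImageNames = segLearning$FeatureImageNames,
  dilationRadius = 1
)
```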
## End(Not run)