deepFlash: Hippocampal/Entorhinal segmentation using "Deep Flash"

View source: R/deepFlash.R

deepFlash {ANTsRNet}    R Documentation

Hippocampal/Entorhinal segmentation using "Deep Flash"

Description

Perform hippocampal/entorhinal segmentation in T1 and T1/T2 images using labels from Mike Yassa's lab (https://faculty.sites.uci.edu/myassa/).

Usage

deepFlash(
  t1,
  t2 = NULL,
  whichParcellation = "yassa",
  doPreprocessing = TRUE,
  useRankIntensity = TRUE,
  verbose = FALSE
)

Arguments

t1

raw or preprocessed 3-D T1-weighted brain image.

t2

optional raw or preprocessed 3-D T2-weighted brain image.

whichParcellation

string specifying the parcellation. Currently only "yassa" is supported. See the label descriptions in Details.

doPreprocessing

boolean: perform preprocessing (N4 bias correction and affine registration to the Deep Flash template). See Details.

useRankIntensity

If TRUE, apply a rank intensity transform to the cropped ROI; otherwise, use histogram matching with the cropped template ROI. Applies only to the "yassa" parcellation.

verbose

print progress.

Details

Reference: https://www.nature.com/articles/s41598-024-59440-6

The labeling is as follows:

  • Label 0:  background

  • Label 5:  left aLEC

  • Label 6:  right aLEC

  • Label 7:  left pMEC

  • Label 8:  right pMEC

  • Label 9:  left perirhinal

  • Label 10: right perirhinal

  • Label 11: left parahippocampal

  • Label 12: right parahippocampal

  • Label 13: left DG/CA2/CA3/CA4

  • Label 14: right DG/CA2/CA3/CA4

  • Label 15: left CA1

  • Label 16: right CA1

  • Label 17: left subiculum

  • Label 18: right subiculum
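
A single structure can be isolated from the returned segmentation and its volume computed with standard ANTsR utilities. The following is a minimal sketch; the list element name segmentationImage is an assumption (see Value) and should be checked against names() of the returned object:

  library( ANTsR )
  library( ANTsRNet )

  results <- deepFlash( antsImageRead( "t1.nii.gz" ) )

  # binary mask of left CA1 (label 15); the element name "segmentationImage"
  # is assumed -- inspect names( results ) to confirm
  leftCA1 <- thresholdImage( results$segmentationImage, 15, 15, 1, 0 )

  # volume in mm^3 = voxel count * voxel volume
  leftCA1Volume <- sum( as.array( leftCA1 ) ) * prod( antsGetSpacing( leftCA1 ) )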

Preprocessing on the training data consisted of:

  • N4 bias correction,

  • affine registration to the Deep Flash template.

The same steps are applied to the input images when doPreprocessing = TRUE; a rough manual equivalent is sketched below.
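
If a preprocessed image is supplied with doPreprocessing = FALSE, the preprocessing can be approximated manually as sketched here. The getANTsXNetData() identifier for the Deep Flash template is an assumption and may not match the one used internally:

  library( ANTsR )
  library( ANTsRNet )

  t1 <- antsImageRead( "t1.nii.gz" )

  # N4 bias correction
  t1N4 <- n4BiasFieldCorrection( t1 )

  # affine registration to the Deep Flash template; the data identifier below
  # is assumed -- consult getANTsXNetData() for the available keys
  template <- antsImageRead( getANTsXNetData( "deepFlashTemplateT1SkullStripped" ) )
  reg <- antsRegistration( fixed = template, moving = t1N4,
                           typeofTransform = "Affine" )

  results <- deepFlash( reg$warpedmovout, doPreprocessing = FALSE )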

Value

A list containing the segmentation image and a probability image for each label.

Author(s)

Tustison NJ

Examples

## Not run: 
library( ANTsRNet )
library( keras )

image <- antsImageRead( "t1.nii.gz" )
results <- deepFlash( image )
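
# A sketch of additional usage (assumptions not in the original documentation:
# the T2 image is in the same space as the T1, and the returned list element
# is named "segmentationImage" -- check names( results ) to confirm).
t2 <- antsImageRead( "t2.nii.gz" )
resultsT1T2 <- deepFlash( image, t2 = t2, verbose = TRUE )
antsImageWrite( resultsT1T2$segmentationImage, "deepFlashSegmentation.nii.gz" )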

## End(Not run)
