tidNeuralImageAssessment: Perform MOS-based assessment of an image.

View source: R/qualityAssessment.R

tidNeuralImageAssessment (R Documentation)

Perform MOS-based assessment of an image.

Description

Use a ResNet architecture to estimate image quality in 2D or 3D using the subjective QC image databases described in the references listed under Details.

Usage

tidNeuralImageAssessment(
  image,
  mask,
  patchSize = 101L,
  strideLength,
  paddingSize = 0L,
  dimensionsToPredict = 1L,
  whichModel = "tidsQualityAssessment",
  imageScaling,
  doPatchScaling = FALSE,
  verbose = FALSE
)

Arguments

image

the input image. Either 2D or 3D.

mask

optional mask for designating calculation ROI.

patchSize

integer (prime) number giving the patch size; 101 is a good choice. Alternatively, set patchSize = "global" for a single global estimate of quality.

strideLength

optional stride length to speed up computation (typically less than the patch size). Integer or vector of length equal to the image dimension.

paddingSize

positive or negative integer (or vector of length equal to the image dimension) for padding (positive) or de-padding (negative) to remove edge effects.

dimensionsToPredict

if the image is 3D, this parameter specifies which dimension(s) should be used for prediction. If more than one dimension is specified, the results are averaged.

whichModel

model type, e.g., the string "tidsQualityAssessment", "koniqMS", "koniqMS2", or "koniqMS3". The first predicts the mean opinion score (MOS) and the MOS standard deviation; the koniq models predict MOS and sharpness. One may also pass a tensorflow model directly, in which case the input image is assumed to be scaled by the imageScaling parameter.

imageScaling

a two-vector where the first value is the multiplier m and the second value the subtractor s, so each image is scaled as img = iMath(img, "Normalize") * m - s (see the sketch at the end of this argument list).

doPatchScaling

boolean controlling whether each patch is scaled individually before prediction (default FALSE).

verbose

print progress.
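The scaling formula referenced by imageScaling can be illustrated outside the function. The following is a minimal sketch assuming only ANTsR's iMath "Normalize" operation; the multiplier m = 2 and subtractor s = 1 are hypothetical values chosen to map normalized intensities from [0, 1] into [-1, 1], not values prescribed by any particular model.

library( ANTsR )
# Hypothetical imageScaling of c( 2, 1 ): img = iMath( img, "Normalize" ) * m - s
img <- antsImageRead( getANTsRData( "r16" ) )
m <- 2
s <- 1
scaledImg <- iMath( img, "Normalize" ) * m - s
range( as.array( scaledImg ) )  # approximately -1 to 1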

Details

https://www.sciencedirect.com/science/article/pii/S0923596514001490

or

https://doi.org/10.1109/TIP.2020.2967829

The image assessment is either "global", i.e., a single number, or an image based on the specified patch size. In the 3D case, neighboring slices are used for each estimate. Note that parameters should be kept as consistent as possible across runs to enable comparison. The patch size should be roughly 1/12th to 1/4th of the image size to provide locality. A global estimate can be obtained by setting patchSize = "global".
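The 1/12th to 1/4th heuristic can be turned into a candidate patch-size range from the image dimensions. This is an illustrative sketch, not part of the function; the rounding and the final choice within the range are up to the user.

# Illustrative only: candidate patch sizes between 1/12th and 1/4th of the image size.
img <- antsImageRead( getANTsRData( "r16" ) )
imgSize <- min( dim( img ) )                         # 256 for the r16 example image
patchRange <- round( imgSize * c( 1 / 12, 1 / 4 ) )  # roughly 21 to 64
patchRange
# A single global estimate is available with patchSize = "global".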

Value

list of QC results predicting both the human raters' mean and standard deviation of the MOS ("mean opinion score"), or sharpness, depending on the selected network. Both aggregate and spatial scores are returned, the latter in the form of an image.

Author(s)

Avants BB

Examples

## Not run: 
image <- antsImageRead( getANTsRData( "r16" ) )
mask <- getMask( image )
tid <- tidNeuralImageAssessment( image, mask = mask, patchSize = 101L,
          strideLength = 7L, paddingSize = 0L )
plot( image, tid$MOS, alpha = 0.5 )
cat( "mean MOS = ", tid$MOS.mean, "\n" )
cat( "sd MOS = ", tid$MOS.standardDeviationMean, "\n" )

## End(Not run)
