library(knitr)
library(ANTsR)
library( patchMatchR )

"patch that hole." (folk wisdom)

Introduction

Patch-based correspondence is ideal for finding partial matches between image pairs, for example, between pre- and post-operative MRI. Depending on the selected features, the matching may be robust to occlusion, noise, scale and rotation. The patch-wise approach also makes the matching more robust to initialization than more traditional, gradient-based methods, which assume that image pairs overlap (in physical space) at initialization and contain no more than a 30 degree rotational difference [@johnson2015itk]. Patch matching therefore provides a set of tools that complements those offered by the majority of medical image registration packages.

This package, patchMatchR, provides an experimental framework for efficiently computing patch-based features and patch-based matches between image pairs. patchMatchR uses transformation and physical space definitions that are consistent with the Insight ToolKit (ITK) and Advanced Normalization Tools (ANTs). Images and transformation files can be read and written seamlessly between this package, ITKR and ANTsR.

The basic concepts include:

- extracting features from image patches sampled at mask points;
- matching patches or features between a fixed and a moving image;
- estimating a transformation from the matched points, e.g., with RANSAC.

Many of the ideas in this work are inspired by traditional algorithms such as PatchMatch [@barnes2009patchmatch] and SIFT [@lowe2004distinctive], and by newer approaches such as Google's DELF [@noh2017large]. The latter is conceptually similar to SIFT but uses deep features. As this is a new package, little guidance is currently available for parameter setting. However, the number of parameters is relatively small and, as such, parameter exploration is encouraged.

Algorithms in patchMatchR

The functions/methods available within patchMatchR include:

- patchMatch: match image patches between a pair of images;
- matchedPatches: extract the matched patch pairs as images;
- deepFeatures: compute deep (VGG-based) features for patches at mask points;
- deepPatchMatch: match images using deep features;
- matchedLandmarks: convert deep matches to point correspondences;
- RANSAC and RANSACAlt: fit a transformation to matched points while rejecting outliers.

The feature extraction and matching methods above will be slower in three dimensions. As such, experimentation in 2D is encouraged, as sketched below.
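One quick way to prototype in two dimensions is to pull a single slice out of a 3D volume, tune parameters there, and only then run the full 3D problem. A minimal sketch, assuming a hypothetical 3D antsImage img3d loaded elsewhere (e.g., via antsImageRead) and using ANTsR's extractSlice:

# extract the middle slice along the third axis for fast 2D prototyping
# img3d is a hypothetical 3D antsImage, loaded elsewhere
midSlice = round( dim( img3d )[3] / 2 )
img2d = extractSlice( img3d, midSlice, direction = 3 )
# tune patchMatch / deepPatchMatch parameters on img2d, then rerun in 3D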

Examples

Partial matching

Prepare the data: a brain slice and a version of that slice that has been rotated and corrupted.

library( ANTsR )
img = ri( 1 )
# a 15 degree rotation about the point (128,128)
cosTheta = cos( pi * 15 / 180 )
sinTheta = sin( pi * 15 / 180 )
txRotate <- createAntsrTransform( precision = "float", type = "AffineTransform", dim = 2 )
setAntsrTransformParameters( txRotate, c( cosTheta, -sinTheta, sinTheta, cosTheta, 0, 0 ) )
setAntsrTransformFixedParameters( txRotate, c( 128, 128 ) )  # center of rotation
imgr = applyAntsrTransform( txRotate, img, img )
plot( img, imgr, colorbar = FALSE, alpha = 0.5 )

Corrupt the image.

makeNoise <- function( img, nzsd ) {
  # gaussian noise image with mean 128 and standard deviation nzsd
  temp = makeImage( dim( img ), rnorm( prod( dim( img ) ), 128, nzsd ) )
  # copy origin, spacing and direction from the input; returns the noise image
  antsCopyImageInfo( img, temp )
}
nz = makeNoise( img, 50 ) %>% smoothImage( 12 )  # smooth random noise field
nzt = thresholdImage( nz, 127, Inf )             # binarize near the noise mean (128)
imgc = imgr * nzt                                # zero out part of the rotated image
layout( matrix(1:2,nrow=1))
plot( img, colorbar = FALSE   )
plot( imgc, colorbar = FALSE   )

Now match the images.

imgmask = randomMask( getMask( img ), 100 ) # may need more points than this
mtch = patchMatch( imgc, img, imgmask, fixedPatchRadius = 7  )
myMatches = matchedPatches( imgc, img, mtch, fixedPatchRadius = 7 )
k = which.min( mtch$MI ) # best match: mutual information, lower is better
layout( matrix(1:2,nrow=1))
plot( myMatches$fixPatchList[[k]], colorbar=F, doCropping=F )
plot( myMatches$movPatchList[[k]], colorbar=F, doCropping=F )
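The data frame returned by patchMatch carries a score per correspondence; as the use of which.min above suggests, lower values of the MI column indicate better matches. A minimal sketch, reusing the objects above, that inspects the score distribution and displays the three best matched pairs:

# distribution of match quality (lower is better)
hist( mtch$MI, main = "patch match scores" )
# display the three best matched patch pairs, one pair per row
best3 = order( mtch$MI )[ 1:3 ]
layout( matrix( 1:6, nrow = 3, byrow = TRUE ) )
for ( k in best3 ) {
  plot( myMatches$fixPatchList[[ k ]], colorbar = FALSE, doCropping = FALSE )
  plot( myMatches$movPatchList[[ k ]], colorbar = FALSE, doCropping = FALSE )
}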

Compute the transformation using RANSAC. This implementation needs further work on identifying optimal parameters, potentially via an automated parameter search internal to the method; a small manual parameter sweep is sketched after this example.

# the default RANSAC signature, for reference:
# RANSAC(fixedPoints, movingPoints, transformType = "Affine",
#   minNtoFit = 16, maxIterations = 20, errorThreshold = 1,
#   goodProportion = 0.5, lambda = 1e-04, verbose = FALSE)
# here we use the RANSACAlt variant; columns 3:4 of mtch are passed as the
# fixed points and columns 7:8 as the moving points
rrr <- RANSACAlt(
  data.matrix( mtch[ , c( 3, 4 ) ] ),
  data.matrix( mtch[ , c( 7, 8 ) ] ), transformType = 'Rigid',
  nToTrim = 1, minProportionPoints = 0.11, nCVGroups = 4 )


imgtx = applyAntsrTransformToImage(  rrr$finalModel$transform, imgc, img )
antsImageMutualInformation( img, imgc )
antsImageMutualInformation( img, imgtx )
layout( matrix(1:2,nrow=1))
plot( img, imgc )
plot( img, imgtx )
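In the absence of established parameter guidance, a small manual sweep is one pragmatic option: refit RANSACAlt over a few candidate settings and keep the model whose transform yields the best (lowest, i.e., most negative) mutual information. A minimal sketch; the swept minProportionPoints values are illustrative, not recommendations:

# sweep a RANSACAlt parameter, scoring by post-registration mutual information
bestMI = Inf
for ( mpp in c( 0.05, 0.11, 0.2 ) ) {
  fit = RANSACAlt(
    data.matrix( mtch[ , c( 3, 4 ) ] ),
    data.matrix( mtch[ , c( 7, 8 ) ] ), transformType = 'Rigid',
    nToTrim = 1, minProportionPoints = mpp, nCVGroups = 4 )
  candidate = applyAntsrTransformToImage( fit$finalModel$transform, imgc, img )
  mi = antsImageMutualInformation( img, candidate )
  if ( mi < bestMI ) { bestMI = mi; rrr = fit }
}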

Deep features

Pre-trained deep convolutional networks provide rich feature spaces for image matching. Here, we show how to extract VGG19 features from medical images.

library( keras )
library( abind )
library( ANTsRNet )
library( pheatmap )
img <- ri( 1 ) %>% iMath( "Normalize" )
mask = randomMask( getMask( img ), 20 )
features = deepFeatures( img, mask, patchSize = 32 )
pheatmap( abs( cor( t( features$features ) ) ) )
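The heatmap visualizes pairwise similarity of the sampled points in deep feature space. The same matrix can be queried directly, for example to locate the most mutually similar pair of distinct points; a minimal sketch based on the features matrix computed above:

# correlation of deep features across sampled points (points are rows)
cc = abs( cor( t( features$features ) ) )
diag( cc ) = 0  # ignore self-similarity
# row/column indices of the most similar pair of distinct points
which( cc == max( cc ), arr.ind = TRUE )[ 1, ]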

Deep feature matching

nP1 = 500
nP2 = 2000
psz = 32
img <- iMath( img, "Normalize" )
img2 <- iMath( imgc, "Normalize" )
mask = randomMask( getMask( img ), nP1 )
mask2 = randomMask( getMask( img2 ), nP2 )
blk = "block2_conv2"  # VGG layer from which features are drawn
matchO = deepPatchMatch(  img2, img, mask, mask2,
  movingPatchSize = psz, fixedPatchSize = psz,
  block_name = blk ) # , knnSpatial = 32
mlm = matchedLandmarks( matchO, img, img2, rep(psz,img@dimension) )
wsel = which( !is.na( matchO$matches ) )  # indices that found a match
layout( matrix( 1:2, nrow = 1, byrow = TRUE ) )
ss = sample( wsel, 1 )  # a random matched fixed-image patch
plot( as.antsImage( matchO$ffeatures$patches[ ss, , ] ), colorbar = FALSE, doCropping = FALSE )
sss = matchO$matches[ ss ]  # index of the corresponding moving-image patch
plot( as.antsImage( matchO$mfeatures$patches[ sss, , ] ), colorbar = FALSE, doCropping = FALSE )

Now follow up with RANSAC, as before, to estimate a rigid transform from the matched landmarks.

result <- RANSACAlt( mlm$fixedPoints, mlm$movingPoints, transformType = 'Rigid',
  nToTrim = 1, minProportionPoints = 0.11, nCVGroups=4 )
# visualize
reg = applyAntsrTransformToImage( result$finalModel$transform, imgc, img )
layout( matrix(1:2,nrow=1, byrow=T) )
plot( img, colorbar = F  )
plot( reg, colorbar = F   )
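As in the earlier example, the result can also be checked quantitatively by comparing mutual information before and after registration; a lower (more negative) value indicates better alignment:

antsImageMutualInformation( img, imgc )  # before registration
antsImageMutualInformation( img, reg )   # after registration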

Show some matched points.

idim = img@dimension
k = sample(1:nrow( mlm$fixedPoints), 10, replace=F )
lmImage1 = makePointsImage(
  matrix(mlm$fixedPoints[k,],ncol=idim), img, radius = 7 )
lmImage2 = makePointsImage(
  matrix(mlm$movingPoints[k,],ncol=idim), img2, radius = 7 )
layout( matrix(1:2,nrow=1) )
plot( img * 222, lmImage1, doCropping = FALSE )   # rescale the base image so the overlay is visible
plot( img2 * 222, lmImage2, doCropping = FALSE )

Summary

We expect some bugs and usability issues to arise; these can only be addressed with feedback and suggestions from those who are interested in these methods. Enjoy, and please report issues or suggestions at patchMatchR issues.

References

Barnes, C., Shechtman, E., Finkelstein, A., & Goldman, D. B. (2009). PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Transactions on Graphics, 28(3).

Johnson, H. J., McCormick, M. M., & Ibáñez, L. (2015). The ITK Software Guide. Kitware, Inc.

Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2), 91-110.

Noh, H., Araujo, A., Sim, J., Weyand, T., & Han, B. (2017). Large-scale image retrieval with attentive deep local features. Proceedings of the IEEE International Conference on Computer Vision (ICCV).
