uvaSegTrain: Unsupervised variational autoencoder training

View source: R/uvaSeg.R


Unsupervised variational autoencoder training

Description

Trains a variational autoencoder with a convolutional network. This is followed by k-means clustering on the learned embedding to produce a segmentation and cluster probabilities.

Usage

uvaSegTrain(patches, k, convControl, standardize = TRUE, patches2)

Arguments

patches

input patch matrix, see getNeighborhoodInMask

k

number of embedding layers

convControl

optional named list of control parameters (see the source code for defaults):

  • hiddenAct: activation function for hidden layers, e.g. relu

  • img_chns: number of image channels, e.g. 1

  • filters: number of convolution filters, e.g. 32L

  • conv_kern_sz: convolution kernel size, e.g. 1L

  • front_kernel_size: front kernel size, e.g. 2L

  • intermediate_dim: width of the intermediate layer, e.g. 32L

  • epochs: number of training epochs, e.g. 50

  • batch_size: training batch size, e.g. 32

  • squashAct: activation function for squash layers, e.g. sigmoid

  • tensorboardLogDirectory: directory in which tensorboard logs are stored
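A control list can be assembled directly in R. The sketch below uses the field names from the bullet list above; every value is an illustrative choice, not a documented default, and myControl is a hypothetical name:

```r
# Illustrative convControl list; values are example choices only.
myControl <- list(
  hiddenAct = "relu",            # hidden-layer activation
  img_chns = 1L,                 # single-channel (grayscale) patches
  filters = 32L,                 # number of convolution filters
  conv_kern_sz = 1L,             # convolution kernel size
  front_kernel_size = 2L,        # front kernel size
  intermediate_dim = 32L,        # intermediate-layer width
  epochs = 50,                   # training epochs
  batch_size = 32,               # mini-batch size
  squashAct = "sigmoid",         # squash-layer activation
  tensorboardLogDirectory = tempdir()  # tensorboard log location
)
# then: uvaSegTrain( patches, k, convControl = myControl )
```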

standardize

boolean; if TRUE, patches are standardized before training

patches2

input target patch matrix (see getNeighborhoodInMask); may be useful for super-resolution applications
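One way to use patches2 for super-resolution is to draw source patches from a degraded image and target patches from the original, at the same mask locations. This is a sketch under that assumption, not an example from the package:

```r
library( ANTsR )
library( ANTsRNet )

# High-resolution reference image and a degraded (low-resolution) copy
# resampled back onto the same grid so both can share one mask.
hiRes <- ri( 1 ) %>% iMath( "Normalize" )
loRes <- resampleImage( hiRes, c( 4, 4 ) ) %>% resampleImageToTarget( hiRes )

# Sample both images at identical voxel locations.
mask <- randomMask( getMask( hiRes ), 50 )
r <- c( 3, 3 )
pLo <- getNeighborhoodInMask( loRes, mask, r, boundary.condition = "NA" )
pHi <- getNeighborhoodInMask( hiRes, mask, r, boundary.condition = "NA" )

# Train with low-resolution inputs and high-resolution targets.
mdl <- uvaSegTrain( pLo, 6, patches2 = pHi )
```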

Value

the trained model is returned

Author(s)

Avants BB

Examples


## Not run: 

library( ANTsR )
library( ANTsRNet )

# Build a small normalized image and sample patches within a random mask.
img <- ri( 1 ) %>% resampleImage( c( 4, 4 ) ) %>% iMath( "Normalize" )
mask <- randomMask( getMask( img ), 50 )
r <- c( 3, 3 )
patch <- getNeighborhoodInMask( img, mask, r, boundary.condition = "NA" )

# Train the variational autoencoder with a 6-dimensional embedding.
uvaSegModel <- uvaSegTrain( patch, 6 )

## End(Not run)


ANTsX/ANTsRNet documentation built on April 23, 2024, 1:24 p.m.