neuralStyleTransfer: Neural style transfer

View source: R/neuralTransferStyle.R

neuralStyleTransfer    R Documentation

Neural style transfer

Description

The popular neural style transfer algorithm, as described in the references listed under Details.

Usage

neuralStyleTransfer(
  contentImage,
  styleImages,
  initialCombinationImage = NULL,
  numberOfIterations = 10,
  learningRate = 1,
  totalVariationWeight = 8.5e-05,
  contentWeight = 0.025,
  styleImageWeights = 1,
  contentLayerNames = c("block5_conv2"),
  styleLayerNames = "all",
  contentMask = NULL,
  styleMasks = NULL,
  useShiftedActivations = TRUE,
  useChainedInference = TRUE,
  verbose = FALSE,
  outputPrefix = NULL
)

Arguments

contentImage

ANTsImage (1 or 3-component). Content (or base) image.

styleImages

ANTsImage or list of ANTsImages as the style (or reference) image.

initialCombinationImage

ANTsImage (1 or 3-component). Starting point for the optimization. This allows one to continue from the output of a previous run; if NULL, optimization starts from the content image. Note that the original paper starts with a noise image.

numberOfIterations

Number of gradient steps taken during optimization.

learningRate

Learning rate for the Adam optimizer.

totalVariationWeight

Weight of the total variation regularization term, which keeps the features of the output image locally coherent (see the illustrative sketch under Details).

contentWeight

Weight of the content layers in the optimization function.

styleImageWeights

float or vector of floats. Weights of the style term in the optimization function, one for each style image. Either specify a single scalar to be used for all the images or one weight per image. The style term computes the sum of the L2 norm between the Gram matrices of the different layers (using ImageNet-trained VGG) of the style and combination images (see the illustrative sketch under Details).

contentLayerNames

vector of strings. Names of VGG layers from which to compute the content loss.

styleLayerNames

vector of strings. Names of VGG layers from which to compute the style loss. If "all", the layers used are c('block1_conv1', 'block1_conv2', 'block2_conv1', 'block2_conv2', 'block3_conv1', 'block3_conv2', 'block3_conv3', 'block3_conv4', 'block4_conv1', 'block4_conv2', 'block4_conv3', 'block4_conv4', 'block5_conv1', 'block5_conv2', 'block5_conv3', 'block5_conv4'). This is a proposed improvement from https://arxiv.org/abs/1605.04603. In the original implementation, the layers used are c('block1_conv1', 'block2_conv1', 'block3_conv1', 'block4_conv1', 'block5_conv1').

contentMask

an ANTsImage mask to specify the region for content consideration.

styleMasks

ANTsImage masks to specify the region for style consideration.

useShiftedActivations

boolean to determine whether or not to use shifted activations in calculating the Gram matrix (improvement mentioned in https://arxiv.org/abs/1605.04603).

useChainedInference

boolean to determine whether or not to use chained inference, another improvement proposed in https://arxiv.org/abs/1605.04603.

verbose

boolean to print progress to the screen.

outputPrefix

If specified, a PNG image is written to disk at each iteration using this file prefix.

Details

Based on the methods described in https://arxiv.org/abs/1508.06576 and https://arxiv.org/abs/1605.04603.

The implementation is adapted from François Chollet's Keras example

https://keras.io/examples/generative/neural_style_transfer/

and titu1994's modifications

https://github.com/titu1994/Neural-Style-Transfer

so that it can be modified and experimented with for medical images.
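
As a rough conceptual sketch of how the style and total variation terms weighted above behave, consider the following. The helpers gramMatrix, styleLoss, and totalVariationLoss are hypothetical names used purely for illustration, not the internal implementation:

# Hypothetical sketch of the loss terms (illustration only).
gramMatrix <- function( features ) {
  # features: (height * width) x channels matrix of VGG layer activations
  t( features ) %*% features
}

styleLoss <- function( styleFeatures, combinationFeatures ) {
  # L2 norm between the Gram matrices of a single VGG layer
  styleGram <- gramMatrix( styleFeatures )
  combinationGram <- gramMatrix( combinationFeatures )
  numberOfChannels <- ncol( styleFeatures )
  featureMapSize <- nrow( styleFeatures )
  sum( ( styleGram - combinationGram )^2 ) /
    ( 4 * numberOfChannels^2 * featureMapSize^2 )
}

totalVariationLoss <- function( image ) {
  # image: 2-D matrix; penalizes differences between neighboring pixels
  dx <- image[-1, , drop = FALSE] - image[-nrow( image ), , drop = FALSE]
  dy <- image[, -1, drop = FALSE] - image[, -ncol( image ), drop = FALSE]
  sum( dx^2 ) + sum( dy^2 )
}

The total loss minimized with Adam then combines the content term (scaled by contentWeight), the style terms over the chosen layers and style images (scaled by styleImageWeights), and the total variation term (scaled by totalVariationWeight).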

Value

ANTs 3-component image.

Author(s)

Tustison, NJ

Examples

## Not run: 
library( ANTsRNet )
library( ANTsR )

# Illustrative sketch (assumes a working TensorFlow/Keras backend).
contentImage <- antsImageRead( getANTsRData( "r16" ) )
styleImage <- antsImageRead( getANTsRData( "r64" ) )
combinedImage <- neuralStyleTransfer( contentImage, styleImage, verbose = TRUE )

## End(Not run)
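
A further hedged sketch showing how a list of style images with per-image weights might be passed; styleImage1 and styleImage2 are assumed, hypothetical ANTsImages:

## Not run: 
# Hypothetical: two style images with individual style weights.
combinedImage <- neuralStyleTransfer( contentImage,
  list( styleImage1, styleImage2 ),
  styleImageWeights = c( 1.0, 0.5 ),
  numberOfIterations = 10, verbose = TRUE )

## End(Not run)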
