ConvolutionTranspose: Convolution Transpose Layer Factory

Description

Layer factory function to create a convolution transpose layer.

Usage

ConvolutionTranspose(filter_shape, num_filters = NULL,
  activation = activation_identity, init = init_glorot_uniform(),
  pad = FALSE, strides = 1, sharing = TRUE, bias = TRUE,
  init_bias = 0, output_shape = NULL, max_temp_mem_size_in_samples = 0,
  reduction_rank = 1, name = "")

Arguments

filter_shape

int or list of ints – shape (spatial extent) of the receptive field, not including the input feature-map depth, e.g. c(3, 3) for a 2D convolution.

num_filters

(int, defaults to NULL) – number of filters (output feature-map depth), or an empty shape to denote scalar output items (the output shape will have no depth axis)

activation

(Function, defaults to activation_identity) – optional activation function

init

(scalar or matrix or initializer, defaults to init_glorot_uniform()) – initial value of weights W

pad

(bool or list of bools, defaults to FALSE) – if FALSE, the operation will be shifted over the “valid” area of the input, that is, no value outside the area is used. If pad = TRUE, on the other hand, the operation will be applied to all input positions, and positions outside the valid region will be treated as containing zero. Use a list to specify a per-axis value.

strides

(int or list of ints, defaults to 1) – stride of the operation. Use a list of ints to specify a per-axis value.

bias

(bool, defaults to TRUE) – whether to include a bias term

init_bias

(scalar or matrix or initializer, defaults to 0) – initial value of the bias b

output_shape

(int or list of ints) – output shape. When strides > 1, several output shapes are consistent with the same input shape, so the output shape is ambiguous; the user can specify the wanted output shape to resolve this (see the sketch after this argument list). Note that the specified shape must satisfy the condition that if a convolution is performed from the output with the same settings, the result must have the same shape as the input.

name

(string, optional) – the name of the Function instance in the network

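To make the output_shape constraint concrete, the sketch below computes the spatial output extent along one axis from the standard relation o = s * (i - 1) + k - 2p. This is the textbook formula rather than code taken from CNTK, and the helper name conv_transpose_output_dim is hypothetical.

# Illustrative only: the standard output-size relation for a transposed
# convolution along one spatial axis (helper name is hypothetical, not
# part of the CNTK-R API).
conv_transpose_output_dim <- function(input_dim, filter_dim, stride, pad_amount) {
  stride * (input_dim - 1) + filter_dim - 2 * pad_amount
}

# With strides > 1, several output extents convolve back to the same
# input extent: with a 3-wide filter, stride 2, and one cell of
# padding, both 13 and 14 reduce to 7, which is why output_shape is
# needed to disambiguate.
conv_transpose_output_dim(7, 3, 2, 1)  # 13; output_shape = 14 is also valid
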
Details

This implements a convolution_transpose operation over items arranged on an N-dimensional grid, such as pixels in an image. Typically, each item is a vector (e.g. pixel: R,G,B), and the result is, in turn, a vector. The item-grid dimensions are referred to as the spatial dimensions (e.g. dimensions of an image), while the vector dimensions of the individual items are often called feature-map depth.

Convolution transpose is also known as fractionally strided convolution or deconvolution. This operation is used in image and language processing applications. It supports arbitrary dimensions, strides, and padding.

The forward and backward computations of convolution transpose are the inverse of those of convolution. That is, during the forward pass the input's items are spread into the output in the same way gradients are spread during the backward pass of convolution. The backward pass, conversely, performs a convolution, just like the forward pass of convolution.

The size (spatial extent) of the receptive field for convolution transpose is given by filter_shape. E.g. to specify a 2D convolution transpose, filter_shape should be a list of two integers, such as c(5, 5); an example for a 3D convolution transpose (e.g. video or an MRI scan) would be filter_shape = c(3, 3, 3); while for a 1D convolution transpose (e.g. audio or text), filter_shape has a single element, such as c(3).
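
For instance, the calls below construct such layers (a minimal sketch; it assumes the CNTK-R bindings are attached via library(cntk), and the num_filters values are arbitrary):

library(cntk)  # assumed package name for the CNTK-R bindings

# 2D convolution transpose: 5x5 receptive field, 32 output feature maps
deconv_2d <- ConvolutionTranspose(c(5, 5), num_filters = 32)

# 3D convolution transpose (e.g. video or an MRI scan)
deconv_3d <- ConvolutionTranspose(c(3, 3, 3), num_filters = 16)

# 1D convolution transpose (e.g. audio or text)
deconv_1d <- ConvolutionTranspose(c(3), num_filters = 8)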

The dimension of the input items (feature-map depth) is not specified, but known from the input. The dimension of the output items generated for each item position is given by num_filters.

A ConvolutionTranspose instance owns its weight parameter tensors W and b, and exposes them as the attributes W and b. The weights will have the shape (input_feature_map_depth, num_filters, followed by the filter_shape dimensions).
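
As a concrete illustration (a sketch; the shapes in the comments follow from the rule above, and the RGB depth of 3 is an arbitrary assumption):

# A 2D convolution transpose with 16 filters and a 5x5 receptive field.
deconv <- ConvolutionTranspose(c(5, 5), num_filters = 16)

# Once the layer is applied to an input whose feature-map depth is 3
# (e.g. an RGB image), its weight tensor W takes the shape
# (3, 16, 5, 5), i.e. (input_feature_map_depth, num_filters, 5, 5),
# while b holds one bias per output feature map (16 values).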

