op_convolution: Convolution


Description

Computes the convolution of convolution_map (typically a tensor of learnable parameters) with operand (commonly an image or output of a previous convolution/pooling operation). This operation is used in image and language processing applications. It supports arbitrary dimensions, strides, sharing, and padding.

Usage

op_convolution(convolution_map, operand, strides = c(1), sharing = c(TRUE),
  auto_padding = c(TRUE), max_temp_mem_size_in_samples = 0, name = "")
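
A minimal call sketch is given below. Only op_convolution() and its arguments come from this page; op_input_variable() and op_parameter() are assumed names for the constructors that create the image input and the learnable filter tensor, and may differ in the actual CNTK-R API.

library(CNTK)  # assumed package name for the CNTK-R bindings

# Input: a 32 x 32 RGB image, stored channel-first as [3 x 32 x 32]
x <- op_input_variable(c(3, 32, 32))   # assumed constructor

# Filter bank: 64 output channels, 3 input channels, 5 x 5 spatial extent,
# i.e. shape [O x I x m1 x m2] = [64 x 3 x 5 x 5]
w <- op_parameter(c(64, 3, 5, 5))      # assumed constructor

# Convolve with stride 1 along each spatial axis
y <- op_convolution(w, x, strides = c(1, 1), name = "conv1")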

Arguments

strides

(int or vector of ints, defaults to 1) – stride of the operation. Use a vector of ints to specify a per-axis value.

name

(str) the name of the Function instance in the network

Details

This function operates on input tensors with dimensions [C × M1 × M2 × … × Mn]. This can be understood as a rank-n object, where each entry consists of a C-dimensional vector. For example, an RGB image would have dimensions [3 × W × H], i.e. a [W × H]-sized structure, where each entry (pixel) consists of a 3-tuple.
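
As an illustration in plain R (independent of CNTK), a 4 × 6 RGB image in this channel-first layout is simply a three-dimensional array:

# A 4 x 6 RGB image stored channel-first: dim = [C x W x H] = [3 x 4 x 6]
img <- array(runif(3 * 4 * 6), dim = c(3, 4, 6))
dim(img)     # 3 4 6
img[, 2, 5]  # the 3-tuple (R, G, B) at pixel (2, 5)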

op_convolution convolves the input operand with a rank-(n+2) tensor of (typically learnable) filters called convolution_map, of shape [O × I × m1 × m2 × … × mn] (typically mi ≪ Mi). The first dimension, O, is the number of convolution filters (i.e. the number of channels in the output). The second dimension, I, must match the number of channels in the input. The last n dimensions are the spatial extent of the filter. That is, for each output position, a vector of dimension O is computed. Hence, the total number of filter parameters is O × I × m1 × m2 × … × mn.
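
As a concrete check of the parameter count (plain R arithmetic): for an RGB input (I = 3), O = 64 filters and a 5 × 5 spatial extent, the filter tensor has shape [64 × 3 × 5 × 5]:

# Total filter parameters = O * I * m1 * m2
n_out  <- 64        # O: number of convolution filters / output channels
n_in   <- 3         # I: number of input channels
extent <- c(5, 5)   # m1, m2: spatial extent of each filter
n_out * n_in * prod(extent)   # 4800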

