# get_grad: Computes model gradient

In IshmaelBelghazi/gpuClassifieR: C/CUDA linear classifier for R

## Usage

```r
get_grad(object, feats, targets, decay = NULL, backend = "R", ...)

## S3 method for class 'model.spec'
get_grad(object, feats, targets, decay = NULL, backend = "R", ...)
```

## Arguments

• `object`: Linear classifier specification object.

• `feats`: Numeric matrix of features. Follows the usual convention of one example per row. For a model with M features and a dataset with N examples, the matrix should be N \times M.

• `targets`: Numeric matrix of one-hot encoded targets. Follows the usual convention of one target per row. For a model with K classes and a dataset with N examples, the matrix should be N \times K.

• `decay`: Numeric scalar. Tikhonov regularization coefficient (weight decay). Should be a non-negative real number.

• `backend`: Computation back-end (`'R'`, `'C'`, or `'CUDA'`).

• `...`: Other arguments passed to specific methods.
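The one-hot target layout described above can be illustrated with a short NumPy sketch (hypothetical example data, not from the package):

```python
import numpy as np

# For N = 4 examples and K = 3 classes, targets is an N x K matrix
# with exactly one 1 per row, in the column of the true class.
labels = np.array([0, 2, 1, 2])       # integer class labels
targets = np.eye(3)[labels]           # 4 x 3 one-hot matrix
```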

## Value

Numeric matrix of gradients, one per class, arranged in columns. For a model with M features and K classes the matrix should be M \times K.
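As a rough NumPy sketch (not the package's C/CUDA implementation, and assuming the classifier is softmax regression with cross-entropy loss), the returned M \times K gradient with Tikhonov weight decay could be computed as:

```python
import numpy as np

def get_grad_sketch(weights, feats, targets, decay=0.0):
    """Gradient of softmax cross-entropy loss plus weight decay.

    Shapes follow the documentation's conventions:
    feats is N x M, targets is N x K, weights and the result are M x K.
    """
    scores = feats @ weights                     # N x K class scores
    scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)    # N x K class probabilities
    # Cross-entropy gradient plus the Tikhonov (weight-decay) term.
    return feats.T @ (probs - targets) + decay * weights
```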

## Methods (by class)

• model.spec: Computes model gradient for linear classifier specification objects

## Author(s)

Mohamed Ishmael Diwan Belghazi

## Examples

```r
# Generate random initial weights
w_init <- matrix(rnorm(784 * 10), 784, 10)
# Construct model
linear_classifier <- Classifier(weights_init=w_init)
# Fetch training variables
feats <- mini_mnist$train$images
targets <- mini_mnist$train$labels
# Set decay coefficient
decay <- 0.01
# Compute the gradient on the training set using the three back-ends
gradient_R <- get_grad(linear_classifier, feats, targets, decay, 'R')
gradient_C <- get_grad(linear_classifier, feats, targets, decay, 'C')
gradient_CUDA <- get_grad(linear_classifier, feats, targets, decay, 'CUDA')
```

IshmaelBelghazi/gpuClassifieR documentation built on May 7, 2019, 6:45 a.m.