create_model_lstm_cnn: Creates LSTM/CNN network


View source: R/create_model.R

Description

Creates a network consisting of an arbitrary number of LSTM layers (at least one) and an optional CNN layer at the beginning. The last layer is a dense layer with softmax activation.
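
Written out by hand in the keras R API, the resulting architecture looks roughly like the sketch below. This is illustrative only, not the package's source; the one-hot input shape, the filter count of the CNN layer, and the loss are assumptions.

library(keras)

# Default settings: two LSTM layers of 128 cells, optional codon CNN
# (kernel size 3) in front, softmax output over num_targets classes.
# Assumes one-hot encoded input of shape (maxlen, vocabulary.size).
model <- keras_model_sequential() %>%
  layer_conv_1d(filters = 128, kernel_size = 3,        # if use.codon.cnn
                input_shape = c(50, 4)) %>%            # maxlen, vocabulary.size
  layer_lstm(units = 128, return_sequences = TRUE) %>% # all but the last LSTM
  layer_lstm(units = 128) %>%                          # layer.size cells each
  layer_dense(units = 4, activation = "softmax")       # num_targets

model %>% compile(optimizer = "adam",                  # solver
                  loss = "categorical_crossentropy")   # assumed loss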

Usage

create_model_lstm_cnn(
  maxlen = 50,
  dropout = 0,
  recurrent_dropout = 0,
  layer.size = 128,
  layers.lstm = 2,
  solver = "adam",
  use.codon.cnn = FALSE,
  learning.rate = 0.001,
  use.cudnn = TRUE,
  use.multiple.gpus = FALSE,
  merge.on.cpu = TRUE,
  gpu.num = 2,
  num_targets = 4,
  vocabulary.size = 4,
  bidirectional = FALSE,
  compile = TRUE
)

Arguments

maxlen

Length of predictor sequence.

dropout

Fraction of the units to drop for inputs.

recurrent_dropout

Fraction of the units to drop for recurrent state.

layer.size

Number of cells per network layer.

layers.lstm

Number of LSTM layers.

solver

Optimization method; one of "adam", "adagrad", "rmsprop" or "sgd".

use.codon.cnn

If TRUE, the first layer is a CNN layer with a kernel size of 3 to mimic codons (experimental).

learning.rate

Learning rate for optimizer.

use.cudnn

If TRUE, layer_cudnn_lstm() is used instead of layer_lstm(); requires a GPU that supports cuDNN.

use.multiple.gpus

If TRUE, multi_gpu_model() will be used with gpu.num GPUs.

merge.on.cpu

TRUE by default; FALSE is recommended if the server supports NVLink. Only relevant if use.multiple.gpus is TRUE.

gpu.num

Number of GPUs to use; only relevant if use.multiple.gpus is TRUE (see the sketch after the argument list).

num_targets

Number of possible predictions; determines the number of neurons in the dense output layer.

vocabulary.size

Number of unique characters in the vocabulary.

bidirectional

Whether to use a bidirectional wrapper for the LSTM layers.
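
compile

Whether to compile the model before returning it.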

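The three multi-GPU arguments presumably map onto keras' multi_gpu_model() as in the following sketch (the exact mapping is an assumption based on the argument names):

library(keras)

# Build the model on one device, then replicate it across two GPUs,
# merging results on the CPU (recommended unless NVLink is available).
model <- create_model_lstm_cnn(maxlen = 50, use.multiple.gpus = FALSE)
parallel_model <- multi_gpu_model(model,
                                  gpus = 2,          # gpu.num
                                  cpu_merge = TRUE)  # merge.on.cpu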

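Examples

For example, the following call (argument values are illustrative) returns a compiled model for predictor sequences of length 100 over a four-character vocabulary, with four output classes:

create_model_lstm_cnn(
  maxlen = 100,
  layer.size = 64,
  layers.lstm = 2,
  solver = "adam",
  learning.rate = 0.001,
  num_targets = 4,
  vocabulary.size = 4,
  bidirectional = TRUE
)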