View source: R/layers.embeddings.R
Embedding | R Documentation
Turns positive integers (indexes) into dense vectors of fixed size.
Embedding(input_dim, output_dim, embeddings_initializer = "uniform",
  embeddings_regularizer = NULL, embeddings_constraint = NULL,
  mask_zero = FALSE, input_length = NULL, input_shape = NULL)
input_dim: int > 0. Size of the vocabulary, i.e., 1 + the maximum integer index occurring in the input data.
output_dim: int >= 0. Dimension of the dense embedding.
embeddings_initializer: Initializer for the embeddings matrix.
embeddings_regularizer: Regularizer function applied to the embeddings matrix.
embeddings_constraint: Constraint function applied to the embeddings matrix.
mask_zero: Whether or not the input value 0 is a special "padding" value that should be masked out.
input_length: Length of the input sequences, when it is constant.
input_shape: Only needed when the layer is the first layer of a model; sets the input shape of the data.
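To make the shape conventions concrete, here is a minimal sketch (not from the package documentation). It assumes kerasR is loaded with a working Keras backend and that keras_predict() from the same package is available. A batch of integer sequences of shape (batch, input_length) is mapped to dense vectors of shape (batch, input_length, output_dim).

if(keras_available()) {
  # a batch of 4 sequences, each of length 6, with integer indices in 0..19
  X <- matrix(sample(0:19, 4 * 6, TRUE), ncol = 6)

  mod <- Sequential()
  mod$add(Embedding(input_dim = 20, output_dim = 10, input_length = 6))
  keras_compile(mod, loss = "mse", optimizer = RMSprop())

  # every index becomes a 10-dimensional vector,
  # so the embedded batch has shape (4, 6, 10)
  emb <- keras_predict(mod, X)
  dim(emb)
}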
Taylor B. Arnold, taylor.arnold@acm.org
Chollet, Francois. 2015. Keras: Deep Learning library for Theano and TensorFlow.
Other layers: Activation, ActivityRegularization, AdvancedActivation, BatchNormalization, Conv, Dense, Dropout, Flatten, GaussianNoise, LayerWrapper, LocallyConnected, Masking, MaxPooling, Permute, RNN, RepeatVector, Reshape, Sequential
if(keras_available()) {
  # toy data: 100 integer sequences of length 100, with indices in 0..19
  X_train <- matrix(sample(0:19, 100 * 100, TRUE), ncol = 100)
  Y_train <- rnorm(100)

  mod <- Sequential()
  mod$add(Embedding(input_dim = 20, output_dim = 10, input_length = 100))
  mod$add(Dropout(0.5))

  mod$add(GRU(16))
  mod$add(Dense(1))
  mod$add(Activation("sigmoid"))

  keras_compile(mod, loss = "mse", optimizer = RMSprop())
  keras_fit(mod, X_train, Y_train, epochs = 3, verbose = 0)
}
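The mask_zero argument matters when sequences are padded to a common length: with mask_zero = TRUE, positions containing the index 0 are masked and ignored by mask-aware downstream layers such as GRU, and index 0 can no longer be used for a real token (so input_dim must still equal 1 + the maximum real index). The following sketch illustrates this under the same assumptions as the example above; it is not part of the package documentation.

if(keras_available()) {
  # real tokens use indices 1..19; pad the last 20 positions of each sequence with 0
  X_pad <- matrix(sample(1:19, 100 * 100, TRUE), ncol = 100)
  X_pad[, 81:100] <- 0
  Y <- rnorm(100)

  mod <- Sequential()
  mod$add(Embedding(input_dim = 20, output_dim = 10,
                    input_length = 100, mask_zero = TRUE))
  mod$add(GRU(16))   # the GRU skips the masked (padded) positions
  mod$add(Dense(1))

  keras_compile(mod, loss = "mse", optimizer = RMSprop())
  keras_fit(mod, X_pad, Y, epochs = 1, verbose = 0)
}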