This class simulates a biologically realistic layer of neurons in the Leabra framework. It consists of several unit objects in the field units and some layer-specific variables.
An R6Class generator object with methods for calculating changes of activation in a layer of neurons.
units: A list with all unit objects of the layer.

avg_act: The average activation of all units in the layer (this is an active binding).

n: Number of units in the layer.

weights: A receiving x sending weight matrix, in which each receiving unit (row) holds the current weight values for the sending units (columns). The weights are set by the network object, because they depend on the connections to other layers.

ce_weights: Sigmoidal contrast-enhanced version of the weight matrix weights. These weights are also set by the network object.

layer_number: Layer number in the network (this is 1 if you create a layer on your own, without the network class).
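The sigmoidal contrast enhancement applied to ce_weights can be sketched in plain R. The gain and offset values below are the usual Leabra defaults and are an assumption here, not values read from this package's source:

```r
# Sketch of sigmoidal contrast enhancement for weight values.
# gain = 6 and off = 1.25 are common Leabra defaults (assumed here).
contrast_enhance <- function(w, gain = 6, off = 1.25) {
  1 / (1 + (off * (1 - w) / w)^gain)
}

w <- c(0.2, 0.5, 0.8)
round(contrast_enhance(w), 3)
# → 0.000 0.208 0.996: weights below the sigmoid's midpoint are pushed
# toward 0, weights above it toward 1
```

The idea is to magnify small differences between medium-sized weights, so receiving units differentiate their inputs more sharply.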
new(dim, g_i_gain = 2): Creates an object of this class with default parameters.

  dim: A pair of numbers giving the dimensions (rows and columns) of the layer.

  g_i_gain: Gain factor for the inhibitory conductance; if you want less activation in a layer, set this higher.
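As a quick illustration (a sketch, assuming the leabRa package is installed and loaded), a layer with stronger inhibition can be created by raising g_i_gain:

```r
library(leabRa)

l_default <- layer$new(c(5, 5))               # g_i_gain = 2 (default)
l_sparse  <- layer$new(c(5, 5), g_i_gain = 4) # stronger inhibition, so
                                              # less activation when cycling
l_default$n # number of units: 25
```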
get_unit_acts(): Returns a vector with the activations of all units of the layer.

get_unit_scaled_acts(): Returns a vector with the scaled activations of all units of the layer. Scaling is done with recip_avg_act_n, a reciprocal function of the number of active units.
cycle(intern_input, ext_input): Iterates one time step with the layer object.

  intern_input: Vector with inputs from all other layers. Each input has already been scaled by a reciprocal function of the number of active units (recip_avg_act_n) of the sending layer and by the connection strength between the receiving and sending layer. The weight matrix ce_weights is multiplied with this input vector to get the excitatory conductance for each unit in the layer.

  ext_input: Vector with inputs not coming from another layer, with length equal to the number of units in this layer. If empty (NULL), no external inputs are processed. If the external inputs are not clamped, this is actually an excitatory conductance value, which is added to the conductance produced by the internal input and the weight matrix.
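A minimal sketch of cycling a stand-alone layer. Normally the network object sets weights and ce_weights; the diagonal weight matrix below is a made-up stand-in for that step, so treat this as an assumption about driving a layer outside a network:

```r
library(leabRa)

l <- layer$new(c(2, 2))    # 4 units
l$weights <- diag(4) * 0.6 # hypothetical weights; usually set by the network
l$set_ce_weights()         # contrast-enhance them before cycling

# drive the layer with a constant internal input for 20 time steps
for (i in 1:20) {
  l$cycle(intern_input = rep(0.3, 4), ext_input = NULL)
}
l$get_unit_acts() # activations after settling for a while
```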
clamp_cycle(activations): Iterates one time step with the layer object with clamped activations, meaning that activations are set instantaneously without time integration.

  activations: Activations you want to clamp to the units in the layer.
get_unit_act_avgs(): Returns a list with the short, medium and long term activation averages of all units in the layer as vectors. The super short term average is not returned, and the long term average is not updated before being returned (this is done in the function chg_wt() with the method updt_unit_avg_l()). These averages are used by the network class to calculate weight changes.
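For illustration (a sketch, assuming the leabRa package is loaded), the averages can be inspected after clamping activations for a few cycles:

```r
library(leabRa)

l <- layer$new(c(3, 3))
acts <- runif(9, 0.05, 0.95)
for (i in 1:10) l$clamp_cycle(acts) # averages keep updating while clamping

avgs <- l$get_unit_act_avgs()
str(avgs) # a list of vectors: short, medium and long term averages
```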
updt_unit_avg_l(): Updates the long-term average (avg_l) of all units in the layer, usually done after a plus phase.
updt_recip_avg_act_n(): Updates the avg_act_inert and recip_avg_act_n variables; these variables are updated before the weights are changed instead of cycle by cycle. This version of the function assumes full connectivity between layers.
reset(random = FALSE): Sets the activation and activation averages of all units to 0. Used to begin trials from a stationary point.

  random: Logical; if TRUE, the activations are set randomly between .05 and .95 for every unit instead of to 0.
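A short sketch (assuming the leabRa package is loaded) of resetting a layer between trials:

```r
library(leabRa)

l <- layer$new(c(5, 5))
l$clamp_cycle(runif(25, 0.05, 0.95))
l$avg_act              # now greater than 0

l$reset()              # back to a stationary point
l$avg_act              # 0 again

l$reset(random = TRUE) # random activations between .05 and .95
l$avg_act
```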
set_ce_weights(): Sets contrast-enhanced weight values.
get_unit_vars(show_dynamics = TRUE, show_constants = FALSE): Returns a data frame with the current state of all unit variables in the layer. Every row is a unit. You can choose whether you want dynamic values and/or constant values. This can be useful for analyzing what happens in the units of a layer, which would otherwise not be possible, because most of the variables (fields) are private in the unit class.

  show_dynamics: Should dynamic values be shown? Default is TRUE.

  show_constants: Should constant values be shown? Default is FALSE.
get_layer_vars(show_dynamics = TRUE, show_constants = FALSE): Returns a data frame with one row with the current state of the variables in the layer. You can choose whether you want dynamic values and/or constant values. This can be useful for analyzing what happens in a layer, which would otherwise not be possible, because some of the variables (fields) are private in the layer class.

  show_dynamics: Should dynamic values be shown? Default is TRUE.

  show_constants: Should constant values be shown? Default is FALSE.
O'Reilly, R. C., Munakata, Y., Frank, M. J., Hazy, T. E., and Contributors (2016). Computational Cognitive Neuroscience. Wiki Book, 3rd (partial) Edition. URL: http://ccnbook.colorado.edu
Also have a look at https://grey.colorado.edu/emergent/index.php/Leabra (especially the link to the 'MATLAB' code) and https://en.wikipedia.org/wiki/Leabra
l <- layer$new(c(5, 5)) # create a 5 x 5 layer with default leabra values
l$g_e_avg # private values cannot be accessed
# if you want to see all variables, you need to use the function
l$get_layer_vars(show_dynamics = TRUE, show_constants = TRUE)
# if you want to see a summary of all units without constant values
l$get_unit_vars(show_dynamics = TRUE, show_constants = FALSE)
# let us clamp the activation of the 25 units to some random values between
# 0.05 and 0.95
l <- layer$new(c(5, 5))
activations <- runif(25, 0.05, .95)
l$avg_act
l$clamp_cycle(activations)
l$avg_act
# what happened to the unit activations?
l$get_unit_acts()
# compare with activations
activations
# scaled activations are scaled by the average activation of the layer and
# should be smaller
l$get_unit_scaled_acts()