Description
This function creates a network
object for fitting a mistnet model.
Usage

mistnet(x, y, layer.definitions, loss, updater, sampler,
  n.importance.samples, n.minibatch, training.iterations,
  shuffle, initialize.biases, initialize.weights)
Arguments

x: a matrix of predictor variables, with one row per observation.

y: a matrix of response variables, with the same number of rows as x.

layer.definitions: a list of layer definitions, as produced by defineLayer.

loss: a loss object defining the function to be minimized during training (e.g. bernoulliLoss()).

updater: an updater object controlling the gradient descent procedure (e.g. adagrad.updater).

sampler: a sampler object for generating the network's random variables (e.g. gaussian.sampler).

n.importance.samples: an integer; the number of Monte Carlo samples drawn during each training iteration.

n.minibatch: an integer; the number of rows of data in each minibatch.

training.iterations: an integer; the number of training iterations to perform when the network is created.

shuffle: logical. Should the data be shuffled after each epoch? Defaults to TRUE.

initialize.biases: logical. Should the network's final layer's biases be initialized to nonzero values?

initialize.weights: logical. Should the weights in each layer be initialized automatically?
Details

The mistnet function produces a network object that defines a joint
distribution over y given x. This distribution is defined by a stochastic
feed-forward neural network (Neal 1992), which is trained using a variant of
backpropagation described in Tang and Salakhutdinov (2013) and Harris (2014).
During each training iteration, the model descends the gradient defined by its
loss function, averaged over a number of Monte Carlo samples and a number of
rows of data.
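The averaging described above can be sketched in a few lines of R. This is a
conceptual illustration only, not the package's internal code; the object
names (grads, gradient.estimate) and the scalar parameter are hypothetical.

```r
set.seed(1)
n.minibatch <- 10
n.importance.samples <- 30

# Toy per-row, per-sample "gradients" for a single scalar parameter:
# one row of the matrix per data row, one column per Monte Carlo sample.
grads <- matrix(rnorm(n.minibatch * n.importance.samples),
                nrow = n.minibatch, ncol = n.importance.samples)

# The update direction averages over both the samples and the rows.
gradient.estimate <- mean(grads)

# One gradient descent step on the toy parameter.
learning.rate <- 0.01
parameter <- 0.5
parameter <- parameter - learning.rate * gradient.estimate
```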
A network object concatenates the predictor variables in x with random
variables produced by a sampler and passes the resulting data vectors through
one or more layer objects to make predictions about y. The weights and biases
in each layer can be trained using the network's fit method (see example
below).
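The forward pass just described can be sketched for a single observation. This
is an illustrative sketch with assumed shapes matching the example below (17
predictors, 10 random variables, a 30-unit first layer), not the package's
internal implementation.

```r
set.seed(1)
x.row <- rnorm(17)                # one row of predictor variables
z     <- rnorm(10)                # random variables drawn by the sampler
input <- c(x.row, z)              # concatenated input vector, length 27

rectify <- function(a) pmax(a, 0) # rectified-linear nonlinearity

W <- matrix(rnorm(30 * 27, sd = 0.1), nrow = 30)  # first layer's weights
b <- rep(0, 30)                                    # first layer's biases

# Activations that would be passed on to the next layer.
hidden <- rectify(W %*% input + b)
```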
Note

network objects produced by mistnet are ReferenceClasses, and behave
differently from other R objects. In particular, binding a network or other
reference-class object to a new variable name will not produce a copy of the
original object, but will instead create a new alias for it.
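This aliasing behavior can be demonstrated with any ReferenceClass; the
Counter class below is a minimal stand-in, not part of the package.

```r
library(methods)

# A trivial reference class with one numeric field.
Counter <- setRefClass("Counter", fields = list(n = "numeric"))

a <- Counter$new(n = 0)
b <- a        # 'b' is an alias for the same object, not a copy
b$n <- 5
a$n           # 5: modifying the alias modified the original

a.copy <- a$copy()  # use the copy() method to get an independent object
a.copy$n <- 99
a$n           # still 5
```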
References

Harris, D.J. (2014) Building realistic assemblages with a Joint Species Distribution Model. bioRxiv preprint. http://dx.doi.org/10.1101/003947

Neal, R.M. (1992) Connectionist learning of belief networks. Artificial Intelligence, 56, 71-113.

Tang, Y. & Salakhutdinov, R. (2013) Learning stochastic feedforward neural networks. Advances in Neural Information Processing Systems 26 (eds C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani & K.Q. Weinberger), pp. 530-538.
Examples

# 107 rows of fake data
x = matrix(rnorm(1819), nrow = 107, ncol = 17)
y = dropoutMask(107, 14)
# Create the network object
net = mistnet(
x = x,
y = y,
layer.definitions = list(
defineLayer(
nonlinearity = rectify.nonlinearity(),
size = 30,
prior = gaussian.prior(mean = 0, sd = 0.1)
),
defineLayer(
nonlinearity = rectify.nonlinearity(),
size = 12,
prior = gaussian.prior(mean = 0, sd = 0.1)
),
defineLayer(
nonlinearity = sigmoid.nonlinearity(),
size = ncol(y),
prior = gaussian.prior(mean = 0, sd = 0.1)
)
),
loss = bernoulliLoss(),
updater = adagrad.updater(learning.rate = .01),
sampler = gaussian.sampler(ncol = 10L, sd = 1),
n.importance.samples = 30,
n.minibatch = 10,
training.iterations = 0
)
# Fit the model
net$fit(iterations = 10)
predict(net, newdata = x, n.importance.samples = 10)