Description

Fit single-hidden-layer neural network, possibly with skip-layer connections.
Usage

nnet(x, ...)

## S3 method for class 'formula'
nnet(formula, data, weights, ...,
     subset, na.action, contrasts = NULL)

## Default S3 method:
nnet(x, y, weights, size, Wts, mask,
     linout = FALSE, entropy = FALSE, softmax = FALSE,
     censored = FALSE, skip = FALSE, rang = 0.7, decay = 0,
     maxit = 100, Hess = FALSE, trace = TRUE, MaxNWts = 1000,
     abstol = 1.0e-4, reltol = 1.0e-8, ...)

Arguments

formula
A formula of the form class ~ x1 + x2 + ...

x
matrix or data frame of x values for examples.

y
matrix or data frame of target values for examples.

weights
(case) weights for each example – if missing defaults to 1.

size
number of units in the hidden layer. Can be zero if there are skip-layer units.

data
Data frame from which variables specified in formula are preferentially to be taken.

subset
An index vector specifying the cases to be used in the training sample. (NOTE: If given, this argument must be named.)

na.action
A function to specify the action to be taken if NAs are found. The default action is for the procedure to fail. An alternative is na.omit, which leads to rejection of cases with missing values on any required variable. (NOTE: If given, this argument must be named.)

contrasts
a list of contrasts to be used for some or all of the factors appearing as variables in the model formula.

Wts
initial parameter vector. If missing chosen at random.

mask
logical vector indicating which parameters should be optimized (default all).

linout
switch for linear output units. Default logistic output units.

entropy
switch for entropy (= maximum conditional likelihood) fitting. Default by least-squares.

softmax
switch for softmax (log-linear model) and maximum conditional likelihood fitting. linout, entropy, softmax and censored are mutually exclusive.

censored
A variant on softmax, in which non-zero targets mean possible classes. Thus for softmax a row of (0, 1, 1) means one example each of classes 2 and 3, but for censored it means one example whose class is only known to be 2 or 3.

skip
switch to add skip-layer connections from input to output.

rang
Initial random weights on [-rang, rang]. Value about 0.5 unless the inputs are large, in which case it should be chosen so that rang * max(|x|) is about 1.

decay
parameter for weight decay. Default 0.

maxit
maximum number of iterations. Default 100.

Hess
If true, the Hessian of the measure of fit at the best set of weights found is returned as component Hessian.

trace
switch for tracing optimization. Default TRUE.

MaxNWts
The maximum allowable number of weights. There is no intrinsic limit in the code, but increasing MaxNWts will probably allow fits that are very slow and time-consuming.

abstol
Stop if the fit criterion falls below abstol, indicating an essentially perfect fit.

reltol
Stop if the optimizer is unable to reduce the fit criterion by a factor of at least 1 - reltol.

...
arguments passed to or from other methods.
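The softmax/censored target coding described above can be illustrated with a small indicator matrix. This is a sketch using class.ind() from the nnet package (bundled with standard R distributions); the variable names are illustrative:

```r
library(nnet)

# class.ind() builds a 0/1 indicator matrix from a class vector:
y <- class.ind(c("a", "b", "b", "c"))
print(y)
# Each row has a single 1 marking that example's class -- the coding
# expected when fitting with softmax = TRUE.

# Under censored = TRUE, non-zero entries instead mark *possible* classes:
# a row of c(0, 1, 1) means "this example's class is only known to be
# b or c", not one example each of b and c.
y.cens <- y
y.cens[4, ] <- c(0, 1, 1)  # example 4: class known only up to {b, c}
```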
Details

If the response in formula is a factor, an appropriate classification network is constructed; this has one output and entropy fit if the number of levels is two, and a number of outputs equal to the number of classes and a softmax output stage for more levels. If the response is not a factor, it is passed on unchanged to nnet.default.

Optimization is done via the BFGS method of optim.
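The factor-response dispatch can be checked directly on the fitted object's components (n, softmax, entropy). A minimal sketch, assuming the nnet package (bundled with standard R distributions) and the built-in iris data:

```r
library(nnet)
set.seed(1)

# Three-level factor response: three output units and a softmax stage.
fit <- nnet(Species ~ ., data = iris, size = 2, rang = 0.1,
            decay = 5e-4, maxit = 50, trace = FALSE)
fit$n        # network sizes (inputs, hidden, outputs): 4 2 3
fit$softmax  # TRUE

# Two-level factor response: a single output unit with entropy fit.
ir2 <- droplevels(subset(iris, Species != "virginica"))
fit2 <- nnet(Species ~ ., data = ir2, size = 2, rang = 0.1,
             maxit = 50, trace = FALSE)
fit2$n       # 4 2 1
fit2$entropy # TRUE
```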
Value

object of class "nnet" or "nnet.formula". Mostly internal structure, but has components

wts
the best set of weights found

value
value of fitting criterion plus weight decay term.

fitted.values
the fitted values for the training data.

residuals
the residuals for the training data.

convergence
1 if the maximum number of iterations was reached, otherwise 0.
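These components can be inspected on any fitted object. A short sketch, assuming the nnet package (bundled with standard R distributions) and the built-in iris data:

```r
library(nnet)
set.seed(1)

fit <- nnet(Species ~ ., data = iris, size = 2, rang = 0.1,
            decay = 5e-4, maxit = 200, trace = FALSE)

length(fit$wts)          # best weights found, one per connection:
                         # (4+1)*2 hidden + (2+1)*3 output = 19
fit$value                # fitting criterion plus weight-decay term
head(fit$fitted.values)  # fitted values for the training data
head(fit$residuals)      # residuals for the training data
fit$convergence          # 1 if maxit was reached, otherwise 0
```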
References

Ripley, B. D. (1996) Pattern Recognition and Neural Networks. Cambridge.

Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth edition. Springer.
Examples

# use half the iris data
ir <- rbind(iris3[,,1], iris3[,,2], iris3[,,3])
targets <- class.ind( c(rep("s", 50), rep("c", 50), rep("v", 50)) )
samp <- c(sample(1:50, 25), sample(51:100, 25), sample(101:150, 25))
ir1 <- nnet(ir[samp,], targets[samp,], size = 2, rang = 0.1,
            decay = 5e-4, maxit = 200)
test.cl <- function(true, pred) {
    true <- max.col(true)
    cres <- max.col(pred)
    table(true, cres)
}
test.cl(targets[samp,], predict(ir1, ir[samp,]))

# or
ird <- data.frame(rbind(iris3[,,1], iris3[,,2], iris3[,,3]),
                  species = factor(c(rep("s", 50), rep("c", 50), rep("v", 50))))
ir.nn2 <- nnet(species ~ ., data = ird, subset = samp, size = 2, rang = 0.1,
               decay = 5e-4, maxit = 200)
table(ird$species[samp], predict(ir.nn2, ird[samp,], type = "class"))

# weights: 19
initial value 55.522326
iter 10 value 44.666535
iter 20 value 24.116422
iter 30 value 0.966963
iter 40 value 0.754279
iter 50 value 0.608768
iter 60 value 0.567790
iter 70 value 0.555242
iter 80 value 0.547182
iter 90 value 0.541274
iter 100 value 0.540703
iter 110 value 0.540227
iter 120 value 0.539832
iter 130 value 0.539637
iter 140 value 0.539448
iter 150 value 0.539436
iter 160 value 0.539432
iter 170 value 0.539430
iter 180 value 0.539430
iter 190 value 0.539429
iter 200 value 0.539429
final value 0.539429
stopped after 200 iterations
cres
true 1 2 3
1 22 0 3
2 0 25 0
3 0 0 25
# weights: 19
initial value 82.533796
iter 10 value 35.057219
iter 20 value 34.948317
iter 30 value 34.869540
iter 40 value 29.337264
iter 50 value 0.581108
iter 60 value 0.517007
iter 70 value 0.462494
iter 80 value 0.430756
iter 90 value 0.424800
iter 100 value 0.421029
iter 110 value 0.418184
iter 120 value 0.418002
iter 130 value 0.417940
iter 140 value 0.417892
iter 150 value 0.417885
iter 160 value 0.417884
final value 0.417884
converged
c s v
c 22 0 3
s 0 25 0
v 0 0 25