View source: R/nnetPredIntS3.R
Description:

Computes prediction intervals for a new dataset at a given confidence level, based on the training dataset and the gradient of the neural network model with respect to its weight parameters.
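The construction follows the nonlinear-regression approach of De Veaux et al. (1998; see References). As a sketch (the notation here is assumed for illustration, not taken from the package source), the interval for a new input x0 is roughly

```latex
\hat{y}_0 \;\pm\; t_{1-\alpha/2,\,n-p}\; s \,\sqrt{1 + g_0^{\top}\left(J^{\top} J + \lambda I\right)^{-1} g_0}
```

where J is the Jacobian of the fitted outputs with respect to the weights over the training set, g0 is the gradient of the network output at x0, s is the residual standard error, n the number of training observations, p the number of weight parameters, and lambda the decay term used when J'J is near singular.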
Usage:

nnetPredInt(object, ...)
## Default S3 method:
nnetPredInt(object = NULL, xTrain, yTrain, yFit, node, wts, newData,
alpha = 0.05, lambda = 0.5, funName = 'sigmoid', ...)
## S3 method for class 'nnet'
nnetPredInt(object, xTrain, yTrain, newData, alpha = 0.05, lambda = 0.5,
funName = 'sigmoid', ...)
## S3 method for class 'nn'
nnetPredInt(object, xTrain, yTrain, newData, alpha = 0.05, lambda = 0.5,
funName = 'sigmoid', ...)
## S3 method for class 'rsnns'
nnetPredInt(object, xTrain, yTrain, newData, alpha = 0.05, lambda = 0.5,
funName = 'sigmoid', ...)
Arguments:

object: an object of class 'nnet' (returned by the 'nnet' package), 'nn' (returned by the 'neuralnet' package), or 'rsnns' (returned by the 'RSNNS' package). Setting object to NULL dispatches to the default method, which takes the weight parameters directly from the user.

xTrain: matrix or data frame of input values for the training dataset.

yTrain: vector of target values for the training dataset.

newData: matrix or data frame of the prediction dataset.

yFit: vector of fitted values produced by the training model, e.g. nnet$fitted.values ('nnet'), nn$net.result[[1]] ('neuralnet'), or rsnns$fitted.values ('RSNNS').

node: vector of integers giving the number of nodes in each layer. A multi-layer network has the structure (s0, s1, ..., sm), where s0 is the dimension of the input layer and sm the dimension of the output layer; sm is usually set to 1.

wts: numeric vector of the optimal weight parameters returned by the training model. For any node i in layer k, the weights are ordered as c(bias_ik, w_i1k, w_i2k, ..., w_ijk).
  - 'nnet' object ('nnet' package): set the weights directly as wts = nnet$wts.
  - 'nn' object ('neuralnet' package): first flatten the list of weights to a single vector with the transWeightListToVect function: wts = transWeightListToVect(wtsList, m).
  - 'rsnns' object ('RSNNS' package): transform and combine the weight and bias parameters into a single vector from weightMatrix(object) and extractNetInfo(object)$unitDefinitions$unitBias.
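As a quick sanity check on this layout, the expected length of wts for a given node vector can be computed; countWts below is a hypothetical helper for illustration, not part of the package:

```r
# Every node in layer k carries one bias plus one weight per node in layer k-1,
# so the total parameter count for a layered network is:
countWts <- function(node) {
  sum((node[-length(node)] + 1) * node[-1])
}
countWts(c(2, 3, 1))     # 13: (2+1)*3 + (3+1)*1
countWts(c(13, 5, 3, 1)) # 92: (13+1)*5 + (5+1)*3 + (3+1)*1
```

The first value matches the "# weights: 13" line that nnet prints for the 2-3-1 network in Example 2 below.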
alpha: significance level; the confidence level of the intervals is (1 - alpha). Defaults to 0.05.

lambda: decay parameter applied to the weights when the Jacobian matrix of the training dataset is singular. Defaults to 0.5.

funName: name of the neuron activation function, e.g. 'sigmoid' or 'tanh'. Defaults to 'sigmoid'.

...: additional arguments passed to the method.
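For reference, the funName options correspond to the usual activation definitions (a sketch of the common forms; the package's internal implementation is not shown here):

```r
sigmoid <- function(x) 1 / (1 + exp(-x))  # derivative: sigmoid(x) * (1 - sigmoid(x))
# 'tanh' is base R's tanh(); its derivative is 1 - tanh(x)^2
```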
Value:

A data frame of prediction intervals, containing the predicted value and the lower and upper bounds of each interval.

yPredValue: the column of predicted values.

lowerBound: the column of interval lower bounds.

upperBound: the column of interval upper bounds.
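The returned columns can be used directly; for example, the empirical coverage of the (1 - alpha) intervals can be checked against held-out targets (yTest is assumed here to be a numeric vector aligned with the rows of newData):

```r
# Fraction of test targets that fall inside their prediction intervals;
# this should be close to 1 - alpha for well-calibrated intervals.
coverage <- mean(yTest >= yPredInt$lowerBound & yTest <= yPredInt$upperBound)
```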
Author(s):

Xichen Ding <rockingdingo@gmail.com>
References:

De Veaux R. D., Schumi J., Schweinsberg J., Ungar L. H., 1998, "Prediction intervals for neural networks via nonlinear regression", Technometrics 40(4): 273-282.

Chryssolouris G., Lee M., Ramsey A., 1996, "Confidence interval prediction for neural networks models", IEEE Trans. Neural Networks 7(1): 229-232.
See Also:

'neuralnet' package by Stefan Fritsch, Frauke Guenther.
'nnet' package by Brian Ripley, William Venables.
'RSNNS' package by Christoph Bergmeir, Jose M. Benitez.
transWeightListToVect, jacobian
Examples:

# Example 1: Using the nn object trained by the neuralnet package
set.seed(500)
library(MASS)
data <- Boston
maxs <- apply(data, 2, max)
mins <- apply(data, 2, min)
scaled <- as.data.frame(scale(data, center = mins, scale = maxs - mins)) # normalization
index <- sample(1:nrow(data),round(0.75*nrow(data)))
train_ <- scaled[index,]
test_ <- scaled[-index,]
library(neuralnet) # Training
n <- names(train_)
f <- as.formula(paste("medv ~", paste(n[!n %in% "medv"], collapse = " + ")))
nn <- neuralnet(f,data = train_,hidden = c(5,3),linear.output = FALSE)
plot(nn)
library(nnetpredint) # Getting prediction confidence interval
x <- train_[,-14]
y <- train_[,14]
newData <- test_[,-14]
# S3 generic method: Object of nn
yPredInt <- nnetPredInt(nn, x, y, newData)
print(yPredInt[1:20,])
# S3 default method for user defined weights input, without model object trained:
yFit <- c(nn$net.result[[1]])
nodeNum <- c(13,5,3,1)
m <- 3
wtsList <- nn$weights[[1]]
wts <- transWeightListToVect(wtsList,m)
yPredInt2 <- nnetPredInt(object = NULL, x, y, yFit, nodeNum, wts, newData, alpha = 0.05)
print(yPredInt2[1:20,])
# Compare with the predicted values from neuralnet's compute method
predValue <- compute(nn,newData)
print(matrix(predValue$net.result[1:20]))
# Example 2: Using the nnet object trained by nnet package
library(nnet)
xTrain <- rbind(cbind(runif(150,min = 0, max = 0.5),runif(150,min = 0, max = 0.5)) ,
cbind(runif(150,min = 0.5, max = 1),runif(150,min = 0.5, max = 1))
)
nObs <- dim(xTrain)[1]
yTrain <- 0.5 + 0.4 * sin(2* pi * xTrain %*% c(0.4,0.6)) +rnorm(nObs,mean = 0, sd = 0.05)
plot(xTrain %*% c(0.4,0.6),yTrain)
# Training nnet models
net <- nnet(yTrain ~ xTrain,size = 3, rang = 0.1,decay = 5e-4, maxit = 500)
yFit <- c(net$fitted.values)
nodeNum <- c(2,3,1)
wts <- net$wts
# New data for prediction intervals
library(nnetpredint)
newData <- cbind(seq(0,1,0.05),seq(0,1,0.05))
yTest <- 0.5 + 0.4 * sin(2* pi * newData %*% c(0.4,0.6))+rnorm(dim(newData)[1],
mean = 0, sd = 0.05)
# S3 generic method: Object of nnet
yPredInt <- nnetPredInt(net, xTrain, yTrain, newData)
print(yPredInt[1:20,])
# S3 default method: xTrain,yTrain,yFit,...
yPredInt2 <- nnetPredInt(object = NULL, xTrain, yTrain, yFit, node = nodeNum, wts = wts,
newData, alpha = 0.05, funName = 'sigmoid')
plot(newData %*% c(0.4,0.6),yTest,type = 'b')
lines(newData %*% c(0.4,0.6),yPredInt$yPredValue,type = 'b',col='blue')
lines(newData %*% c(0.4,0.6),yPredInt$lowerBound,type = 'b',col='red') # lower bound
lines(newData %*% c(0.4,0.6),yPredInt$upperBound,type = 'b',col='red') # upper bound
# Example 3: Using the rsnns object trained by RSNNS package
library(RSNNS)
data(iris)
#shuffle the vector
iris <- iris[sample(1:nrow(iris),length(1:nrow(iris))),1:ncol(iris)]
irisValues <- iris[,1:4]
irisTargets <- decodeClassLabels(iris[,5])[,'setosa']
iris <- splitForTrainingAndTest(irisValues, irisTargets, ratio=0.15)
iris <- normTrainingAndTestSet(iris)
model <- mlp(iris$inputsTrain, iris$targetsTrain, size=5, learnFuncParams=c(0.1),
maxit=50, inputsTest=iris$inputsTest, targetsTest=iris$targetsTest)
predictions <- predict(model,iris$inputsTest)
# Generating prediction intervals
library(nnetpredint)
# S3 Method for rsnns class prediction intervals
xTrain <- iris$inputsTrain
yTrain <- iris$targetsTrain
newData <- iris$inputsTest
yPredInt <- nnetPredInt(model, xTrain, yTrain, newData)
print(yPredInt[1:20,])
yPredValue lowerBound upperBound
1 0.5663201956 0.4526279335 0.6800124578
2 0.3247979892 0.2114939005 0.4381020779
3 0.3503461914 0.2370632101 0.4636291727
4 0.3640062380 0.2507050514 0.4773074247
5 0.2408687418 0.1273287685 0.3544087150
6 0.3237362052 0.2103489305 0.4371234799
7 0.3655341298 0.2522485954 0.4788196642
8 0.7158949005 0.6011202539 0.8306695471
9 0.4260779770 0.3125434663 0.5396124876
10 0.4091073391 0.2956169362 0.5225977420
11 0.4924698157 0.3789633836 0.6059762478
12 0.3813557944 0.2680704777 0.4946411112
13 0.4123831887 0.2988868753 0.5258795020
14 0.4260266210 0.3125489220 0.5395043200
15 0.3344342126 0.2210856123 0.4477828130
16 0.3313209298 0.2180321311 0.4446097284
17 0.4195196531 0.3060043560 0.5330349503
18 0.3493928305 0.2360790434 0.4627066175
19 0.4958960440 0.3824267898 0.6093652981
20 0.3939510022 0.2806655469 0.5072364574
yPredValue lowerBound upperBound
1 0.5663201956 0.4526279335 0.6800124578
2 0.3247979892 0.2114939005 0.4381020779
3 0.3503461914 0.2370632101 0.4636291727
4 0.3640062380 0.2507050514 0.4773074247
5 0.2408687418 0.1273287685 0.3544087150
6 0.3237362052 0.2103489305 0.4371234799
7 0.3655341298 0.2522485954 0.4788196642
8 0.7158949005 0.6011202539 0.8306695471
9 0.4260779770 0.3125434663 0.5396124876
10 0.4091073391 0.2956169362 0.5225977420
11 0.4924698157 0.3789633836 0.6059762478
12 0.3813557944 0.2680704777 0.4946411112
13 0.4123831887 0.2988868753 0.5258795020
14 0.4260266210 0.3125489220 0.5395043200
15 0.3344342126 0.2210856123 0.4477828130
16 0.3313209298 0.2180321311 0.4446097284
17 0.4195196531 0.3060043560 0.5330349503
18 0.3493928305 0.2360790434 0.4627066175
19 0.4958960440 0.3824267898 0.6093652981
20 0.3939510022 0.2806655469 0.5072364574
[,1]
[1,] 0.5663201956
[2,] 0.3247979892
[3,] 0.3503461914
[4,] 0.3640062380
[5,] 0.2408687418
[6,] 0.3237362052
[7,] 0.3655341298
[8,] 0.7158949005
[9,] 0.4260779770
[10,] 0.4091073391
[11,] 0.4924698157
[12,] 0.3813557944
[13,] 0.4123831887
[14,] 0.4260266210
[15,] 0.3344342126
[16,] 0.3313209298
[17,] 0.4195196531
[18,] 0.3493928305
[19,] 0.4958960440
[20,] 0.3939510022
# weights: 13
initial value 35.518610
iter 10 value 5.975754
iter 20 value 4.219194
iter 30 value 2.479148
iter 40 value 1.612448
iter 50 value 1.555230
iter 60 value 1.536024
iter 70 value 1.528618
iter 80 value 1.520480
iter 90 value 1.516834
iter 100 value 1.502994
iter 110 value 1.395167
iter 120 value 1.330245
iter 130 value 1.282907
iter 140 value 1.260298
iter 150 value 1.240242
iter 160 value 1.195158
iter 170 value 1.157350
iter 180 value 1.123102
iter 190 value 1.115954
iter 200 value 1.111449
iter 210 value 1.109435
iter 220 value 1.108695
iter 230 value 1.108432
iter 240 value 1.108289
iter 250 value 1.108260
iter 260 value 1.108232
iter 270 value 1.108220
final value 1.108217
converged
yPredValue lowerBound upperBound
1 0.5526369225 0.45270868376 0.6525651613
2 0.6426406527 0.54284503191 0.7424362735
3 0.7284377473 0.62893159857 0.8279438960
4 0.8002269444 0.70103913805 0.8994147508
5 0.8512318343 0.75227606847 0.9501876001
6 0.8785426599 0.77970035318 0.9773849667
7 0.8792767188 0.78042368240 0.9781297553
8 0.8434437227 0.74434112544 0.9425463199
9 0.7537304410 0.65395032535 0.8535105567
10 0.6168908086 0.51673396848 0.7170476488
11 0.4770999491 0.37729925078 0.5769006473
12 0.3510361184 0.25119488428 0.4508773525
13 0.2352782736 0.13542901875 0.3351275284
14 0.1515932438 0.05229423331 0.2508922543
15 0.1129646217 0.01403295308 0.2118962903
16 0.1089437743 0.01008929548 0.2077982531
17 0.1300320189 0.03109152089 0.2289725169
18 0.1729738890 0.07380790568 0.2721398723
19 0.2354297341 0.13592702931 0.3349324389
20 0.3121993614 0.21236004132 0.4120386816
yPredValue lowerBound upperBound
1 0.938525982879 0.83042194399 1.0466300218
2 0.007984647777 -0.10001871400 0.1159880096
3 0.011010979165 -0.09699389117 0.1190158495
4 0.055204760198 -0.05295680842 0.1633663288
5 0.011967363949 -0.09603835226 0.1199730802
6 0.040611809735 -0.06745291677 0.1486765362
7 0.943313420329 0.83523298785 1.0513938528
8 0.007269657426 -0.10073341453 0.1152727294
9 0.069752057294 -0.03856671213 0.1780708267
10 0.929602538231 0.82146887591 1.0377362006
11 0.926010980489 0.81782688655 1.0341950744
12 0.037452582871 -0.07061969711 0.1455248629
13 0.923301768225 0.81512824228 1.0314752942
14 0.008155134396 -0.09984847085 0.1161587396
15 0.934239774810 0.82612677099 1.0423527786
16 0.940842614688 0.83275121446 1.0489340149
17 0.036665378393 -0.07138342997 0.1447141868
18 0.933077199274 0.82493566876 1.0412187298
19 0.944333657632 0.83625883877 1.0524084765
20 0.937817505270 0.82971623007 1.0459187805