View source: R/NeuralNetTools_old.R
Description

Relative importance of input variables in neural networks as the sum of the product of raw input-hidden, hidden-output connection weights, proposed by Olden et al. 2004.
Usage

olden(mod_in, ...)
## Default S3 method:
olden(
mod_in,
x_names,
y_names,
out_var = NULL,
bar_plot = TRUE,
x_lab = NULL,
y_lab = NULL,
skip_wts = NULL,
...
)
## S3 method for class 'numeric'
olden(mod_in, struct, ...)
## S3 method for class 'nnet'
olden(mod_in, ...)
## S3 method for class 'mlp'
olden(mod_in, ...)
## S3 method for class 'nn'
olden(mod_in, ...)
## S3 method for class 'train'
olden(mod_in, ...)
Arguments

mod_in: input model object or a list of model weights as returned from neuralweights if using the default method

...: arguments passed to or from other methods

x_names: chr string of input variable names, obtained from the model object

y_names: chr string of response variable names, obtained from the model object

out_var: chr string indicating the response variable in the neural network object to be evaluated. Only one input is allowed for models with more than one response. Names must be of the form 'Y1', 'Y2', etc. if using numeric values as weight inputs for mod_in.

bar_plot: logical indicating if a ggplot object is returned (default TRUE), otherwise numeric values of variable importance are returned

x_lab: chr string of alternative names to be used for explanatory variables in the figure, default is taken from mod_in

y_lab: chr string of alternative names to be used for the response variable in the figure, default is taken from out_var

skip_wts: vector of skip-layer connection weights from neuralskips, for nnet models with skip-layer connections

struct: numeric vector equal in length to the number of layers in the network. Each number indicates the number of nodes in each layer, starting with the input and ending with the output. An arbitrary number of hidden layers can be included.
Details

This method is similar to Garson's algorithm (Garson 1991, modified by Goh 1995) in that the connection weights between layers of a neural network form the basis for determining variable importance. However, Olden et al. 2004 describe a connection weights algorithm that consistently outperformed Garson's algorithm in representing the true variable importance in simulated datasets. This ‘Olden’ method calculates variable importance as the product of the raw input-hidden and hidden-output connection weights between each input and output neuron, summed across all hidden neurons. An advantage of this approach is that the relative contribution of each connection weight is maintained in terms of both magnitude and sign, whereas Garson's algorithm considers only the absolute magnitude. For example, connection weights that change sign (e.g., positive to negative) between the input-hidden and hidden-output layers have a cancelling effect, whereas Garson's algorithm may provide misleading results based on the absolute magnitude. An additional advantage is that Olden's algorithm can evaluate neural networks with multiple hidden layers, whereas Garson's was developed for networks with a single hidden layer.
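As an informal illustration of this calculation (not the package's internal code), the following sketch computes the summed weight products by hand for a small 3-2-1 network. All weight values and variable names are made up for demonstration, and bias weights are ignored.

## by-hand sketch of the connection weights method for a 3-2-1 network
## (three inputs, two hidden nodes, one output); weights are illustrative only
w_ih <- matrix(
  c( 0.8, -0.4,   # input 1 to hidden 1 and hidden 2
    -0.6,  0.7,   # input 2 to hidden 1 and hidden 2
     0.2,  0.1),  # input 3 to hidden 1 and hidden 2
  nrow = 3, byrow = TRUE
)
w_ho <- c(1.2, -0.9)          # hidden 1 and hidden 2 to the output

## importance of each input: products of raw weights summed across hidden
## nodes, so both sign and magnitude are preserved
imp <- drop(w_ih %*% w_ho)
names(imp) <- c('X1', 'X2', 'X3')
imp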
The importance values assigned to each variable are in units that are based directly on the summed product of the connection weights. The actual values should only be interpreted based on relative sign and magnitude between explanatory variables. Comparisons between different models should not be made.
The Olden function also works with networks that have skip layers by adding the input-output connection weights to the final summed product of all input-hidden and hidden-output connections. This was not described in the original method so interpret with caution.
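Continuing the hand-worked sketch above (again with made-up numbers), a skip-layer network simply adds an assumed direct input-output weight for each input to that sum:

## assumed direct input-output (skip) weights for inputs 1-3
w_skip <- c(0.3, 0.0, -0.1)
imp_skip <- drop(w_ih %*% w_ho) + w_skip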
By default, the results are shown only for the first response variable for networks with multiple output nodes. The plotted response variable can be changed with out_var.
Value

A ggplot object for plotting if bar_plot = TRUE, otherwise a data.frame of relative importance values for each input variable.
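For example, the importance values can be requested directly by setting bar_plot = FALSE; this usage sketch reuses the nnet model from the Examples below and assumes bar_plot is passed through to the default method.

library(nnet)
data(neuraldat)
set.seed(123)
mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5)
olden(mod, bar_plot = FALSE)  # data.frame of importance values instead of a plot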
References

Beck, M.W. 2018. NeuralNetTools: Visualization and Analysis Tools for Neural Networks. Journal of Statistical Software. 85(11):1-20.
Garson, G.D. 1991. Interpreting neural network connection weights. Artificial Intelligence Expert. 6(4):46-51.
Goh, A.T.C. 1995. Back-propagation neural networks for modeling complex systems. Artificial Intelligence in Engineering. 9(3):143-151.
Olden, J.D., Jackson, D.A. 2002. Illuminating the 'black-box': a randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling. 154:135-150.
Olden, J.D., Joy, M.K., Death, R.G. 2004. An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecological Modelling. 178:389-397.
Examples

## using numeric input
wts_in <- c(13.12, 1.49, 0.16, -0.11, -0.19, -0.16, 0.56, -0.52, 0.81)
struct <- c(2, 2, 1) #two inputs, two hidden, one output
olden(wts_in, struct)
## using nnet
library(nnet)
data(neuraldat)
set.seed(123)
mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5)
olden(mod)
## Not run:
## View the difference for a model w/ skip layers
set.seed(123)
mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5, skip = TRUE)
olden(mod)
## using RSNNS, no bias layers
library(RSNNS)
x <- neuraldat[, c('X1', 'X2', 'X3')]
y <- neuraldat[, 'Y1']
mod <- mlp(x, y, size = 5)
olden(mod)
## using neuralnet
library(neuralnet)
mod <- neuralnet(Y1 ~ X1 + X2 + X3, data = neuraldat, hidden = 5)
olden(mod)
## using caret
library(caret)
mod <- train(Y1 ~ X1 + X2 + X3, method = 'nnet', data = neuraldat, linout = TRUE)
olden(mod)
## multiple hidden layers
x <- neuraldat[, c('X1', 'X2', 'X3')]
y <- neuraldat[, 'Y1']
mod <- mlp(x, y, size = c(5, 7, 6), linOut = TRUE)
olden(mod)
## End(Not run)