nn2poly: R Documentation
Implements the main NN2Poly algorithm to obtain a polynomial representation of a trained neural network, using its weights and the Taylor expansion of its activation functions.
nn2poly(
  object,
  max_order = 2,
  keep_layers = FALSE,
  taylor_orders = 8,
  ...,
  all_partitions = NULL
)
object
An object for which the computation of the NN2Poly algorithm is desired. Currently supports models from the tensorflow/keras and torch/luz deep learning frameworks. It also supports a named list as input, which allows a model from any other source to be introduced by hand: each element of the list is the weight matrix of a layer, named after the activation function used at that layer. At any layer, the weight matrix is expected to have one row per neuron of the previous layer plus one for the bias, and one column per neuron of that layer (a minimal sketch of this format is given after the argument descriptions; see also the examples below).

max_order
Integer that determines the maximum order of the terms kept in the final polynomial; higher-order terms arising in the computation are discarded. Defaults to 2.

keep_layers
Boolean that determines if all the polynomials computed at the internal layers have to be stored and given in the output (TRUE), or if only the polynomial of the last layer is needed (FALSE, the default).

taylor_orders
Integer that sets the order at which the Taylor expansions of the activation functions are truncated. Defaults to 8.

...
Ignored.

all_partitions
Optional argument containing the needed multipartitions as a list of lists of lists. If set to NULL (the default), the required multipartitions are computed internally; providing precomputed ones avoids repeating that computation.
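A minimal sketch of the named-list format described above for object (illustrative only; it assumes the "tanh" and "linear" activations shown in the examples below, and that the bias adds one extra row to each weight matrix, as in those examples):

# One hidden layer with 3 neurons fed by 2 inputs (+ bias), followed by
# 1 "linear" output unit: each weight matrix has one row per incoming
# neuron plus one for the bias, and one column per neuron of the layer.
w_hidden <- matrix(rnorm(3 * 3), nrow = 3, ncol = 3)
w_output <- matrix(rnorm(4 * 1), nrow = 4, ncol = 1)
small_nn <- list("tanh"   = w_hidden,
                 "linear" = w_output)
# Illustrative call with a non-default Taylor truncation order:
poly_small <- nn2poly(small_nn, max_order = 2, taylor_orders = 5)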
Returns an object of class nn2poly.
If keep_layers = FALSE (the default case), it returns a list with two items:

An item named labels that is a list of integer vectors. Each vector represents a monomial of the polynomial, where each integer in the vector denotes one occurrence of one of the original variables in that term. As an example, the vector c(1,1,2) represents the term x_1^2*x_2. Note that the variables are numbered from 1 to p, and the intercept is represented by 0.

An item named values that contains a matrix in which each column holds the coefficients of the polynomial associated with an output neuron. That is, if the neural network has a single output unit, the matrix values will have a single column, and if it has multiple output units, it will have several columns. Each row is the coefficient associated with the label in the same position in the labels list.
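For illustration, a sketch of how the labels and values items could be turned into readable monomials (this helper is not part of the package; it assumes a result final_poly as obtained in the examples below, and that the intercept label is the single integer 0):

# Print each monomial of the polynomial of one output unit together with
# its coefficient; a label c(1, 1, 2) is shown as "x1^2*x2".
print_poly <- function(poly, output = 1) {
  for (i in seq_along(poly$labels)) {
    label <- poly$labels[[i]]
    if (length(label) == 1 && label == 0) {
      term <- "(intercept)"
    } else {
      counts <- table(label)
      exps <- ifelse(counts > 1, paste0("^", counts), "")
      term <- paste0("x", names(counts), exps, collapse = "*")
    }
    cat(sprintf("%10.4f  %s\n", poly$values[i, output], term))
  }
}
# print_poly(final_poly)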
If keep_layers = TRUE, it returns a list whose length is the number of layers (each element named layer_i), where each element is another list with input and output elements. Each of those elements contains an item as explained before. The output item of the last layer is the same element obtained when keep_layers = FALSE.

The polynomials obtained at the hidden layers are not needed to represent the NN, but they can be used to explore other insights from the network.
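A brief sketch of how this per-layer output could be inspected (assuming an nn_object built as in the examples below and the layer_i naming described above):

# Each element corresponds to one layer and holds the polynomials entering
# ("input") and leaving ("output") that layer; the last layer's "output"
# matches the keep_layers = FALSE result.
all_layers <- nn2poly(nn_object, max_order = 3, keep_layers = TRUE)
str(all_layers, max.level = 2)
# all_layers$layer_2$output   # polynomials after the second layer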
Predict method for nn2poly output: predict.nn2poly().
# Build a NN structure with random weights, with 2 (+ bias) inputs,
# 4 (+ bias) neurons in the first hidden layer with "tanh" activation
# function, 4 (+ bias) neurons in the second hidden layer with "softplus",
# and 1 "linear" output unit
weights_layer_1 <- matrix(rnorm(12), nrow = 3, ncol = 4)
weights_layer_2 <- matrix(rnorm(20), nrow = 5, ncol = 4)
weights_layer_3 <- matrix(rnorm(5), nrow = 5, ncol = 1)

# Set it as a list with activation functions as names
nn_object <- list("tanh" = weights_layer_1,
                  "softplus" = weights_layer_2,
                  "linear" = weights_layer_3)
# Obtain the polynomial representation (order = 3) of that neural network
final_poly <- nn2poly(nn_object, max_order = 3)
# Change the last layer to have 3 outputs (as in a multiclass classification
# problem)
weights_layer_4 <- matrix(rnorm(15), nrow = 5, ncol = 3)

# Set it as a list with activation functions as names
nn_object <- list("tanh" = weights_layer_1,
                  "softplus" = weights_layer_2,
                  "linear" = weights_layer_4)

# Obtain the polynomial representation of that neural network
# In this case the output is formed by several polynomials with the same
# structure but different coefficient values
final_poly <- nn2poly(nn_object, max_order = 3)
# Polynomial representation of each hidden neuron is given by
final_poly <- nn2poly(nn_object, max_order = 3, keep_layers = TRUE)
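# Finally, a sketch (not taken from the package documentation) of evaluating
# the obtained polynomial on new data through predict.nn2poly(), assuming it
# accepts a numeric matrix with one column per original input variable:
final_poly <- nn2poly(nn_object, max_order = 3)
newdata <- matrix(rnorm(10 * 2), ncol = 2)
predictions <- predict(final_poly, newdata)
head(predictions)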