tensorA.package: The tensorA package for tensor arithmetic


Description

tensorA stands for "tensor arithmetic". A tensor is a mathematical generalization of vectors and matrices with many applications in physics, geometry and the statistics of vector-valued data. However, the package is also useful in any case where computations on sequences of matrices, vectors or even tensors are involved.

Details

Package: tensorA
Type: Package
Version: 0.1
Date: 2006-06-08
License: GPL Version 2 or newer

The tensorA package is made to allow programming with tensors in R on the same level of abstraction as we know from matrices. It provides many of the mathematical operations common in tensor arithmetic, including the whole tensor calculus of covariant and contravariant indices, naming of indices, sequencing of indices, decompositions of tensors, the Einstein and Riemann summing conventions, and vectorized computations on datasets of tensors, just like the well-known vectorization of numbers in R. It provides tools to write tensor formulae very close to their paper form and to handle tensors of arbitrary level with simple programs.
The whole documentation of the package is best read in PDF or DVI format, since it contains complicated mathematical formulae with multi-indices.

Simply speaking, a tensor (see to.tensor) is just a multidimensional array A[,,]. The number of indices (i.e. length(dim(A))) is called the level of the tensor (see level.tensor). Mathematically, a tensor is denoted by a core symbol (e.g. A) with multiple indices, e.g.

A_{ijk}

The indices i, j, k can be seen as names for the dimensions and as integer numbers giving the respective index into the array. However, the tensor is an algebraic object with many algebraic operations defined on it, which are also of relevance for programming, e.g. in the parallel treatment of multiple linear equation systems.
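
For instance, a minimal sketch of creating a tensor and querying its level (the index names i, j, k are chosen here purely for illustration):

library(tensorA)
A <- to.tensor(1:24, c(i=2, j=3, k=4))   # a level 3 tensor A[i,j,k]
level.tensor(A)                          # 3, the number of indices
dim(A)                                   # named dimensions: i=2, j=3, k=4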

To understand the package we need to understand tensors, including their mathematical origin, the corresponding calculus, notation and basic operations.
One mathematical interpretation of a tensor, the one most relevant for physics, is that of a multilinear form of level(A) vectors, i.e. a function mapping level(A) many vectors to the real or complex numbers which is linear with respect to each of its arguments. E.g. the stress tensor maps the two vectors "plane face direction" and "force direction" to the actual force.
Row vectors are a special case of that, and likewise column vectors as linear forms on row vectors. Matrices are bilinear forms of a row vector and a column vector. Thus vectors and matrices are examples of tensors of level 1 and 2.

Another interpretation of a tensor is that of a linear mapping, quite like a matrix, but from one tensor space (e.g. the space of matrices or vectors seen as tensors) to another tensor space (e.g. again a space of matrices). An example of that is the Hooke elasticity tensor, mapping the strain tensor (i.e. a matrix describing the local deformation) to the stress tensor (i.e. a matrix describing the local forces). The Hooke tensor is a tensor of level 4. Statistically relevant tensors of level 4 are e.g. covariances of matrices, mapping two linear forms (i.e. two level 2 tensors) on observed matrices to their covariance. The mapping is performed with the tensor product, which is not unlike a matrix product, but more general. Let A denote a matrix and b a vector; we would write r = Ab for the matrix product and r <- A %*% b in R, which is defined as:

r_i = ∑_{j=1}^{j_{\max}} A_{ij}b_j

We know that we have to sum over the j dimension, since the matrix multiplication rule says "row times column". Since a tensor can have more than two indices, there is no row or column specified, and we need to specify the inner product differently. To do this, the Einstein notation always writes the tensor with its indices, r_i = A_{ij}b_j, and according to the Einstein summing rule the entries of r_i are given by an implicit sum over all indices which show up twice in this notation:

r_i=∑_{j=1}^{j_{\max}} A_{ij}b_j

This notation allows for a multitude of other products: A_{ij}b_i = t(A)b, A_{ij}b_k = outer(A,b), A_{ii}b_j = trace(A)b, with equal simplicity and without any additional functions. More complicated products involving tensors of level higher than two cannot even be formulated in pure matrix algebra without re-dimensioning of arrays, e.g. b_i b_j b_k or A_{ijk}b_j. The Einstein summing rule is implemented in einstein.tensor and supported by the index sequencing functions $.tensor and |.tensor. A general multiplication, allowing to identify and sum over any two indices, is implemented in trace.tensor when the indices are in the same tensor, and in mul.tensor when the indices to sum over are in different tensors; see the sketch below.
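
For example, the matrix product r_i = A_{ij}b_j can be written as follows (a minimal sketch; the index names i, j are chosen for illustration):

library(tensorA)
A <- to.tensor(1:6, c(i=2, j=3))   # a level 2 tensor (a matrix)
b <- to.tensor(1:3, c(j=3))        # a level 1 tensor (a vector)
A %e% b                            # Einstein product: sums over the shared index "j"
mul.tensor(A, "j", b, "j")         # the same contraction with explicitly named indices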
Tensors with the same level and dimensions (identified by name and dimension) can also be added like matrices, according to the rule that the values with the same combination of index values are added (see add.tensor). The implementation takes care of the sequence of the indices and rearranges them accordingly to match dimensions with the same name; a sketch follows after the formulae. E.g. the tensor addition

E_{ijk}=A_{ijk}+B_{kji}

has exactly the effect expressed by the same formula read entrywise: values with equal index values are added, regardless of the sequence of the indices. The same reading also holds for the more surprising

E_{ijk}=A_{ijk}+B_{kj}
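
where, assuming add.tensor recycles missing dimensions, B is repeated along the dimension i it lacks. A minimal sketch of index-aware addition (index names chosen for illustration):

library(tensorA)
A <- to.tensor(1:8, c(i=2, j=2, k=2))
B <- to.tensor(8:1, c(k=2, j=2, i=2))   # same index names, different sequence
A + B                                   # indices are matched by name, not by position (see add.tensor)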


Like a matrix, a tensor can also be seen as a mapping from one tensor space to another:

A_{i_1...i_d j_1...j_e}x_{j_1...j_e}=b_{i_1...i_d}

In this reading, all the standard matrix computations and decompositions get a tensorial interpretation and generalization. The package provides some of these (see svd.tensor).
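
For instance, a sketch of a tensorial singular value decomposition, assuming svd.tensor takes the names of the row and column indices (the index names and random data here are illustrative):

library(tensorA)
A <- to.tensor(rnorm(24), c(i=2, j=3, k=4))
s <- svd.tensor(A, c("i","j"), "k")   # treat (i,j) as the row indices and k as the column index
names(s)                              # components u, d, v, assumed analogous to base svd()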
Another interpretation of tensors is as a sequence of tensors of lower level. E.g. a data matrix is seen as a sequence of vectors in a multivariate dataset. The tensorA library provides means to do computations on these sequences of tensors in parallel, just like we can do parallel computations on sequences of numbers in R. This is typically done via the by= argument present in most functions, which gives the index enumerating the elements of the sequence.
E.g. if we have a sequence V_{ijd} of variance matrices V_{ij} of some sequence v_{id} of vectors, and we would like to transform the vectors with some matrix M_{i'i}, we would get the sequence of transformed variances by V_{ijd} M_{i'i} M_{j'j}. However, if the M_{ki} are different for each of the elements in the sequence, we would have stored them in a tensor M_{kid} and would have to replace M_{kid} by M_{kidd'} = M_{kid} if d = d' and zero otherwise. We can then get our result by

V_{ijd}M_{i'id'd}M_{j'jd'd''}

and we would have a by-dimension of by="d". These operations are not strictly mathematical tensor operations, but generalizations of the vectorization approach of R; a sketch follows below. This is also closely related to diagmul.tensor and diag.tensor.
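
A minimal sketch of such a parallel computation (the index names and data are hypothetical; by="d" enumerates the sequence):

library(tensorA)
V <- to.tensor(rep(c(1,0,0,1), 3), c(i=2, j=2, d=3))   # a sequence of three 2x2 identity matrices
M <- to.tensor(rnorm(12), c(k=2, i=2, d=3))            # one transformation matrix per element d
mul.tensor(V, "i", M, "i", by="d")                     # contract over i separately for each d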
To complicate things, the Einstein rule is only valid for tensors represented with respect to an orthonormal basis. Otherwise tensors get lower and upper indices like

A_{i\cdot k}^{\cdot j \cdot}

for representation in the covariant and contravariant forms of the basis. In this case the Riemann summing rule applies, which only sums over pairs of the same index where one is in the lower and one is in the upper position. The contravariant form is represented with indices prefixed by ^.
The state of being covariant or contravariant can be changed by the dragging rule, which allows one to switch between both states through multiplication with the geometry tensor g_i^j. This can be done through drag.tensor.
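
A minimal sketch of dragging an index (an identity geometry tensor is chosen here purely for illustration):

library(tensorA)
A <- to.tensor(1:4, c(i=2, j=2))
g <- to.tensor(c(1,0,0,1), c(i=2, "i'"=2))   # geometry (metric) tensor for the index i
drag.tensor(A, g, "i")                       # switches i between covariant and contravariant form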

Author(s)

K. Gerald van den Boogaart <boogaart@uni-greifswald.de>

See Also

to.tensor, mul.tensor, einstein.tensor, add.tensor, [[.tensor, |.tensor

Examples

library(tensorA)

A <- to.tensor( 1:20, c(a=2,b=2,c=5) )       # a 2x2x5 tensor with indices a, b, c
A
ftable(A)                                    # flat table display of the tensor
B <- to.tensor( c(0,1,1,0), c(a=2,"a'"=2) )  # a geometry tensor for the index a
A %e% B                                      # Einstein product: sums over the shared index "a"
drag.tensor( A, B, c("a","b") )              # drag the indices a and b with the geometry B
A %e% one.tensor(c(c=5))/5                   # a mean of matrices
reorder.tensor(A, c("c","b","a"))            # change the sequence of the indices
A - reorder.tensor(A, c("c","b","a"))        # =0 since the sequence of indices is irrelevant
inv.tensor(A, "a", by="c")                   # invert each 2x2 matrix, in parallel over c
