A family of utilities to control the automatic block size (or length) and shape.
Arguments:

size: The auto block size (automatic block size) in bytes. The auto block size is set to 100 Mb at package startup and can be reset anytime to this value by calling setAutoBlockSize() with no argument. Note that, for array data whose elements have a fixed size in memory, block size and block length are related but not identical:

    block size != block length
    block length = number of array elements in a block
    block size   = block length * size of an individual element in memory

For example, for an integer array, the block size (in bytes) is going to be 4 x block length. For a numeric array (i.e. type(x) == "double"), it's going to be 8 x block length.

type: A string specifying the type of the array data.

shape: A string specifying the auto block shape (automatic block shape). The auto block shape is set to "hypercube" at package startup and can be reset anytime to this value by calling setAutoBlockShape() with no argument.
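To make the size/length relationship concrete, here is a small base R sketch. The elt_size table and the block_length_for helper are illustrative only, not part of the DelayedArray API:

```r
## Illustrative only: element sizes (in bytes) for common R types, and a
## hypothetical helper mirroring block size = block length * element size.
elt_size <- c(raw=1, logical=4, integer=4, double=8, complex=16)
block_length_for <- function(block_size, type) block_size %/% elt_size[[type]]

block_length_for(1e8, "integer")  # 100 Mb of integers -> 25000000 elements
block_length_for(1e8, "double")   # 100 Mb of doubles  -> 12500000 elements
```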
In its current form, block processing in the DelayedArray package must decide the geometry of the blocks before starting the walk on the blocks. It does this based on several criteria. Two of them are:

- The auto block size: the maximum size (in bytes) of a block once loaded in memory.
- The type() of the array data (e.g. "integer" or "double").
The auto block size setting and type(x) control the maximum length of the blocks. Other criteria control their shape. So, for example, if you set the auto block size to 8 GB, this will cap the length of the blocks to 2e9 if your DelayedArray object x is of type "integer", and to 1e9 if it's of type "double".
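The 8 GB example above can be expressed with the getters and setters documented here. A sketch, assuming the DelayedArray package (Bioconductor) is installed:

```r
## Sketch: the 8 GB cap from the text, via the DelayedArray API.
## Assumes the DelayedArray package is installed.
library(DelayedArray)

setAutoBlockSize(8e9)           # cap block size at 8 GB
getAutoBlockLength("integer")   # 2e9 (8e9 bytes / 4 bytes per element)
getAutoBlockLength("double")    # 1e9 (8e9 bytes / 8 bytes per element)
setAutoBlockSize()              # restore the 100 Mb default
```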
Note that this simple relationship between block size and block length assumes that blocks are loaded in memory as ordinary (a.k.a. dense) matrices or arrays. With sparse blocks, all bets are off. But the max block length is always taken to be the auto block size divided by the size of an individual element, regardless of whether the blocks are going to be loaded as dense or sparse arrays. If they are going to be loaded as sparse arrays, their memory footprint is very likely to be smaller than if they were loaded as dense arrays, so this is safe (although probably not optimal).
It's important to keep in mind that the auto block size setting is a simple way for the user to put a cap on the memory footprint of the blocks. Nothing more. In particular, it doesn't control the maximum amount of memory used by the block processing algorithm as a whole. Other variables can dramatically impact memory usage, such as parallelization (where more than one block is loaded in memory at any given time), what the algorithm is doing with the blocks (e.g. an algorithm that collects and combines all the blocks will effectively load the entire array data in memory), what delayed operations are on x, etc. It would be awesome to have a way to control the maximum amount of memory used by a block processing algorithm as a whole, but we don't know how to do that.
Value:

getAutoBlockSize: The current auto block size in bytes as a single numeric value.

setAutoBlockSize: The new auto block size in bytes as an invisible single numeric value.

getAutoBlockLength: The auto block length as a single integer value.

getAutoBlockShape: The current auto block shape as a single string.

setAutoBlockShape: The new auto block shape as an invisible single string.
See also:

- defaultAutoGrid and family, to generate automatic grids to use for block processing of array-like objects.
- blockApply and family, for convenient block processing of an array-like object.
- The makeCappedVolumeBox utility, to make capped volume boxes.
Examples:

getAutoBlockSize()

getAutoBlockLength("double")
getAutoBlockLength("integer")
getAutoBlockLength("logical")
getAutoBlockLength("raw")

m <- matrix(runif(600), ncol=12)
setAutoBlockSize(140)
getAutoBlockLength(type(m))
defaultAutoGrid(m)
lengths(defaultAutoGrid(m))
dims(defaultAutoGrid(m))

getAutoBlockShape()
setAutoBlockShape("scale")
defaultAutoGrid(m)
lengths(defaultAutoGrid(m))
dims(defaultAutoGrid(m))

## Reset the auto block size and shape to factory settings:
setAutoBlockSize()
setAutoBlockShape()