hypervolume_set: Set operations (intersection / union / unique components)

View source: R/hypervolume_set.R

Description

Computes the intersection, union, and unique components of two hypervolumes.


Usage

hypervolume_set(hv1, hv2, num.points.max = NULL,
  verbose = TRUE, check.memory = TRUE, distance.factor = 1)



Arguments

hv1
The first n-dimensional hypervolume.

hv2
The second n-dimensional hypervolume.

num.points.max
Maximum number of random points to use for set operations. If NULL, defaults to 10^(3+sqrt(n)), where n is the dimensionality of the input hypervolumes. Note that this default value has been increased by a factor of 10 since the 1.2 release of this package.

verbose
Logical value; if TRUE, prints diagnostic output.

check.memory
Logical value; if TRUE, returns information about expected memory usage instead of performing the set operations.

distance.factor
Numeric value; multiplicative factor applied to the critical distance for all inclusion tests (see below). Changing this parameter is not recommended.
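As an aside, the default point budget described above can be computed directly. A minimal sketch (the `ceiling` rounding is an assumption for illustration, not package code):

```r
# Default num.points.max per the documentation: 10^(3 + sqrt(n)),
# where n is the dimensionality of the input hypervolumes.
default_num_points <- function(n) {
  ceiling(10^(3 + sqrt(n)))
}

sapply(c(1, 2, 4), default_num_points)
# n = 1 -> 10^4 = 10000; n = 4 -> 10^5 = 100000
```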


Details

Uses an inclusion-test approach to identify points in the first hypervolume that are or are not within the second hypervolume, and vice versa. Each random point of one hypervolume is tested for being within a critical distance of at least one random point of the other hypervolume.

The intersection is the points in both hypervolumes, the union those in either hypervolume, and the unique components the points in one hypervolume but not the other.

If you have more than two hypervolumes and wish to calculate only an intersection, consider instead using hypervolume_set_n_intersection rather than iteratively applying this function.

By default, the function runs with check.memory=TRUE, which provides an estimate of the computational cost of the set operations; re-run with check.memory=FALSE once the cost is acceptable. The algorithm's memory and time costs scale quadratically with the number of input points, so large datasets can incur disproportionately high costs. This check is intended to prevent accidental allocation of very large amounts of memory.
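The quadratic scaling can be made concrete: with a naive all-pairs inclusion test, the number of distance computations is the product of the two point counts. A hypothetical helper (not the package's internal code):

```r
# Expected pairwise comparisons between two point sets; this product is
# what makes memory and time costs grow quadratically with point count.
expected_comparisons <- function(n1, n2) as.numeric(n1) * as.numeric(n2)

expected_comparisons(1e3, 1e3)  # 1e6: usually manageable
expected_comparisons(1e5, 1e5)  # 1e10: likely prohibitive
```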

The computation is actually performed on a random sample drawn from both input hypervolumes. Each sample is constrained to the same point density: the minimum of the point density of each input hypervolume and the density implied by spreading num.points.max points over each input hypervolume's volume.
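The density-matching step described above can be sketched as follows (an illustration only; the function name and rounding are hypothetical):

```r
# Common point density used for resampling: the minimum of each input's
# own density (points / volume) and the density implied by capping each
# input at num.points.max points.
common_density <- function(n1, v1, n2, v2, num.points.max) {
  min(n1 / v1, n2 / v2, num.points.max / v1, num.points.max / v2)
}

# e.g. hv1: 5000 points, volume 10; hv2: 8000 points, volume 20; cap 10000
d <- common_density(5000, 10, 8000, 20, 10000)  # limited by hv2's density: 400
# Resample each hypervolume down to density * volume points
c(round(d * 10), round(d * 20))  # 4000 and 8000 points respectively
```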

Because this algorithm is based on distances calculated between the distributions of random points, the critical distance (point density ^ (-1/n)) can be scaled by a user-specified factor to obtain more or less liberal estimates (distance.factor greater than or less than 1, respectively).
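The critical distance and the inclusion test it drives can be sketched in a few lines of base R. This is an illustration of the idea, not the package's optimized implementation:

```r
# Critical distance: point_density^(-1/n), scaled by distance.factor.
critical_distance <- function(point_density, n, distance.factor = 1) {
  distance.factor * point_density^(-1/n)
}

# Inclusion test: is each row of 'points' within the critical distance
# of at least one row of 'reference'? (Naive all-pairs version.)
included <- function(points, reference, cutoff) {
  apply(points, 1, function(p) {
    d2 <- rowSums(sweep(reference, 2, p)^2)  # squared distances to p
    any(d2 <= cutoff^2)
  })
}

set.seed(1)
a <- matrix(runif(200), ncol = 2)        # random points from "hypervolume" A
b <- matrix(runif(200), ncol = 2) + 0.5  # shifted point cloud for B
cutoff <- critical_distance(100, n = 2)  # density of 100 points per unit volume
mean(included(a, b, cutoff))  # fraction of A's points falling inside B
```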


Value

If check.memory is FALSE, returns a HypervolumeList object with six items in its HVList slot:


HV1
The input hypervolume hv1.

HV2
The input hypervolume hv2.

Intersection
The intersection of hv1 and hv2.

Union
The union of hv1 and hv2.

Unique_1
The unique component of hv1 relative to hv2.

Unique_2
The unique component of hv2 relative to hv1.

Note that the output hypervolumes will have lower random point densities than the input hypervolumes.

You may find it useful to define a Jaccard-type fractional overlap between hv1 and hv2 as hv_set@HVList$Intersection@Volume / hv_set@HVList$Union@Volume.
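For example, as a standalone calculation (the volumes below are placeholders; in practice use hv_set@HVList$Intersection@Volume and hv_set@HVList$Union@Volume):

```r
# Jaccard-type fractional overlap from intersection and union volumes.
jaccard_overlap <- function(v_intersection, v_union) v_intersection / v_union

jaccard_overlap(2.5, 10)  # 0.25: ranges from 0 (disjoint) to 1 (identical)
```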

If check.memory is TRUE, instead returns a scalar with the expected number of pairwise comparisons.

If one of the input hypervolumes has no random points, returns NA with a warning.

See Also

hypervolume_set_n_intersection

Examples

data(penguins)
penguins_no_na = as.data.frame(na.omit(penguins))
penguins_adelie = penguins_no_na[penguins_no_na$species=="Adelie",
                    c("bill_length_mm","bill_depth_mm","flipper_length_mm")]
penguins_chinstrap = penguins_no_na[penguins_no_na$species=="Chinstrap",
                    c("bill_length_mm","bill_depth_mm","flipper_length_mm")]

hv1 = hypervolume_box(penguins_adelie,name='Adelie')
hv2 = hypervolume_box(penguins_chinstrap,name='Chinstrap')

hv_set <- hypervolume_set(hv1, hv2, check.memory=FALSE)

# examine volumes of each set component
get_volume(hv_set)
hypervolume documentation built on May 29, 2024, 8:19 a.m.