Reduce the number of random points in a hypervolume

Many hypervolume algorithms have computational complexities that scale with the number of random points used to characterize a hypervolume (`@RandomUniformPointsThresholded`). This value can be reduced to improve runtime at the cost of lower resolution.

Usage

```
hypervolume_thin(hv, factor = NULL, npoints = NULL)
```

Arguments

- `hv`: An object of class `Hypervolume`.
- `factor`: A number in (0,1) giving the fraction of random points to keep.
- `npoints`: The number of random points to keep.

Either `factor` or `npoints` (but not both) must be specified.

Value

A `Hypervolume` object.

Examples

```
library(hypervolume)

data(iris)
hv1 = hypervolume(subset(iris, Species == "setosa")[, 1:4], bandwidth = 0.2)
# downsample to 1000 random points
hv1_thinned = hypervolume_thin(hv1, npoints = 1000)
```
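Conceptually, thinning amounts to subsampling the matrix of random points that characterizes the hypervolume. The sketch below is a hypothetical base-R helper (`thin_points` is not part of the package) illustrating the `factor`/`npoints` semantics, including the requirement that exactly one of the two be given:

```r
# Hypothetical sketch of the thinning operation, NOT the package's
# internal code: subsample the rows of a random-point matrix.
thin_points <- function(points, factor = NULL, npoints = NULL) {
  # exactly one of 'factor' or 'npoints' must be supplied
  if (is.null(factor) == is.null(npoints)) {
    stop("Specify exactly one of 'factor' or 'npoints'.")
  }
  n <- nrow(points)
  keep <- if (!is.null(factor)) floor(factor * n) else min(npoints, n)
  points[sample(n, keep), , drop = FALSE]
}

# 5000 uniform random points in 4 dimensions
pts <- matrix(runif(5000 * 4), ncol = 4)
nrow(thin_points(pts, npoints = 1000))  # 1000 rows kept
nrow(thin_points(pts, factor = 0.1))    # 500 rows kept
```

Because the kept rows are drawn uniformly at random, the thinned point cloud remains an unbiased (if coarser) characterization of the same region.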
