initiateSpataObjectVisiumHD {SPATA2} | R Documentation

Initiate SPATA2 object from platform VisiumHD

Description

This function initiates a SPATA2 object with data generated using the 10x Genomics VisiumHD platform.

Usage
initiateSpataObjectVisiumHD(
sample_name,
directory_visium,
square_res = "16um",
mtr = "filtered",
genes = NULL,
workers = 1,
batch_size = 1000,
img_active = "lowres",
img_ref = "hires",
resize_images = NULL,
unload = TRUE,
verbose = TRUE
)
Arguments

sample_name
Character. The name of the sample.

directory_visium
Character. The directory containing the Visium output files.

square_res
Character. The square resolution from which to load the data. While c('16um', '8um', '2um') are the default resolutions, other values are possible as long as they meet the requirements described in the Details section.

mtr
Character. Specifies which matrix to use, either "filtered" or "raw". Default is "filtered".

genes
Character or NULL. If a character vector of gene names is provided, the counts are filtered to these genes before processing (see Details). Default is NULL.

workers
Integer specifying the number of parallel workers to use for processing. Default is 1.

batch_size
Integer specifying the number of spatial spots to process in each batch. Default is 1000.

img_active
Character. The active image to use, either "lowres" or "hires". Default is "lowres".

img_ref
Character. The reference image to use, either "lowres" or "hires". Default is "hires".

resize_images
A named list of numeric values between 0-1 used to resize the respective image as indicated by the slot name, e.g. resize_images = list(hires = 0.5) would scale the high-resolution image to 50% of its original size. Default is NULL.

unload
Logical value. Default is TRUE.

verbose
Logical. If TRUE, informative messages regarding the computational progress are printed. (Warning messages will always be printed.)
Details

The function requires a directory containing the output files from a 10x Genomics VisiumHD experiment, specified with the argument directory_visium. This directory (below denoted as ~) must include the following sub-directories:

~/binned_outputs: A folder with the following subdirectories.

~/binned_outputs/square_002um: The folder containing the data for square_res = '2um'.

~/binned_outputs/square_008um: The folder containing the data for square_res = '8um'.

~/binned_outputs/square_016um: The folder containing the data for square_res = '16um'.

Depending on your input for square_res, only the corresponding subfolder is required. This subfolder should contain the following files and sub-directories:

~/binned_outputs/<square_res>/filtered_feature_bc_matrix.h5 or ~/binned_outputs/<square_res>/raw_feature_bc_matrix.h5: The HDF5 file containing the filtered or raw feature-barcode matrix, respectively.

~/binned_outputs/<square_res>/spatial/tissue_lowres_image.png or ~/binned_outputs/<square_res>/spatial/tissue_hires_image.png: The low-resolution or high-resolution tissue image.

~/binned_outputs/<square_res>/spatial/scalefactors_json.json: A JSON file containing the scale factors for the images.

~/binned_outputs/<square_res>/spatial/tissue_positions.parquet: A .parquet file containing the tissue positions and spatial coordinates.
The function will check for these files and process them to create a SPATA2
object. It reads the count matrix, loads the spatial data,
and initializes the SPATA2
object with the necessary metadata and settings.
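A minimal call might look like the following sketch. The sample name and the directory path are placeholders for your own VisiumHD run; the directory must contain the binned_outputs folder described above.

library(SPATA2)

# Minimal sketch: initiate a SPATA2 object from the 16um bins of a VisiumHD run.
# "sample1" and "~/visium_hd_run" are placeholders for your own data.
object <- initiateSpataObjectVisiumHD(
  sample_name = "sample1",
  directory_visium = "~/visium_hd_run",
  square_res = "16um",
  mtr = "filtered"
)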
Value

A SPATA2 object for the VisiumHD platform.
Note

The input for square_res can deviate from the standard resolution options as long as it is an even number divisible by one of the standard resolutions. In such cases, data from the next possible lower resolution is read, and reduceResolutionVisiumHD() is applied to aggregate the data. For example, if square_res = '6um', data is retrieved from ~/binned_outputs/square_002um and then aggregated accordingly. The same applies if square_res = '10um'. If square_res = '24um', data is read from ~/binned_outputs/square_008um; if square_res = '32um', data is read from ~/binned_outputs/square_016um, and so on. If the required folder is missing, data from the next higher resolution folder is used if possible; otherwise an error is thrown.

Note that aggregating counts by resolution can take a considerable amount of time. Consider prefiltering the raw counts using genes and/or increasing the number of cores to use with workers.
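For illustration, a sketch of such a call with a non-standard resolution, a placeholder gene vector for prefiltering, and several workers. The gene symbols, sample name, and directory are assumptions for the example, not part of this documentation.

# '6um' is not a standard resolution, so data is read from
# ~/binned_outputs/square_002um and aggregated via reduceResolutionVisiumHD().
genes_of_interest <- c("GFAP", "EGFR", "MBP")  # placeholder gene symbols

object <- initiateSpataObjectVisiumHD(
  sample_name = "sample1",                # placeholder
  directory_visium = "~/visium_hd_run",   # placeholder
  square_res = "6um",
  genes = genes_of_interest,  # prefilter to shorten the aggregation step
  workers = 4,                # parallelize across 4 workers
  batch_size = 1000
)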
Visium spot coordinates come with column and row indices. The functions initiateSpataObjectVisium()
and initiateSpataObjectVisiumHD()
ensure that col aligns with the direction of the x-coordinates and
that row aligns with the direction of the y-coordinates. If they do not, they are adjusted accordingly.
Hence, these variables should not be used as keys for data merging.
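If external per-spot results need to be combined with the coordinates, merging by barcode is the safer route. A minimal sketch, assuming getCoordsDf() returns the coordinate data.frame with a 'barcodes' column and that my_results is a hypothetical data.frame carrying the same column:

# Merge by barcode instead of by col/row, since the col/row indices
# may have been adjusted to match the x/y coordinate directions.
coords_df <- getCoordsDf(object)                            # assumed accessor
merged_df <- merge(coords_df, my_results, by = "barcodes")  # 'my_results' is hypothetical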
It is crucial to install the package arrow
in a way that arrow::read_parquet()
works. There
are several ways. Installing the package with install.packages('arrow', repos = 'https://apache.r-universe.dev')
worked reliably for us.
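The following sketch repeats that installation and verifies that parquet files can be read; the path is a placeholder for the tissue positions file of your own run.

# Install arrow from the Apache R-universe repository, as recommended above.
install.packages('arrow', repos = 'https://apache.r-universe.dev')

# Placeholder path: point this at the tissue_positions.parquet file of your
# own VisiumHD output to confirm that arrow::read_parquet() works.
positions <- arrow::read_parquet(
  "~/visium_hd_run/binned_outputs/square_016um/spatial/tissue_positions.parquet"
)
head(positions)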