Overview: Getting Started with neuroim2

if (requireNamespace("ggplot2", quietly = TRUE) &&
    requireNamespace("albersdown", quietly = TRUE)) {
  ggplot2::theme_set(albersdown::theme_albers(family = params$family, preset = params$preset))
}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  message = FALSE,
  warning = FALSE
)
suppressPackageStartupMessages(library(neuroim2))

Introduction

neuroim2 gives you a small set of data structures for 3D and 4D neuroimaging data, plus the spatial tools you need to move between file I/O, coordinate systems, regions of interest, and resampling. The package is broad, so this overview is intentionally narrow: it shows the first objects and workflows to learn, then points you to the focused vignettes that carry the rest.

Quick start

Start by reading one image and inspecting its spatial metadata.

img <- read_vol(system.file("extdata", "global_mask2.nii.gz", package = "neuroim2"))

dim(img)
spacing(img)
origin(img)

The most important thing to notice is that a NeuroVol is not just an array. It also carries a NeuroSpace, which tracks voxel spacing, origin, and affine transforms.
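You can inspect that geometry directly. A quick sketch, assuming the `img` object created above; `space()` and `trans()` are the accessors used throughout the package:

```r
sp <- space(img)   # the NeuroSpace attached to the volume
sp                 # prints dimensions, spacing, origin, and axes
trans(img)         # the 4x4 voxel-to-world affine matrix
```

Every spatially aware operation in neuroim2 consults this NeuroSpace, which is why two images with identical arrays but different spaces are treated as different objects.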

What should you read next?

The recommended path through the package is:

  1. vignette("ChoosingBackends", package = "neuroim2") for dense, sparse, mapped, file-backed, and hyper-vector backends.
  2. vignette("coordinate-systems", package = "neuroim2") for voxel, grid, and world-coordinate conversions.
  3. vignette("VolumesAndVectors", package = "neuroim2") for the core manipulation story.
  4. vignette("Resampling", package = "neuroim2") for resample(), downsample(), reorient(), and deoblique().
  5. vignette("AnalysisWorkflows", package = "neuroim2") for ROIs, searchlights, and map-reduce style analyses.

If you only read one follow-on article after this overview, make it vignette("VolumesAndVectors", package = "neuroim2").

The core objects

Most work in neuroim2 starts with three ideas:

  1. A NeuroVol is a 3D volume that carries its spatial metadata with it.
  2. A NeuroVec is the 4D counterpart, typically a time series of volumes.
  3. An ROI selects a subset of voxels within that shared geometry.

Here is the smallest possible example of each.

mask <- img > 0
sum(mask)

vec <- read_vec(system.file("extdata", "global_mask_v4.nii", package = "neuroim2"))
dim(vec)

roi <- spherical_roi(space(vec), c(45, 45, 20), radius = 4)
length(roi)

That is the core mental model for the package: volumes and vectors carry their geometry with them, and ROIs select voxels within that shared geometry.
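The shared geometry is also what connects voxel indices to world coordinates. A hedged sketch, assuming the `img` object from the quick start; `grid_to_coord()` and `coord_to_grid()` are the conversion helpers covered in the coordinate-systems vignette (exact input shapes may differ, so check that article before relying on this):

```r
sp <- space(img)

# a single voxel grid coordinate, as a 1-row matrix
vox <- matrix(c(10, 10, 10), nrow = 1)

world <- grid_to_coord(sp, vox)    # voxel grid -> world (scanner) space
back  <- coord_to_grid(sp, world)  # world space -> voxel grid
```

Round-tripping a coordinate this way is a useful sanity check when you are unsure how an image's affine is oriented.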

A small end-to-end workflow

The next common step is to move from a 4D image to a region-level summary.

roi_ts <- series_roi(vec, roi)
roi_mat <- values(roi_ts)
mean_ts <- rowMeans(roi_mat)

stopifnot(
  nrow(roi_mat) == dim(vec)[4],
  ncol(roi_mat) == length(roi),
  all(is.finite(mean_ts))
)

head(mean_ts)

This is a deliberately small example, but it shows the typical neuroim2 workflow:

  1. Load a spatial object.
  2. Define a spatial support such as an ROI.
  3. Extract values with the correct geometry preserved.
  4. Compute summaries at the level you care about.
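The four steps above can be sketched end-to-end in one chunk, using the same bundled files as earlier (a recap sketch, not new API):

```r
library(neuroim2)

# 1. Load a spatial object (here, a 4D image)
vec <- read_vec(system.file("extdata", "global_mask_v4.nii", package = "neuroim2"))

# 2. Define a spatial support such as an ROI
roi <- spherical_roi(space(vec), c(45, 45, 20), radius = 4)

# 3. Extract values with the correct geometry preserved
roi_mat <- values(series_roi(vec, roi))  # time-by-voxel matrix

# 4. Compute summaries at the level you care about
mean_ts <- rowMeans(roi_mat)
sd_ts   <- apply(roi_mat, 1, sd)
```

Swapping step 2 for a different support (an anatomical mask, a searchlight sphere) changes the analysis without touching the rest of the pipeline.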

For broader ROI and searchlight patterns, move directly to vignette("AnalysisWorkflows", package = "neuroim2").

Spatial operations come next

Once you are comfortable reading data and extracting values, the next important layer is spatial transformation.

img_down <- downsample(img, spacing = c(2, 2, 2))

dim(img)
dim(img_down)
spacing(img_down)
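One invariant worth checking after a resolution change: the grid dimensions shrink while the voxel spacing grows, so the physical extent (spacing times dimension, per axis) should stay roughly constant. A small sanity check, assuming the `img` and `img_down` objects above:

```r
extent_orig <- spacing(img) * dim(img)
extent_down <- spacing(img_down) * dim(img_down)

# extents should agree to within about one downsampled voxel per axis
all(abs(extent_orig - extent_down) <= spacing(img_down))
```

If this check fails by a wide margin, the usual culprit is a mismatch between the spacing you requested and the spacing the image actually started with.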

For the full story, including orientation handling and affine-aware transforms, use vignette("Resampling", package = "neuroim2") together with vignette("coordinate-systems", package = "neuroim2").

When should you change backends?

You do not need a special backend to start. Use the default dense path first, then switch when the workload demands it.

big_vec <- read_vec(
  system.file("extdata", "global_mask_v4.nii", package = "neuroim2"),
  mode = "filebacked"
)

series(big_vec, 45, 45, 20)
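Whichever backend you choose, accessors such as series() should return the same values; the backend changes how data is stored and paged, not what it contains. A hedged check, assuming the `big_vec` object from the chunk above:

```r
# the same file read through the default dense path
dense_vec <- read_vec(system.file("extdata", "global_mask_v4.nii", package = "neuroim2"))

# one voxel's time series from each backend should agree
all.equal(
  as.numeric(series(dense_vec, 45, 45, 20)),
  as.numeric(series(big_vec, 45, 45, 20))
)
```

This is also a handy pattern when migrating an existing analysis to a file-backed representation: spot-check a few voxels before switching wholesale.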

The details and tradeoffs belong in vignette("ChoosingBackends", package = "neuroim2").

Where to go next

Core path: follow the numbered vignette sequence from the start of this overview, in order.

Advanced and specialized articles: browseVignettes("neuroim2") lists everything the package ships beyond that core path.

Reference and help

help(package = "neuroim2")
help.search("roi", package = "neuroim2")

Summary

The package becomes much easier to navigate if you treat this overview as a map, not a manual. Learn NeuroVol, NeuroVec, and ROI extraction here, then move into the focused workflow vignettes for backend choice, spatial transforms, and analysis patterns.




neuroim2 documentation built on April 16, 2026, 5:07 p.m.