ov_3dpos_multicamera: 3D position estimate from multiple 2D views

View source: R/transformation.R


3D position estimate from multiple 2D views

Description

3D position estimate from multiple 2D views

Usage

ov_3dpos_multicamera(uv, C, method = "dlt", zinit = 2)

Arguments

uv

matrix or data.frame: u, v positions in 2D images, one row per image (u and v are the image x- and y-coordinates, normalized to the range 0-1)

C

list: a list of the same length as the number of rows in uv. The ith entry of C is the camera matrix (as returned by ov_cmat_estimate()) associated with the image coordinates in row i of uv. NOTE that the camera matrices in C must all return positions with the same orientation (i.e. all on the same court coordinates, oriented the same way)

method

string: either "dlt" (direct linear transform) or "nls" (nonlinear least squares). The "nls" method assumes a trial height z, finds the real-world x and y coordinates implied by each row of uv at that height, and then chooses z to minimize the disagreement between those per-camera real-world x, y positions

zinit

numeric: initial estimate of height (only for method = "nls")
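For intuition, the "nls" approach described above can be illustrated with a minimal base-R sketch. This is not the package implementation: it assumes each camera is represented by a plain 3 x 4 projection matrix P (a hypothetical representation, not the object returned by ov_cmat_estimate()). For a fixed height z, a camera's two projection equations are linear in x and y, so each camera yields one (x, y) estimate; z is then chosen so that the cameras agree.

```r
## (x, y) implied by one camera at an assumed height z, for a 3 x 4 matrix P
xy_at_height <- function(uv, P, z) {
  ## u*(P3 . X) = P1 . X and v*(P3 . X) = P2 . X, rearranged as a 2x2
  ## linear system in x and y with z held fixed
  A <- rbind(c(uv[1] * P[3, 1] - P[1, 1], uv[1] * P[3, 2] - P[1, 2]),
             c(uv[2] * P[3, 1] - P[2, 1], uv[2] * P[3, 2] - P[2, 2]))
  b <- c(P[1, 3] * z + P[1, 4] - uv[1] * (P[3, 3] * z + P[3, 4]),
         P[2, 3] * z + P[2, 4] - uv[2] * (P[3, 3] * z + P[3, 4]))
  solve(A, b)
}

## disagreement between the cameras' (x, y) estimates at height z
spread <- function(z, uv, Plist) {
  xy <- t(sapply(seq_len(nrow(uv)),
                 function(i) xy_at_height(uv[i, ], Plist[[i]], z)))
  sum(apply(xy, 2, var))
}

## choose z to minimize the disagreement, e.g.
## zhat <- optimize(spread, interval = c(0, 5), uv = uv, Plist = Plist)$minimum
```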

Value

A named list with components xyz (the estimated 3D position) and err (a measure of uncertainty in that position estimate; currently only returned for method = "nls")

References

For general background see e.g. Ballard DH, Brown CM (1982) Computer Vision. Prentice-Hall, New Jersey

Examples


 ## two camera matrices
 refpts1 <- dplyr::tribble(~image_x, ~image_y, ~court_x, ~court_y, ~z,
                             0.0533,   0.0326,      3.5,      6.5,  0,
                              0.974,   0.0572,      0.5,      6.5,  0,
                              0.683,    0.566,      0.5,      0.5,  0,
                              0.283,    0.560,      3.5,      0.5,  0,
                              0.214,    0.401,      3.5,      3.5,  0,
                              0.776,    0.412,      0.5,      3.5,  0,
                              0.780,    0.680,      0.5,      3.5,  2.43,
                              0.206,    0.670,      3.5,      3.5,  2.43)

 C1 <- ov_cmat_estimate(x = refpts1[, c("image_x", "image_y")],
                        X = refpts1[, c("court_x", "court_y", "z")])

 refpts2 <- dplyr::tribble(~image_x, ~image_y, ~court_x, ~court_y, ~z,
                              0.045,   0.0978,      0.5,      0.5,  0,
                              0.963,   0.0920,      3.5,      0.5,  0,
                              0.753,    0.617,      3.5,      6.5,  0,
                              0.352,    0.609,      0.5,      6.5,  0,
                              0.255,    0.450,      0.5,      3.5,  0,
                              0.817,    0.456,      3.5,      3.5,  0,
                              0.821,    0.731,      3.5,      3.5,  2.43,
                              0.246,    0.720,      0.5,      3.5,  2.43)
 C2 <- ov_cmat_estimate(x = refpts2[, c("image_x", "image_y")],
                        X = refpts2[, c("court_x", "court_y", "z")])

 ## project a known 3D position into each image
 xyz <- matrix(c(3.4, 1.4, 2.90), ncol = 3)
 uv1 <- ov_cmat_apply(C1, xyz) ## object position in image 1
 uv2 <- ov_cmat_apply(C2, xyz) ## object position in image 2

 ## if our measurements are perfect (no noise), we can reconstruct xyz exactly:
 ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "dlt")
 ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "nls")

 ## with noise
 uv1 <- uv1 + rnorm(2, sd = 0.02)
 uv2 <- uv2 + rnorm(2, sd = 0.02)
 ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "dlt")
 ov_3dpos_multicamera(rbind(uv1, uv2), list(C1, C2), method = "nls")
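For intuition, the linear algebra behind method = "dlt" can be sketched in a few lines of base R. This is illustrative only, and assumes plain 3 x 4 projection matrices rather than the objects returned by ov_cmat_estimate(): each camera contributes two linear equations in the homogeneous 3D point, and the stacked system is solved via the SVD.

```r
## DLT triangulation sketch (not the package implementation). Each camera's
## 3 x 4 matrix P, with rows P1..P3, gives (u*P3 - P1) . X = 0 and
## (v*P3 - P2) . X = 0 for the homogeneous point X. The right singular
## vector with smallest singular value is the least-squares solution.
dlt_triangulate <- function(uv, Plist) {
  A <- do.call(rbind, lapply(seq_len(nrow(uv)), function(i) {
    P <- Plist[[i]]
    rbind(uv[i, 1] * P[3, ] - P[1, ],
          uv[i, 2] * P[3, ] - P[2, ])
  }))
  X <- svd(A)$v[, ncol(A)]
  X[1:3] / X[4] ## de-homogenize to get x, y, z
}

## toy check with two synthetic cameras (hypothetical matrices, chosen so
## the projections are easy to verify by hand)
P1 <- cbind(diag(3), c(0, 0, 0))  ## camera at the origin
P2 <- cbind(diag(3), c(-1, 0, 0)) ## camera shifted along x
uv <- rbind(c(0.25, 0.5), ## projection of (1, 2, 4) through P1
            c(0,    0.5)) ## projection of (1, 2, 4) through P2
dlt_triangulate(uv, list(P1, P2)) ## recovers c(1, 2, 4) up to numerical precision
```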


openvolley/ovideo documentation built on March 19, 2024, 9:52 p.m.