
opensportml

Installation

## install.packages("remotes")
remotes::install_github("openvolley/opensportml")

The opensportml package provides image and video machine learning tools for sports analytics. Many of its functions are re-exported from the openvolley/ovml and openvolley/ovideo packages, which provide similar functionality but specifically for volleyball.

Two versions of the YOLO object detection algorithm are currently included. They are implemented on top of the torch R package, so no Python installation is required on your system.
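
The torch package is installed as an ordinary R dependency; the first time it is used it downloads the libtorch libraries it needs. If that automatic step does not happen on your system, it can be triggered manually. A minimal sketch using the standard torch setup functions:

## install.packages("torch")  ## normally pulled in automatically as a dependency
torch::install_torch()        ## download the libtorch backend libraries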

Example

Use a YOLOv4 network to recognize objects in an image. We use an example image bundled with the package:

library(opensportml)
img <- os_example_image()
ovml_ggplot(img)

Construct the network. The first time this function is run, it will download and cache the network weights file (~250MB).

dn <- ovml_yolo()
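
ovml_yolo takes the network version as an argument, and a lighter variant that trades accuracy for speed may also be available depending on the ovml version installed. The "4-tiny" identifier below is an assumption, so check ?ovml_yolo for the versions your installation actually supports:

## hypothetical lighter-weight variant; consult ?ovml_yolo for supported versions
dn_tiny <- ovml_yolo("4-tiny")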

Now we can use the network to detect objects in our image:

dets <- ovml_yolo_detect(dn, img)
ovml_ggplot(img, dets)
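
The detections come back as a data frame with one row per detected object and bounding box coordinates in image pixels. A quick way to inspect and filter them, assuming class and score columns are present (check names(dets) on your own output):

head(dets)
## keep only reasonably confident person detections (column names assumed)
people <- dets[dets$class == "person" & dets$score > 0.5, ]
ovml_ggplot(img, people)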

We can transform the image detections to real-world court coordinates. First we define the court reference points needed for the transformation, using the os_shiny_court_ref helper app:

ref <- os_shiny_court_ref(img)

ref should look something like this (equivalently, it could be constructed by hand):

ref <- list(video_width = 1024,
            video_height = 768,
            court_ref = dplyr::tribble(~image_x, ~image_y, ~court_x, ~court_y,
                                       0.0256, 0.386, 12.5, 46,
                                       0.283, 0.117, 100, 0,
                                       0.867, 0.475, 87.5, 154,
                                       0.582, 0.626, 0, 200))

Now use it with the ov_transform_points function (note that currently this function expects the image coordinates to be normalized with respect to the image width and height):

court_xy <- ov_transform_points(x = (dets$xmin + dets$xmax) / 2 / ref$video_width,
                                y = dets$ymin / ref$video_height,
                                ref = ref$court_ref, direction = "to_court")
dets <- cbind(dets, court_xy)

And plot it:

library(ggplot2)
ggplot(dets, aes(x, y)) +
    os_ggcourt(line_colour = "white") +
    geom_point(colour = "blue", size = 3) +
    theme(panel.background = element_rect(fill = "#95a264"))

Keep in mind that the points passed to ov_transform_points are the bottom-centre of each bounding box, transformed on the assumption that they lie on the court surface (the floor). Locations from truncated bounding boxes, or from objects that are not on the court surface (a tennis racket in a player's hand, players in mid-air, people in elevated positions such as the referee's stand), will appear further away from the camera than they actually are.
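
One way to reduce those artefacts is to drop detections whose bounding boxes touch the image border (and are therefore likely truncated) before transforming them. A rough sketch with illustrative thresholds, again assuming a class column in the detections:

edge <- 3 ## pixels; boxes this close to the left, right, or bottom border are treated as truncated
keep <- dets$class == "person" &
    dets$xmin > edge & dets$xmax < (ref$video_width - edge) &
    dets$ymin > edge
dets_on_court <- dets[keep, ]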


