```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.width = 7,
  fig.height = 4
)
```
This vignette demonstrates a complete end-to-end surveillance analysis, from raw count data to actionable outputs. The workflow mirrors what a public health genomics team might run on a weekly cadence.
```{r setup}
library(lineagefreq)

data(sarscov2_us_2022)
x <- lfq_data(
  sarscov2_us_2022,
  lineage = variant,
  date = date,
  count = count,
  total = total
)
```
Real surveillance data often contains dozens of low-frequency lineages. `collapse_lineages()` merges those below a threshold into an "Other" category.
```{r}
x_clean <- collapse_lineages(x, min_freq = 0.02)
attr(x_clean, "lineages")
```
```{r}
fit <- fit_model(x_clean, engine = "mlr")
summary(fit)
```
```{r}
ga <- growth_advantage(fit, type = "relative_Rt", generation_time = 5)
ga
```
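The `relative_Rt` type converts a per-day growth rate difference into an approximate ratio of effective reproduction numbers. Under the standard exponential-growth approximation (an assumption of this note, not a statement about the package's internals), a lineage growing at rate $\delta$ per day relative to the reference satisfies

$$\frac{R_t^{\text{lineage}}}{R_t^{\text{reference}}} \approx e^{\delta \, g},$$

where $g$ is the mean generation time in days (here `generation_time = 5`).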
```{r}
autoplot(fit, type = "advantage", generation_time = 5)
```
```{r}
emerging <- summarize_emerging(x_clean)
emerging[emerging$significant, ]
```
```{r}
fc <- forecast(fit, horizon = 28)
autoplot(fc)
```
How many sequences are needed to reliably detect a lineage at 2% frequency?
```{r}
sequencing_power(
  target_precision = 0.05,
  current_freq = c(0.01, 0.02, 0.05, 0.10)
)
```
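As a back-of-envelope cross-check (an illustration using the normal approximation, not necessarily the exact method behind `sequencing_power()`), the sample size needed to estimate a proportion $p$ within an absolute half-width $w$ at 95% confidence is $n \approx z^2 p (1-p) / w^2$:

```{r}
# Normal-approximation sample size for a binomial proportion p
# estimated within half-width w at 95% confidence (z = 1.96).
n_needed <- function(p, w, z = 1.96) ceiling(z^2 * p * (1 - p) / w^2)
n_needed(p = c(0.01, 0.02, 0.05, 0.10), w = 0.05)
```

For rare lineages this crude bound is optimistic; exact binomial or Wilson intervals require more sequences at the same precision.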
All results are compatible with the broom ecosystem.
```{r}
tidy(fit)
glance(fit)
```
A typical weekly workflow:
1. `lfq_data()` — ingest new counts
2. `collapse_lineages()` — clean up rare lineages
3. `fit_model()` — estimate dynamics
4. `summarize_emerging()` — flag growing lineages
5. `forecast()` — project forward 4 weeks
6. `autoplot()` — generate report figures
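The steps above can be chained into a single weekly script (a sketch using only the functions shown earlier in this vignette; argument values match the examples above):

```{r, eval = FALSE}
library(lineagefreq)

# Ingest and clean the latest counts
weekly <- sarscov2_us_2022 |>
  lfq_data(lineage = variant, date = date, count = count, total = total) |>
  collapse_lineages(min_freq = 0.02)

# Estimate dynamics and flag growing lineages
fit <- fit_model(weekly, engine = "mlr")
summarize_emerging(weekly)

# Project forward 4 weeks and generate the report figure
fc <- forecast(fit, horizon = 28)
autoplot(fc)
```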