The `bid_ingest_telemetry()` function enables you to analyze real user behavior data from {shiny.telemetry} and automatically identify UX friction points. This creates a powerful feedback loop where actual usage patterns drive design improvements through the BID framework.
First, ensure you have {shiny.telemetry} set up in your Shiny application:
```r
library(shiny)
library(shiny.telemetry)

# Initialize telemetry
telemetry <- Telemetry$new()

ui <- fluidPage(
  use_telemetry(), # Add telemetry JavaScript
  titlePanel("Sales Dashboard"),
  sidebarLayout(
    sidebarPanel(
      selectInput(
        "region", "Region:",
        choices = c("North", "South", "East", "West")
      ),
      dateRangeInput("date_range", "Date Range:"),
      selectInput(
        "product_category", "Product Category:",
        choices = c("All", "Electronics", "Clothing", "Food")
      ),
      actionButton("refresh", "Refresh Data")
    ),
    mainPanel(
      tabsetPanel(
        tabPanel("Overview", plotOutput("overview_plot")),
        tabPanel("Details", dataTableOutput("details_table")),
        tabPanel("Settings", uiOutput("settings_ui"))
      )
    )
  )
)

server <- function(input, output, session) {
  # Start telemetry tracking
  telemetry$start_session()

  # Your app logic here...
}

shinyApp(ui, server)
```
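If you want the telemetry written to a specific file that you can later pass to `bid_ingest_telemetry()`, you can configure the storage backend explicitly. The sketch below assumes the `DataStorageSQLite` provider and its `db_path` and `app_name` argument names; confirm these against the {shiny.telemetry} documentation for your installed version.

```r
library(shiny.telemetry)

# Minimal sketch: point telemetry at a known SQLite file so the same path can
# be handed to bid_ingest_telemetry() later. The backend class and argument
# names below are assumptions -- check your {shiny.telemetry} docs.
telemetry <- Telemetry$new(
  app_name = "sales_dashboard",            # assumed argument name
  data_storage = DataStorageSQLite$new(
    db_path = "telemetry.sqlite"           # assumed argument name
  )
)
```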
After collecting telemetry data from your users, use `bid_ingest_telemetry()` to identify UX issues:
```r
library(bidux)

# Analyze telemetry from SQLite database (default)
issues <- bid_ingest_telemetry("telemetry.sqlite")

# Or from JSON log file
issues <- bid_ingest_telemetry("telemetry.log", format = "json")

# Review identified issues
length(issues)
names(issues)
```
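Each element of `issues` describes one flagged friction point. Before diving into individual issues, you can scan all of them at once; the `$problem` field used here matches the examples shown later in this vignette.

```r
# Quick scan of everything that was flagged
for (issue_name in names(issues)) {
  cat(issue_name, ":", issues[[issue_name]]$problem, "\n")
}
```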
The function analyzes five key friction indicators: unused inputs, delayed first interaction, recurring errors, navigation drop-offs, and confusion patterns.
Identifies UI controls that users rarely or never interact with:
```r
# Example: Region filter is never used
issues$unused_input_region
#> BID Framework - Notice Stage
#> Problem: Users are not interacting with the 'region' input control
#> Theory: Hick's Law (auto-suggested)
#> Evidence: Telemetry shows 0 out of 847 sessions where 'region' was changed
```
This suggests the region filter might be:

- Hidden or hard to find
- Not relevant to users' tasks
- Confusing or intimidating
Detects when users take too long to start using the dashboard:
```r
issues$delayed_interaction
#> BID Framework - Notice Stage
#> Problem: Users take a long time before making their first interaction with the dashboard
#> Theory: Information Scent (auto-suggested)
#> Evidence: Median time to first input is 45 seconds, and 10% of sessions had no interactions at all
```
This indicates users might be:

- Overwhelmed by the initial view
- Unsure where to start
- Looking for information that's not readily apparent
Identifies systematic errors that disrupt the user experience:
```r
issues$error_1
#> BID Framework - Notice Stage
#> Problem: Users encounter errors when using the dashboard
#> Theory: Norman's Gulf of Evaluation (auto-suggested)
#> Evidence: Error 'Data query failed' occurred 127 times in 15.0% of sessions (in output 'overview_plot'), often after changing 'date_range'
```
This reveals:

- Reliability issues with specific features
- Input validation problems
- Performance bottlenecks
Finds pages or tabs that users rarely visit:
```r
issues$navigation_settings_tab
#> BID Framework - Notice Stage
#> Problem: The 'settings_tab' page/tab is rarely visited by users
#> Theory: Information Architecture (auto-suggested)
#> Evidence: Only 42 sessions (5.0%) visited 'settings_tab', and 90% of those sessions ended there
```
Low visit rates suggest:

- Poor information scent
- Hidden or unclear navigation
- Irrelevant content
Detects rapid repeated changes indicating user confusion:
```r
issues$confusion_date_range
#> BID Framework - Notice Stage
#> Problem: Users show signs of confusion when interacting with 'date_range'
#> Theory: Feedback Loops (auto-suggested)
#> Evidence: 8 sessions showed rapid repeated changes (avg 6 changes in 7.5 seconds), suggesting users are unsure about the input's behavior
```
This suggests:

- Unclear feedback when values change
- Unexpected behavior
- Poor affordances
You can adjust the sensitivity of the analysis:
```r
issues <- bid_ingest_telemetry(
  "telemetry.sqlite",
  thresholds = list(
    unused_input_threshold = 0.1,  # Flag if <10% of sessions use an input
    delay_threshold_seconds = 60,  # Flag if >60s before first interaction
    error_rate_threshold = 0.05,   # Flag if >5% of sessions have errors
    navigation_threshold = 0.3,    # Flag if <30% of sessions visit a page
    rapid_change_window = 5,       # Time window (in seconds) for rapid changes...
    rapid_change_count = 3         # ...flag if an input changes 3+ times within it
  )
)
```
Use the identified issues to drive your BID process:
```r
# Take the most critical issue
critical_issue <- issues$error_1

# Start with interpretation
interpret_result <- bid_interpret(
  central_question = "How can we prevent data query errors?",
  data_story = list(
    hook = "15% of users encounter errors",
    context = "Errors occur after date range changes",
    tension = "Users lose trust when queries fail",
    resolution = "Implement robust error handling and loading states"
  )
)

# Notice the specific problem
notice_result <- bid_notice(
  previous_stage = interpret_result,
  problem = critical_issue$problem,
  evidence = critical_issue$evidence
)

# Anticipate user behavior and biases
anticipate_result <- bid_anticipate(
  previous_stage = notice_result,
  bias_mitigations = list(
    anchoring = "Show loading states to set proper expectations",
    confirmation_bias = "Display error context to help users understand issues"
  )
)

# Structure improvements
structure_result <- bid_structure(
  previous_stage = anticipate_result
)

# Validate and provide next steps
validate_result <- bid_validate(
  previous_stage = structure_result,
  summary_panel = "Error handling improvements with clear user feedback",
  next_steps = c(
    "Implement loading states",
    "Add error context",
    "Test with users"
  )
)
```
Here's a full example analyzing a dashboard with multiple issues:
```r
# 1. Ingest telemetry
issues <- bid_ingest_telemetry("telemetry.sqlite")

# 2. Prioritize issues by impact
critical_issues <- list(
  unused_inputs = names(issues)[grepl("unused_input", names(issues))],
  errors = names(issues)[grepl("error", names(issues))],
  delays = "delayed_interaction" %in% names(issues)
)

# 3. Create a comprehensive improvement plan
if (length(critical_issues$unused_inputs) > 0) {
  # Address unused inputs following the updated BID workflow
  unused_issue <- issues[[critical_issues$unused_inputs[1]]]

  improvement_plan <- bid_interpret(
    central_question = "Which filters actually help users find insights?",
    data_story = list(
      hook = "Several filters are never used by users",
      context = "Dashboard has 5 filter controls",
      tension = "Too many options create choice overload",
      resolution = "Show only relevant filters based on user tasks"
    )
  ) |>
    bid_notice(
      problem = unused_issue$problem,
      evidence = unused_issue$evidence
    ) |>
    bid_anticipate(
      bias_mitigations = list(
        choice_overload = "Hide advanced filters until needed",
        default_effect = "Pre-select most common filter values"
      )
    ) |>
    bid_structure() |>
    bid_validate(
      summary_panel = "Simplified filtering with progressive disclosure",
      next_steps = c(
        "Remove unused filters",
        "Implement progressive disclosure",
        "Add contextual help",
        "Re-test with telemetry after changes"
      )
    )
}

# 4. Generate report
improvement_report <- bid_report(improvement_plan, format = "html")
```
Collect Sufficient Data: Ensure you have telemetry from at least 50-100 sessions before analysis for reliable patterns.
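One way to check whether you have enough data is to inspect the telemetry database directly. This is a rough sketch using {DBI} and {RSQLite}; the table and column names depend on your {shiny.telemetry} version, so list the tables first and treat the names below as assumptions.

```r
library(DBI)

con <- dbConnect(RSQLite::SQLite(), "telemetry.sqlite")
dbListTables(con)  # locate the events table created by {shiny.telemetry}

# "event_data" and "session" are assumed names -- adjust to what you see above
events <- dbReadTable(con, "event_data")
length(unique(events$session))  # number of recorded sessions

dbDisconnect(con)
```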
Regular Analysis: Run telemetry analysis periodically (e.g., monthly) to catch emerging issues.
Combine with Qualitative Data: Use telemetry insights alongside user interviews and usability testing.
Track Improvements: After implementing changes, collect new telemetry to verify improvements:
```r
# Before changes
issues_before <- bid_ingest_telemetry("telemetry_before.sqlite")

# After implementing improvements
issues_after <- bid_ingest_telemetry("telemetry_after.sqlite")

# Compare issue counts
cat("Issues before:", length(issues_before), "\n")
cat("Issues after:", length(issues_after), "\n")
```
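Beyond comparing counts, it can help to see which specific issues were resolved and which are new. Because the issue names returned by `bid_ingest_telemetry()` identify each friction point, this is a simple set comparison:

```r
# Issues that were flagged before but not after (likely resolved)
setdiff(names(issues_before), names(issues_after))

# Issues that only appear after the changes (new or newly exposed)
setdiff(names(issues_after), names(issues_before))
```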
Document Recurring Patterns: Keep notes on friction patterns you see repeatedly so they can inform future designs:

```r
# Save recurring patterns for future reference
telemetry_patterns <- list(
  date_filter_confusion = "Users often struggle with date range inputs - consider using presets",
  tab_discovery = "Secondary tabs have low discovery - consider better visual hierarchy",
  error_recovery = "Users abandon after errors - implement graceful error handling"
)
```
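To keep these notes available across analyses or projects, you can persist the list with base R, for example:

```r
# Persist the pattern notes so future analyses can reuse them
saveRDS(telemetry_patterns, "telemetry_patterns.rds")

# Later, or in another project
telemetry_patterns <- readRDS("telemetry_patterns.rds")
```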
The `bid_ingest_telemetry()` function bridges the gap between user behavior data and design decisions. By automatically identifying friction points from real usage patterns, it provides concrete, evidence-based starting points for the BID framework, ultimately leading to more user-friendly Shiny applications.