```r
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  eval = FALSE
)
```
```r
library(bidux)
library(dplyr)
```
As data scientists and R developers, we're trained to think systematically, explore data thoroughly, and validate our assumptions. But here's a reality check: your dashboard users don't think like you do.
Consider this scenario:

- You see: A well-organized dashboard with 15 filters, 8 visualizations, and comprehensive data coverage
- Your users see: An overwhelming interface where they can't find what they need quickly
This isn't because your users are less intelligent; it's because they're human, and humans have predictable cognitive patterns that affect how they process information.
```r
# Your typical approach to data exploration
data |>
  filter(multiple_conditions) |>
  group_by(several_dimensions) |>
  summarize(
    metric_1 = mean(value_1, na.rm = TRUE),
    metric_2 = sum(value_2),
    metric_3 = median(value_3),
    .groups = "drop"
  ) |>
  arrange(desc(metric_1))
```
Think of human attention like computer memory: limited and easily overloaded.
```r
# Explore cognitive load concepts
cognitive_concepts <- bid_concepts("cognitive")

cognitive_concepts |>
  select(concept, description, implementation_tips) |>
  head(3)
```
For dashboards, this means limiting what users must process at once: surface a small set of key metrics by default and keep the rest available on demand.
Practical example:
```r
# Instead of showing all 12 KPIs at once:
# kpi_grid <- layout_columns(
#   value_box("Revenue", "$1.2M", icon = "currency-dollar"),
#   value_box("Customers", "15,432", icon = "people"),
#   # ... 10 more value boxes
# )

# Show key metrics first, details on demand:
kpi_summary <- layout_columns(
  col_widths = c(8, 4),
  card(
    card_header("Key Performance Summary"),
    value_box("Primary Goal", "$1.2M Revenue", icon = "target"),
    p("vs. $980K target (+22%)")
  ),
  card(
    card_header("Details"),
    actionButton(
      "show_details",
      "View All Metrics",
      class = "btn-outline-primary"
    )
  )
)
```
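The details-on-demand pattern also needs a way to reveal the hidden metrics. A minimal server-side sketch, assuming a Shiny app; `all_metrics_ui` is a hypothetical UI object holding the full KPI grid and is not defined above:

```r
# Minimal sketch (assumes a Shiny server context; `all_metrics_ui` is a
# hypothetical object containing the full 12-KPI grid)
server <- function(input, output, session) {
  observeEvent(input$show_details, {
    showModal(modalDialog(
      title = "All Metrics",
      all_metrics_ui,
      size = "l",
      easyClose = TRUE
    ))
  })
}
```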
Humans over-rely on the first piece of information they see. In data dashboards, this means the first number shown becomes a reference point for everything else.
```r
# Learn about anchoring
bid_concept("anchoring") |>
  select(description, implementation_tips)
```
Dashboard implication:
If you show "Sales: $50K" first, users will judge all subsequent numbers relative to $50K, even if it's not the most important metric.
Solution pattern:
```r
# Provide context and reference points
sales_card <- card(
  card_header("Monthly Sales Performance"),
  layout_columns(
    value_box(
      title = "This Month",
      value = "$87K",
      showcase = bs_icon("graph-up"),
      theme = "success"
    ),
    div(
      p("Previous month: $65K", style = "color: #666; margin: 0;"),
      p("Target: $80K", style = "color: #666; margin: 0;"),
      p(strong("vs. Target: +9%"), style = "color: #28a745;")
    )
  )
)
```
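To keep the anchor honest, the reference copy can be derived from the raw numbers rather than hard-coded. A minimal sketch using a hypothetical helper (not part of bidux) with the figures from the card above:

```r
# Minimal sketch (hypothetical helper): the reference points travel with the
# headline value, so the anchor users see is always the intended one
reference_copy <- function(current, previous, target) {
  list(
    headline = sprintf("$%sK", current),
    previous = sprintf("Previous month: $%sK", previous),
    target   = sprintf("Target: $%sK", target),
    delta    = sprintf("vs. Target: %+.0f%%", (current / target - 1) * 100)
  )
}

reference_copy(current = 87, previous = 65, target = 80)
```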
The same data can be interpreted completely differently based on how it's presented.
```r
# Explore framing concepts
bid_concept("framing") |>
  select(description, implementation_tips)
```
Example: Customer satisfaction of 73%
```r
# Negative frame (emphasizes problems)
satisfaction_negative <- value_box(
  "Customer Issues",
  "27% Unsatisfied",
  icon = "exclamation-triangle",
  theme = "danger"
)

# Positive frame (emphasizes success)
satisfaction_positive <- value_box(
  "Customer Satisfaction",
  "73% Satisfied",
  icon = "heart-fill",
  theme = "success"
)

# Balanced frame (shows progress)
satisfaction_balanced <- card(
  card_header("Customer Satisfaction Progress"),
  value_box("Current Level", "73%"),
  p("Improvement needed: 17 percentage points to reach 90% target")
)
```
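All three frames describe the same underlying number, so one way to keep them consistent is to derive each piece of copy from the raw score. A minimal sketch with a hypothetical helper (not part of bidux; the 90% target is an assumption taken from the balanced card above):

```r
# Minimal sketch (hypothetical helper; 90% target is assumed)
framing_copy <- function(score, target = 0.90) {
  list(
    positive = sprintf("%.0f%% Satisfied", score * 100),
    negative = sprintf("%.0f%% Unsatisfied", (1 - score) * 100),
    balanced = sprintf(
      "Improvement needed: %.0f percentage points to reach %.0f%% target",
      (target - score) * 100, target * 100
    )
  )
}

framing_copy(0.73)
```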
Research shows that too many choices lead to decision paralysis and user dissatisfaction.
The data scientist instinct:
"Let's give them 20 different chart types and 15 filters so they can explore any question!"
The user reality: faced with that many options, most users experience decision paralysis and dissatisfaction rather than deeper exploration.
Better approach:
```r
# Instead of 15 filters visible at once
ui_complex <- div(
  selectInput("region", "Region", choices = regions),
  selectInput("product", "Product", choices = products),
  selectInput("channel", "Channel", choices = channels),
  # ... 12 more filters
)

# Use progressive disclosure
ui_simple <- div(
  # Show only the most common filters first
  selectInput("time_period", "Time Period", choices = time_options),
  selectInput("metric", "Primary Metric", choices = key_metrics),

  # Advanced filters behind a toggle
  accordion(
    accordion_panel(
      "Advanced Filters",
      icon = bs_icon("sliders"),
      selectInput("region", "Region", choices = regions),
      selectInput("product", "Product", choices = products)
      # Additional filters here
    )
  )
)
```
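The same principle applies server-side: advanced filters should only constrain the data when a user actually sets them. A minimal sketch, assuming a Shiny server context, a hypothetical `sales_data` table, and advanced inputs that default to an empty selection:

```r
# Minimal sketch (assumes a Shiny server context; `sales_data` and its
# `region`/`product` columns are hypothetical, and the advanced selectInputs
# are assumed to default to "" meaning "All")
filtered_data <- reactive({
  out <- sales_data

  if (isTruthy(input$region)) {
    out <- dplyr::filter(out, region == input$region)
  }
  if (isTruthy(input$product)) {
    out <- dplyr::filter(out, product == input$product)
  }

  out
})
```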
You already know how to test hypotheses with data. Apply the same rigor to UX decisions:
```r
# Your typical A/B test
results <- t.test(
  treatment_group$conversion_rate,
  control_group$conversion_rate
)

# UX equivalent: Test interface variations
dashboard_test <- list(
  control = "Current 5-chart overview page",
  treatment = "Redesigned with progressive disclosure"
)

# Measure: task completion time, user satisfaction, error rates
# Analyze: same statistical rigor you'd apply to any experiment
```
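For instance, task completion time can be analyzed like any other metric. A minimal, self-contained sketch using simulated timings (illustrative numbers, not real results):

```r
# Minimal sketch with simulated data (not real measurements)
set.seed(123)
control_seconds   <- rnorm(30, mean = 95, sd = 20)  # current overview page
treatment_seconds <- rnorm(30, mean = 80, sd = 20)  # progressive-disclosure redesign

# Same tooling you would use for any experiment
t.test(treatment_seconds, control_seconds)

# Effect size in the units stakeholders care about
mean(control_seconds) - mean(treatment_seconds)
```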
Just as you validate data quality, validate that your interface matches user expectations:
```r
# Document user mental models like you document data assumptions
user_assumptions <- bid_interpret(
  central_question = "How do sales managers think about performance?",
  data_story = list(
    hook = "Sales managers need quick performance insights",
    context = "They have 15 minutes between meetings",
    tension = "Current reports take too long to interpret",
    resolution = "Provide immediate visual context with drill-down capability"
  )
)

# Validate these assumptions like you'd validate data quality
summary(user_assumptions)
```
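One way to validate those assumptions is to treat usability findings like a data-quality check. A minimal sketch with a hypothetical `usability_tests` table (column names and values are illustrative; the 15-minute window comes from the data story above):

```r
# Minimal sketch (hypothetical, illustrative data)
usability_tests <- data.frame(
  participant        = 1:8,
  completed_task     = c(TRUE, TRUE, FALSE, TRUE, TRUE, TRUE, FALSE, TRUE),
  minutes_to_insight = c(4, 6, 18, 5, 7, 9, 22, 6)
)

# Did users get an answer within the 15-minute window assumed above?
mean(usability_tests$completed_task)            # task completion rate
mean(usability_tests$minutes_to_insight <= 15)  # share within the window
```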
The BID framework gives you a systematic way to apply behavioral science, similar to how you approach data analysis:
Use bid_concepts() to explore one behavioral principle at a time:

```r
# Explore available concepts by category
all_concepts <- bid_concepts()
table(all_concepts$category)

# Start with these fundamental concepts for data dashboards
starter_concepts <- c(
  "Cognitive Load Theory",
  "Anchoring Effect",
  "Processing Fluency",
  "Progressive Disclosure"
)

for (concept in starter_concepts) {
  cat("\n### ", concept, "\n")
  info <- bid_concept(concept)
  cat(info$description[1], "\n")
  cat("Implementation:", info$implementation_tips[1], "\n")
}
```
Understanding user cognition makes you a more complete data professional.
Remember: The best analysis in the world is worthless if users can't understand and act on it. Behavioral science helps bridge that gap.
- Use bid_concepts() to explore all available behavioral science concepts
- See the getting-started vignette for hands-on BID framework practice
- See the telemetry-integration vignette to measure the impact of your UX improvements

The goal isn't to become a UX expert; it's to apply your analytical thinking to user experience and create more effective data products.