vad_scores: Direct VAD (Valence-Arousal-Dominance) Prediction

View source: R/vad_scores.R

vad_scores    R Documentation

Direct VAD (Valence-Arousal-Dominance) Prediction

Description

Directly predicts VAD dimensions using classification with definitional labels, bypassing the intermediate step of discrete emotion classification. This approach uses rich, educational descriptions of each VAD pole to help transformer models understand the psychological concepts and make more accurate predictions.

Usage

vad_scores(
  input,
  input_type = "auto",
  dimensions = c("valence", "arousal", "dominance"),
  label_type = "definitional",
  custom_labels = NULL,
  model = "auto",
  ...
)

Arguments

input

Input data. Can be:

  • Character: Text string, image file path, or video URL

  • Character vector: Multiple texts or image paths

  • List: Multiple text strings

input_type

Character. Type of input data:

  • "auto": Automatically detect based on input (default)

  • "text": Text input for transformer classification

  • "image": Image file path(s) for visual classification

  • "video": Video URL(s) for video analysis

dimensions

Character vector. Which VAD dimensions to predict:

  • "valence": Positive vs negative emotional experience

  • "arousal": High vs low activation/energy

  • "dominance": Control vs powerlessness

Default: all three dimensions

label_type

Character. Type of labels to use:

  • "definitional": Rich descriptive labels with definitions (default)

  • "simple": Basic polar labels (positive/negative, etc.)

  • "custom": User-provided custom labels

custom_labels

Optional list. Custom labels when label_type = "custom". Must follow structure: list(valence = list(positive = "...", negative = "..."), ...)
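For concreteness, a full custom-labels list for all three dimensions might look like the sketch below. Only the valence pole names (positive/negative) are shown in the structure above; the high/low pole names used here for arousal and dominance are an assumption by analogy, and all descriptions are illustrative placeholders:

```r
# Hypothetical custom labels; dimension names must match the requested
# dimensions, and each dimension holds one description per pole.
# NOTE: the "high"/"low" pole names for arousal and dominance are assumed,
# not confirmed by this documentation -- check the package source if unsure.
my_labels <- list(
  valence = list(
    positive = "Pleasant, satisfying, or rewarding experience",
    negative = "Unpleasant, frustrating, or distressing experience"
  ),
  arousal = list(
    high = "Energetic, excited, or agitated state",
    low  = "Calm, relaxed, or sleepy state"
  ),
  dominance = list(
    high = "Feeling in control and able to influence the situation",
    low  = "Feeling powerless and at the mercy of the situation"
  )
)
```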

model

Character. Model to use for classification. Depends on input_type:

  • Text: transformer model (see transformer_scores documentation)

  • Image: CLIP model (see image_scores documentation)

  • Video: CLIP model (see video_scores documentation)

...

Additional arguments passed to underlying classification functions (transformer_scores, image_scores, or video_scores)

Details

This function implements direct VAD prediction using the approach: Input → VAD Classification → VAD Scores

Instead of mapping from discrete emotions, each VAD dimension is treated as a separate binary classification task using definitional labels that explain the psychological concepts.

**Definitional Labels (default):** The function uses rich descriptions that educate the model about each dimension:

  • **Valence**: "Positive valence, which refers to pleasant, enjoyable..."

  • **Arousal**: "High arousal, which refers to intense, energetic..."

  • **Dominance**: "High dominance, which refers to feeling in control..."
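The per-dimension scoring logic can be sketched as follows, assuming each dimension is a two-label classification whose probabilities sum to 1 and the reported score is the probability of the high pole. The classifier below (`mock_classify`) is a stand-in for illustration; the package delegates the real classification to transformer_scores, image_scores, or video_scores:

```r
# Stand-in for a zero-shot classifier: returns fixed probabilities over the
# two candidate labels (a real classifier would score the input text).
mock_classify <- function(text, labels) {
  p <- c(0.8, 0.2)
  names(p) <- labels
  p
}

# The VAD score for one dimension is the probability assigned to the
# definitional label describing the dimension's high pole.
score_dimension <- function(text, high_label, low_label) {
  probs <- mock_classify(text, c(high_label, low_label))
  unname(probs[high_label])
}

score_dimension(
  "I'm absolutely thrilled!",
  high_label = "Positive valence, pleasant and enjoyable experience",
  low_label  = "Negative valence, unpleasant and distressing experience"
)
```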

**Input Type Detection:** When input_type = "auto", the function detects input type based on:

  • URLs starting with "http": Video

  • File paths with image extensions: Image

  • Everything else: Text
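The detection rules above can be sketched in base R as follows (this mirrors the documented behavior, not the package's internal implementation; the set of recognized image extensions is an assumption):

```r
# Sketch of auto-detection: "http..." -> video, known image extension -> image,
# everything else -> text. tools::file_ext() returns "" when there is no dot.
detect_input_type <- function(x) {
  image_exts <- c("png", "jpg", "jpeg", "bmp", "gif", "tiff")
  if (grepl("^http", x)) {
    "video"
  } else if (tolower(tools::file_ext(x)) %in% image_exts) {
    "image"
  } else {
    "text"
  }
}

detect_input_type("https://youtu.be/abc123")  # "video"
detect_input_type("photo.png")                # "image"
detect_input_type("I feel great today")       # "text"
```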

**Score Interpretation:** Scores represent the probability that the input exhibits the high (positive) pole of each dimension:

  • **Valence**: 1.0 = very positive, 0.0 = very negative

  • **Arousal**: 1.0 = high energy, 0.0 = very calm

  • **Dominance**: 1.0 = fully in control, 0.0 = completely powerless
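If a signed scale is more convenient, the 0-1 probabilities can be rescaled to a bipolar -1..1 range with 0.5 mapping to neutral. The helper below is not part of the package, just a small interpretation aid:

```r
# Rescale a 0-1 VAD score to -1..1 and attach a coarse polarity label.
interpret_vad <- function(score) {
  bipolar <- 2 * score - 1  # 0.5 -> 0 (neutral)
  label <- if (score > 0.5) "high pole" else if (score < 0.5) "low pole" else "neutral"
  list(score = score, bipolar = bipolar, label = label)
}

interpret_vad(0.9)  # bipolar 0.8, "high pole"
interpret_vad(0.3)  # bipolar -0.4, "low pole"
```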

Value

A data.frame with columns:

  • input_id: Identifier for each input (text content, filename, or index)

  • valence: Valence score (0-1, where 1 = positive)

  • arousal: Arousal score (0-1, where 1 = high arousal)

  • dominance: Dominance score (0-1, where 1 = high dominance)

Only the requested dimensions are included in the output.

Data Privacy

All processing is done locally with downloaded models. Data is never sent to external servers.

Author(s)

Aleksandar Tomasevic <atomashevic@gmail.com>

References

Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161-1178.

Bradley, M. M., & Lang, P. J. (1994). Measuring emotion: The self-assessment manikin and the semantic differential. Journal of Behavior Therapy and Experimental Psychiatry, 25(1), 49-59.

Examples

## Not run: 
# Text VAD analysis
texts <- c("I'm absolutely thrilled!", "I feel so helpless and sad", "This is boring")
text_vad <- vad_scores(texts, input_type = "text")
print(text_vad)

# Image VAD analysis  
image_path <- system.file("extdata", "boris-1.png", package = "transforEmotion")
image_vad <- vad_scores(image_path, input_type = "image")
print(image_vad)

# Single dimension prediction
valence_only <- vad_scores(texts, dimensions = "valence")

# Using simple labels for speed
simple_vad <- vad_scores(texts, label_type = "simple")

# Custom labels for domain-specific applications
custom_labels <- list(
  valence = list(
    positive = "Customer satisfaction and positive brand sentiment",
    negative = "Customer complaints and negative brand sentiment"
  )
)
brand_vad <- vad_scores(texts, dimensions = "valence", 
                        label_type = "custom", custom_labels = custom_labels)

## End(Not run)


transforEmotion documentation built on Jan. 8, 2026, 5:06 p.m.