add_vision_model: User-Friendly Vision Model Management Functions

View source: R/model_management.R

add_vision_model    R Documentation

User-Friendly Vision Model Management Functions

Description

High-level functions for managing vision models in transforEmotion, providing an easy interface for extending the package with custom models.

User-friendly wrapper for registering custom vision models with automatic validation and helpful error messages.

Usage

add_vision_model(
  name,
  model_id,
  description = NULL,
  architecture = "clip",
  test_labels = NULL,
  force = FALSE
)

Arguments

name

A short, memorable name for your model (e.g., "my-emotion-model")

model_id

HuggingFace model identifier or path to local model directory

description

Optional description of the model and its purpose

architecture

Model architecture type. Currently supported:

  • "clip": Standard CLIP models (most compatible)

  • "clip-custom": CLIP variants needing special handling

  • "blip": BLIP models (caption-likelihood scoring)

  • "align": ALIGN dual-encoder models (direct similarity)

test_labels

Optional character vector of labels used to test the model immediately after registration

force

Logical indicating whether to overwrite an existing model with the same name

Value

Invisibly returns TRUE if successful
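Because the value is returned invisibly, capture it explicitly if you want to branch on success. A minimal sketch (assuming transforEmotion is loaded and its Python backend is set up; the model name here is hypothetical):

ok <- add_vision_model(
  name = "sketch-model",  # hypothetical registration name
  model_id = "openai/clip-vit-base-patch32"
)
if (isTRUE(ok)) message("Model registered successfully.")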

Author(s)

Aleksandar Tomasevic <atomashevic@gmail.com>

Examples

## Not run: 
# Add a fine-tuned CLIP model for emotion recognition
add_vision_model(
  name = "emotion-clip",
  model_id = "openai/clip-vit-large-patch14",
  description = "Large CLIP model for better emotion recognition",
  test_labels = c("happy", "sad", "angry"),
  force = TRUE
)

# Add a local model
add_vision_model(
  name = "my-local-model",
  model_id = "/path/to/my/model",
  description = "My custom fine-tuned model"
)

# Add experimental BLIP model
add_vision_model(
  name = "blip-base",
  model_id = "Salesforce/blip-image-captioning-base",
  architecture = "blip",
  description = "BLIP model for image captioning"
)

# Now use any of them in analysis
result <- image_scores("photo.jpg", c("happy", "sad"), model = "emotion-clip")
batch_results <- image_scores_dir("photos/", c("positive", "negative"), 
                                 model = "my-local-model")
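
# A CLIP variant that fails to register under the default "clip"
# architecture can be retried as "clip-custom". This is only a sketch
# of one possible fallback pattern using tryCatch(); the model name
# and path below are hypothetical.
ok <- tryCatch(
  add_vision_model(name = "variant-clip", model_id = "/path/to/clip-variant"),
  error = function(e) FALSE
)
if (!isTRUE(ok)) {
  add_vision_model(
    name = "variant-clip",
    model_id = "/path/to/clip-variant",
    architecture = "clip-custom",
    force = TRUE
  )
}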

## End(Not run)

transforEmotion documentation built on Jan. 8, 2026, 5:06 p.m.