register_vision_model: Register a Vision Model

View source: R/model_registry.R

register_vision_model    R Documentation

Register a Vision Model

Description

Register a new vision model in the transforEmotion registry, making it available for use with image_scores(), video_scores(), and related functions.

Usage

register_vision_model(
  name,
  model_id,
  architecture = "clip",
  description = NULL,
  preprocessing_config = NULL,
  requires_special_handling = FALSE
)

Arguments

name

A short name/alias for the model (e.g., "my-custom-clip")

model_id

The HuggingFace model identifier or a path to a local model

architecture

The model architecture type. Currently supported:

  • "clip": Standard CLIP dual-encoder models (default)

  • "clip-custom": CLIP variants requiring special handling

  • "blip": BLIP captioning/VQA models (supported via BLIP adapter)

  • "align": ALIGN dual-encoder models (supported via ALIGN adapter)

description

Optional description of the model

preprocessing_config

Optional list of preprocessing parameters
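A minimal sketch of passing a preprocessing configuration; the key names used here (image_size, rescale) are illustrative assumptions, not a documented schema — consult the target model's processor configuration for the keys it actually honors:

  register_vision_model(
    name = "clip-large-res",
    model_id = "/path/to/local/model",
    architecture = "clip",
    # Hypothetical keys for illustration only; match them to the
    # model's own processor configuration
    preprocessing_config = list(
      image_size = 336,
      rescale = TRUE
    )
  )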

requires_special_handling

Logical indicating whether the model needs custom processing beyond the standard CLIP pipeline

Value

Invisibly returns TRUE if registration is successful

Examples

## Not run: 
# Register a custom CLIP model (model_id must point to a CLIP-style
# vision-language checkpoint, e.g. a fine-tuned variant of this one)
register_vision_model(
  name = "my-emotion-clip",
  model_id = "openai/clip-vit-base-patch32",
  architecture = "clip",
  description = "CLIP checkpoint registered for emotion scoring"
)

# Register a local model
register_vision_model(
  name = "local-clip",
  model_id = "/path/to/local/model",
  architecture = "clip",
  description = "Locally stored fine-tuned model"
)

# Register experimental BLIP model
register_vision_model(
  name = "blip-caption",
  model_id = "Salesforce/blip-image-captioning-base",
  architecture = "blip",
  description = "BLIP model for image captioning"
)
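
# Because the return value is invisible, capture it explicitly to
# confirm registration succeeded (the name below is illustrative)
ok <- register_vision_model(
  name = "local-clip-check",
  model_id = "/path/to/local/model"
)
stopifnot(isTRUE(ok))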

## End(Not run)

transforEmotion documentation built on Jan. 8, 2026, 5:06 p.m.