View source: R/model_registry.R
register_vision_model    R Documentation
Description

Register a new vision model in the transforEmotion registry, making it available for use with image_scores(), video_scores(), and related functions.

Usage
register_vision_model(
  name,
  model_id,
  architecture = "clip",
  description = NULL,
  preprocessing_config = NULL,
  requires_special_handling = FALSE
)
Arguments

name: A short name/alias for the model (e.g., "my-custom-clip").

model_id: The HuggingFace model identifier or the path to a local model.

architecture: The model architecture type. Currently supported: "clip" (the default) and "blip" (see Examples).

description: Optional description of the model.

preprocessing_config: Optional list of preprocessing parameters.

requires_special_handling: Logical indicating whether the model needs custom processing beyond the standard CLIP pipeline.
Value

Invisibly returns TRUE if registration was successful.
Examples

## Not run:
# Register a custom CLIP model
register_vision_model(
  name = "my-emotion-clip",
  model_id = "j-hartmann/emotion-english-distilroberta-base",
  architecture = "clip",
  description = "Custom CLIP fine-tuned on emotion datasets"
)

# Register a local model
register_vision_model(
  name = "local-clip",
  model_id = "/path/to/local/model",
  architecture = "clip",
  description = "Locally stored fine-tuned model"
)

# Register experimental BLIP model
register_vision_model(
  name = "blip-caption",
  model_id = "Salesforce/blip-image-captioning-base",
  architecture = "blip",
  description = "BLIP model for image captioning"
)

## End(Not run)
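Once registered, the alias can be passed to the scoring functions in place of a full model identifier. The sketch below is illustrative only: it assumes image_scores() exposes a model argument that accepts a registered alias, which may not match the actual parameter name; consult ?image_scores for the exact interface.

```r
library(transforEmotion)

# Register a locally stored model under a short alias
# (the path is a placeholder, not a real model)
register_vision_model(
  name = "my-emotion-clip",
  model_id = "/path/to/local/model",
  architecture = "clip",
  description = "Locally fine-tuned CLIP"
)

# Score an image using the registered alias; the `model` argument
# name is an assumption -- check ?image_scores before relying on it
scores <- image_scores(
  image = "photo.jpg",
  classes = c("happy", "sad", "angry"),
  model = "my-emotion-clip"
)
```

Registering the model once and referring to it by alias keeps scripts portable: the path or HuggingFace ID is recorded in a single place rather than repeated at every call site.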