add_vision_model {transforEmotion}	R Documentation

Add a Custom Vision Model

View source: R/model_management.R
Description:

A high-level function for managing vision models in transforEmotion, providing an easy interface for extending the package with custom models. add_vision_model() is a user-friendly wrapper for registering custom vision models, with automatic validation and helpful error messages.
Usage:

add_vision_model(
  name,
  model_id,
  description = NULL,
  architecture = "clip",
  test_labels = NULL,
  force = FALSE
)
Arguments:

name: A short, memorable name for your model (e.g., "my-emotion-model").

model_id: Hugging Face model identifier or path to a local model directory.

description: Optional description of the model and its purpose.

architecture: Model architecture type. Currently supported: "clip" (the default) and "blip".

test_labels: Optional character vector of labels used to test the model immediately after registration.

force: Logical indicating whether to overwrite an existing model with the same name.
Value:

Invisibly returns TRUE if successful.

Author(s):

Aleksandar Tomasevic <atomashevic@gmail.com>
Examples:

## Not run:
# Add a fine-tuned CLIP model for emotion recognition
add_vision_model(
name = "emotion-clip",
model_id = "openai/clip-vit-large-patch14",
description = "Large CLIP model for better emotion recognition",
test_labels = c("happy", "sad", "angry"),
force = TRUE
)
# Add a local model
add_vision_model(
name = "my-local-model",
model_id = "/path/to/my/model",
description = "My custom fine-tuned model"
)
# Add experimental BLIP model
add_vision_model(
name = "blip-base",
model_id = "Salesforce/blip-image-captioning-base",
architecture = "blip",
description = "BLIP model for image captioning"
)
# Now use any of them in analysis
result <- image_scores("photo.jpg", c("happy", "sad"), model = "emotion-clip")
batch_results <- image_scores_dir("photos/", c("positive", "negative"),
                                  model = "my-local-model")
## End(Not run)
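Because force defaults to FALSE, registering a second model under an existing name is expected to fail with an informative error. A minimal sketch of handling that case (assuming add_vision_model() signals a standard R error condition on a duplicate name; the second model_id is just an illustrative Hugging Face identifier):

```r
## Not run:
# First registration under this name succeeds
add_vision_model(
  name = "emotion-clip",
  model_id = "openai/clip-vit-large-patch14"
)

# A second registration under the same name should fail while
# force = FALSE; catch the condition instead of stopping the script
tryCatch(
  add_vision_model(
    name = "emotion-clip",
    model_id = "openai/clip-vit-base-patch32"
  ),
  error = function(e) {
    message("Registration failed: ", conditionMessage(e))
    # Overwrite deliberately by opting in with force = TRUE
    add_vision_model(
      name = "emotion-clip",
      model_id = "openai/clip-vit-base-patch32",
      force = TRUE
    )
  }
)
## End(Not run)
```

Wrapping the call in tryCatch() keeps batch scripts running when a name collision occurs, while the explicit force = TRUE branch makes any overwrite an intentional choice rather than a silent default.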