hf_ez_text2text_generation_api_inference | R Documentation |
Text2Text Generation API Inference
Usage

hf_ez_text2text_generation_api_inference(
  string,
  tidy = TRUE,
  use_gpu = FALSE,
  use_cache = FALSE,
  wait_for_model = FALSE,
  use_auth_token = NULL,
  stop_on_error = FALSE,
  ...
)
Arguments

string            A general request for the model to perform or answer.

tidy              Whether to tidy the results into a tibble. Default: TRUE (tidy the results).

use_gpu           Whether to use a GPU for inference.

use_cache         Whether to use cached inference results for previously seen inputs.

wait_for_model    Whether to wait for the model to load instead of receiving a 503 error if it is not yet ready.

use_auth_token    The token to use as HTTP bearer authorization for the Inference API. Defaults to the HUGGING_FACE_HUB_TOKEN environment variable.

stop_on_error     Whether to throw an error if an API error is encountered. Defaults to FALSE (do not throw an error).
Value

The results of the inference.
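The following is a minimal usage sketch, not an official example from the package: it simply calls the documented function with the arguments described above. The prompt text and the choice to wait for the model are illustrative assumptions.

# Hedged usage sketch: arguments mirror the signature documented above.
result <- hf_ez_text2text_generation_api_inference(
  string = "Translate English to French: Hello, how are you?",
  tidy = TRUE,             # return a tibble rather than the raw response
  wait_for_model = TRUE,   # block until the model is loaded instead of getting a 503
  use_auth_token = Sys.getenv("HUGGING_FACE_HUB_TOKEN")  # explicit here; this is already the default
)
result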