hf_ez_table_question_answering_api_inference | R Documentation
Table Question Answering API Inference
Usage

hf_ez_table_question_answering_api_inference(
  query,
  table,
  tidy = TRUE,
  use_gpu = FALSE,
  use_cache = FALSE,
  wait_for_model = FALSE,
  use_auth_token = NULL,
  stop_on_error = FALSE,
  ...
)
Arguments

query: The query, in plain text, that you want to ask of the table.

table: A data frame in which all columns are text (character).

tidy: Whether to tidy the results into a tibble. Default: TRUE (tidy the results).

use_gpu: Whether to use a GPU for inference.

use_cache: Whether to use cached inference results for previously seen inputs.

wait_for_model: Whether to wait for the model to be ready instead of receiving a 503 error after a certain amount of time.

use_auth_token: The token to use as HTTP bearer authorization for the Inference API. Defaults to the HUGGING_FACE_HUB_TOKEN environment variable.

stop_on_error: Whether to throw an error if an API error is encountered. Defaults to FALSE (do not throw an error).
Value

The results of the inference.
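Examples

A minimal sketch of a call, assuming the package providing hf_ez_table_question_answering_api_inference() is attached and the HUGGING_FACE_HUB_TOKEN environment variable is set; the example table below is illustrative only, not part of this page.

## Not run:
## Build a small table; all columns must be character for table question answering.
tbl <- data.frame(
  Repository = c("Transformers", "Datasets", "Tokenizers"),
  Stars      = c("36542", "4512", "3934"),
  stringsAsFactors = FALSE
)

## Ask a question of the table, waiting for the model to load if necessary.
res <- hf_ez_table_question_answering_api_inference(
  query = "How many stars does the Transformers repository have?",
  table = tbl,
  wait_for_model = TRUE
)
res
## End(Not run)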