hf_ez_question_answering_api_inference: Question Answering API Inference

View source: R/ez.R

hf_ez_question_answering_api_inference    R Documentation

Question Answering API Inference

Description

Answer a question based on a provided context using the Hugging Face Inference API.

Usage

hf_ez_question_answering_api_inference(
  question,
  context,
  tidy = TRUE,
  use_gpu = FALSE,
  use_cache = FALSE,
  wait_for_model = FALSE,
  use_auth_token = NULL,
  stop_on_error = FALSE,
  ...
)

Arguments

question

The question to be answered based on the provided context.

context

The context to consult when answering the question.

tidy

Whether to tidy the results into a tibble. Default: TRUE (tidy the results).

use_gpu

Whether to use GPU for inference. Default: FALSE.

use_cache

Whether to use cached inference results for previously seen inputs. Default: FALSE.

wait_for_model

Whether to wait for the model to load instead of receiving a 503 error when the model is not yet ready. Default: FALSE.

use_auth_token

The token to use as HTTP bearer authorization for the Inference API. Defaults to the HUGGING_FACE_HUB_TOKEN environment variable.

stop_on_error

Whether to throw an error if an API error is encountered. Defaults to FALSE (do not throw error).

Value

The results of the inference. When tidy = TRUE (the default), the results are returned as a tibble.

See Also

https://huggingface.co/docs/api-inference/index
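
Examples

A minimal sketch of a call, assuming a valid Hugging Face token is available in the HUGGING_FACE_HUB_TOKEN environment variable; the question, context, and shape of the tidied result shown here are illustrative, not guaranteed by the API.

## Not run: 
# Supply a token if one is not already set (placeholder value shown):
# Sys.setenv(HUGGING_FACE_HUB_TOKEN = "hf_...")

# Answer a question from a short context, waiting for the model to load
# rather than failing with a 503 error on a cold start.
result <- hf_ez_question_answering_api_inference(
  question = "Where do I live?",
  context = "My name is Clara and I live in Berkeley.",
  wait_for_model = TRUE
)

# With tidy = TRUE (the default), the result is returned as a tibble.
result

## End(Not run)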

