hf_ez_fill_mask_api_inference: Fill Mask API Inference

View source: R/ez.R


Fill Mask API Inference

Description

Fill Mask API Inference

Usage

hf_ez_fill_mask_api_inference(
  string,
  tidy = TRUE,
  use_gpu = FALSE,
  use_cache = FALSE,
  wait_for_model = FALSE,
  use_auth_token = NULL,
  stop_on_error = FALSE,
  ...
)

Arguments

string

A string to be filled in; it must contain the model's mask token (e.g. [MASK] — check the model card for the exact token name).

tidy

Whether to tidy the results into a tibble. Default: TRUE (tidy the results).

use_gpu

Whether to use GPU for inference. Default: FALSE.

use_cache

Whether to use cached inference results for previously seen inputs. Default: FALSE.

wait_for_model

Whether to wait for the model to be ready rather than receiving a 503 error while the model is still loading. Default: FALSE.

use_auth_token

The token to use as HTTP bearer authorization for the Inference API. Defaults to the HUGGING_FACE_HUB_TOKEN environment variable.

stop_on_error

Whether to throw an error if an API error is encountered. Defaults to FALSE (do not throw error).

Value

The results of the inference: a tibble if tidy = TRUE, otherwise the raw API response.
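
Examples

A minimal sketch of a call. It assumes a valid Hugging Face token is available in the HUGGING_FACE_HUB_TOKEN environment variable and that the model serving the request uses the [MASK] token (as, for example, bert-base-uncased does); it will not run without network access and valid credentials.

    ## Not run:
    library(huggingfaceR)

    # Fill in the masked token; wait_for_model avoids a 503 while the
    # model is loading on the Inference API.
    result <- hf_ez_fill_mask_api_inference(
      string = "Paris is the [MASK] of France.",
      tidy = TRUE,
      wait_for_model = TRUE
    )

    # With tidy = TRUE the candidate fills arrive as a tibble,
    # ordered by score.
    result
    ## End(Not run)
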

See Also

https://huggingface.co/docs/api-inference/index


farach/huggingfaceR documentation built on Feb. 4, 2023, 10:31 p.m.