hf_ez_text_generation_api_inference: Text Generation API Inference

View source: R/ez.R


Text Generation API Inference

Description

Text Generation API Inference

Usage

hf_ez_text_generation_api_inference(
  string,
  top_k = NULL,
  top_p = NULL,
  temperature = 1,
  repetition_penalty = NULL,
  max_new_tokens = NULL,
  max_time = NULL,
  return_full_text = TRUE,
  num_return_sequences = 1L,
  do_sample = TRUE,
  tidy = TRUE,
  use_gpu = FALSE,
  use_cache = FALSE,
  wait_for_model = FALSE,
  use_auth_token = NULL,
  stop_on_error = FALSE,
  ...
)

Arguments

string

A string to use as the prompt from which text is generated.

top_k

(Default: NULL). Integer defining the number of top tokens considered within the sampling operation when creating new text.

top_p

(Default: NULL). Float defining the tokens considered within the sampling operation of text generation. Tokens are added to the sample from most probable to least probable until the sum of their probabilities exceeds top_p.

temperature

Float (0.0-100.0). The temperature of the sampling operation. 1.0 means regular sampling, 0 means always take the highest-scoring token, and 100.0 approaches uniform probability. Default: 1.0.

repetition_penalty

(Default: NULL). Float (0.0-100.0). The more a token is used within the generation, the more it is penalized, making it less likely to be picked in successive generation passes.

max_new_tokens

(Default: NULL). Int (0-250). The number of new tokens to be generated. This does not include the input length; it is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and the length of the generated text.

max_time

(Default: NULL). Float (0-120.0). The maximum amount of time, in seconds, that the query should take. Network overhead can add to this, so it is a soft limit. Use it in combination with max_new_tokens for best results.

return_full_text

(Default: TRUE). Bool. If set to FALSE, the returned results will not contain the original query, which makes prompting easier.

num_return_sequences

(Default: 1). Integer. The number of generated sequences (propositions) you want returned.

do_sample

(Default: TRUE). Bool. Whether or not to use sampling; greedy decoding is used otherwise.

use_gpu

Whether to use GPU for inference.

use_cache

Whether to use cached inference results for previously seen inputs.

wait_for_model

Whether to wait for the model to be ready instead of receiving a 503 error after a certain amount of time.

use_auth_token

The token to use as HTTP bearer authorization for the Inference API. Defaults to HUGGING_FACE_HUB_TOKEN environment variable.

stop_on_error

Whether to throw an error if an API error is encountered. Defaults to FALSE (do not throw error).

Value

The results of the inference

See Also

https://huggingface.co/docs/api-inference/index
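
Examples

The following is a minimal, illustrative sketch of calling the function directly. It assumes a valid Hugging Face API token is available via the HUGGING_FACE_HUB_TOKEN environment variable (or supplied through use_auth_token); the prompt text and parameter values are placeholders, not recommendations.

## Not run: 
# Basic generation from a prompt, waiting for the model to load if necessary
result <- hf_ez_text_generation_api_inference(
  string = "The huggingfaceR package makes it easy to",
  max_new_tokens = 50,
  wait_for_model = TRUE
)

# Sampling-oriented settings: restrict candidate tokens with top_k/top_p,
# penalize repetition, and return two candidate continuations without
# echoing the original prompt
result <- hf_ez_text_generation_api_inference(
  string = "Once upon a time",
  do_sample = TRUE,
  top_k = 50,
  top_p = 0.95,
  temperature = 0.8,
  repetition_penalty = 1.2,
  num_return_sequences = 2L,
  return_full_text = FALSE
)

## End(Not run)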

