chatRater: Rating and Evaluating Texts Using Large Language Models

Generates ratings and psycholinguistic metrics for textual stimuli using large language models. It enables users to evaluate idioms and other language materials by combining context, prompts, and stimulus inputs. It supports multiple LLM APIs (such as 'OpenAI', 'DeepSeek', 'Anthropic', 'Cohere', 'Google PaLM', and 'Ollama') by allowing users to switch models with a single parameter. In addition to generating numeric ratings, 'chatRater' provides functions for obtaining detailed psycholinguistic metrics including word frequency (with optional corpus input), lexical coverage (with customizable vocabulary size and test basis), Zipf metric, Levenshtein distance, and semantic transparency.
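Below is a minimal usage sketch, not the package's documented API: the function name generate_ratings() and its arguments are assumptions based on the description above, and the model identifier and API-key handling will depend on the provider you choose.

library(chatRater)

# Hypothetical call: rate the familiarity of an idiom with an OpenAI model;
# switching providers is done by changing the model argument alone.
rating <- generate_ratings(
  model   = "gpt-4o",                      # assumed model identifier
  stim    = "kick the bucket",             # idiom (stimulus) to be rated
  prompt  = "Rate the familiarity of this idiom on a scale from 1 to 7.",
  api_key = Sys.getenv("OPENAI_API_KEY")   # key read from the environment
)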

Package details

Author: Shiyang Zheng [aut, cre]
Maintainer: Shiyang Zheng <Shiyang.Zheng@nottingham.ac.uk>
License: MIT + file LICENSE
Version: 1.1.0
Package repository: CRAN
Installation

Install the latest version of this package by entering the following in R:
install.packages("chatRater")
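Once installed, the psycholinguistic helpers can be called in a similar way. The sketch below assumes function names such as get_word_frequency() and get_zipf_metric(), inferred from the description rather than verified exports; consult the package reference manual for the actual function names and arguments.

library(chatRater)

# Hypothetical calls: word frequency against a named corpus, and a Zipf-scale
# estimate for the same word, both produced via an LLM backend.
freq <- get_word_frequency(
  word    = "bucket",
  corpus  = "SUBTLEX-US",                  # assumed optional corpus argument
  api_key = Sys.getenv("OPENAI_API_KEY")
)
zipf <- get_zipf_metric(
  word    = "bucket",
  api_key = Sys.getenv("OPENAI_API_KEY")
)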

