oolong: Create Validation Tests for Automated Content Analysis

oolong creates standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based approaches. It offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. For topic models, it provides word intrusion, topic intrusion (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>), and word set intrusion (Ying et al. 2021, <doi:10.1017/pan.2021.33>) tests. For dictionary-based methods, it provides functions for generating gold-standard data. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020, <doi:10.1080/10584609.2020.1723752>).
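As a minimal sketch of the topic-model workflow, assuming a fitted topic model stored in lda_model (a placeholder for, e.g., an object from the topicmodels or stm package); the function names follow the package documentation:

library(oolong)

# Create a word intrusion test from a fitted topic model and
# clone it so a second coder can take an identical test.
wi_test1 <- wi(lda_model, userid = "coder1")
wi_test2 <- clone_oolong(wi_test1, userid = "coder2")

# Each coder takes the test in an interactive (Shiny) session,
# then locks the object so it can be scored.
wi_test1$do_word_intrusion_test()
wi_test1$lock()
wi_test2$do_word_intrusion_test()
wi_test2$lock()

# Summarize model precision and inter-coder reliability.
summarize_oolong(wi_test1, wi_test2)

Per the package documentation, validating dictionary-based methods follows the same pattern: gs(input_corpus) samples documents for human coding, and after the test is administered and locked, the object's turn_gold() method converts the coded answers into gold-standard data for validation.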

Package details

Author: Chung-hong Chan [aut, cre] (<https://orcid.org/0000-0002-6232-7530>), Marius Sältzer [aut] (<https://orcid.org/0000-0002-8604-4666>)
Maintainer: Chung-hong Chan <chainsawtiney@gmail.com>
License: LGPL (>= 2.1)
Version: 0.5.0
URL: https://github.com/chainsawriot/oolong
Repository: CRAN
Installation

Install the latest version of this package by entering the following in R:
install.packages("oolong")
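A development version can presumably be installed from the GitHub repository listed above; a sketch using the remotes package (an assumption, not a documented installation route):

# install.packages("remotes")  # if not already installed
remotes::install_github("chainsawriot/oolong")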

