Intended to create standard human-in-the-loop validity tests for typical automated content analysis methods such as topic modeling and dictionary-based methods. This package offers a standard workflow with functions to prepare, administer, and evaluate a human-in-the-loop validity test. It provides functions for validating topic models using word intrusion, topic intrusion (Chang et al. 2009, <https://papers.nips.cc/paper/3700-reading-tea-leaves-how-humans-interpret-topic-models>), and word set intrusion (Ying et al. 2021) <doi:10.1017/pan.2021.33> tests. It also provides functions for generating gold-standard data, which are useful for validating dictionary-based methods. The default settings of all generated tests match those suggested in Chang et al. (2009) and Song et al. (2020) <doi:10.1080/10584609.2020.1723752>.
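For illustration, a minimal sketch of that prepare/administer/evaluate workflow for a word intrusion test is below. It assumes a fitted topic model object `my_lda` (a hypothetical LDA object, e.g. from `topicmodels::LDA()`); the function and method names follow the package documentation, but consult the current manual for the authoritative API.

```r
library(oolong)

## Prepare: create a word intrusion test from a fitted topic model.
## `my_lda` is a hypothetical LDA object, e.g. from topicmodels::LDA().
wi_test <- wi(my_lda, userid = "coder1")

## Administer: a human coder interactively picks the intruder word
## in each randomly generated word set.
wi_test$do_word_intrusion_test()

## Evaluate: lock the test to finalize the answers, then print the
## object to see the estimated model precision.
wi_test$lock()
wi_test
```

Tests completed by multiple coders can be aggregated with `summarize_oolong()`.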
| Package details | |
|---|---|
| Author | Chung-hong Chan [aut, cre] (<https://orcid.org/0000-0002-6232-7530>), Marius Sältzer [aut] (<https://orcid.org/0000-0002-8604-4666>) |
| Maintainer | Chung-hong Chan <chainsawtiney@gmail.com> |
| License | LGPL (>= 2.1) |
| Version | 0.6.1 |
| URL | <https://gesistsa.github.io/oolong/>, <https://github.com/gesistsa/oolong> |
| Package repository | CRAN |
Installation

Install the latest version of this package by entering the following in R:
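```r
## Install the released version from CRAN
install.packages("oolong")
```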