View source: R/textstat_lexdiv.R
textstat_lexdiv (R Documentation)
Calculate the lexical diversity of text(s).
textstat_lexdiv(
  x,
  measure = c("TTR", "C", "R", "CTTR", "U", "S", "K", "I", "D", "Vm", "Maas", "MATTR",
    "MSTTR", "all"),
  remove_numbers = TRUE,
  remove_punct = TRUE,
  remove_symbols = TRUE,
  remove_hyphens = FALSE,
  log.base = 10,
  MATTR_window = 100L,
  MSTTR_segment = 100L,
  ...
)
x: a dfm or tokens input object whose documents' lexical diversity will be computed

measure: a character vector defining the measure(s) to compute

remove_numbers: logical; if TRUE, remove features consisting only of numerals before computing the measures

remove_punct: logical; if TRUE, remove all features consisting only of punctuation characters

remove_symbols: logical; if TRUE, remove all features consisting only of symbol characters

remove_hyphens: logical; if TRUE, split hyphenated features into their component parts before computing the measures

log.base: a numeric value defining the base of the logarithm (for measures using logarithms)

MATTR_window: a numeric value defining the size of the moving window for computation of the Moving-Average Type-Token Ratio (Covington & McFall, 2010)

MSTTR_segment: a numeric value defining the size of each segment for the computation of the Mean Segmental Type-Token Ratio (Johnson, 1944)

...: not used directly
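The remove_* options determine which features are dropped before the diversity measures are computed, and so they change both the token count and the type count. The following is a minimal sketch of their effect, assuming the quanteda and quanteda.textstats packages are attached; the example text is arbitrary.

library("quanteda")
library("quanteda.textstats")

# illustrative only; any short text will do
toks <- tokens("Out, out, brief candle! Life's but a walking shadow, a poor player.")

# default: numbers, punctuation, and symbols are removed before computing TTR
textstat_lexdiv(toks, measure = "TTR")

# keeping punctuation changes both N and V, and therefore the TTR
textstat_lexdiv(toks, measure = "TTR", remove_punct = FALSE)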
textstat_lexdiv calculates the lexical diversity of documents using a variety of indices.

In the following formulas, N refers to the total number of tokens, V to the number of types, and f_v(i, N) to the number of types occurring i times in a sample of length N. (A worked sketch that verifies several of these measures by hand appears after the list of measures.)
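These quantities can be computed directly from a tokens object; the sketch below is illustrative only and is not how the package computes them internally.

library("quanteda")

toks  <- tokens(tolower("the quick brown fox jumps over the lazy dog and the fox"))
freqs <- table(as.character(toks))  # frequency of each type
N  <- sum(freqs)                    # total number of tokens
V  <- length(freqs)                 # number of types
fv <- table(freqs)                  # f_v(i, N): number of types occurring i times
N; V; fv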
"TTR"
:The ordinary Type-Token Ratio:
TTR =
\frac{V}{N}
"C"
:Herdan's C (Herdan, 1960, as cited in Tweedie & Baayen, 1998; sometimes referred to as LogTTR):
C =
\frac{\log{V}}{\log{N}}
"R"
:Guiraud's Root TTR (Guiraud, 1954, as cited in Tweedie & Baayen, 1998):
R = \frac{V}{\sqrt{N}}
"CTTR"
:Carroll's Corrected TTR:
CTTR =
\frac{V}{\sqrt{2N}}
"U"
:Dugast's Uber Index (Dugast, 1978, as cited in Tweedie & Baayen, 1998):
U = \frac{(\log{N})^2}{\log{N} - \log{V}}
"S"
:Summer's index:
S =
\frac{\log{\log{V}}}{\log{\log{N}}}
"K"
:Yule's K (Yule, 1944, as presented in Tweedie & Baayen, 1998, Eq. 16) is calculated by:
K = 10^4 \times
\left[ -\frac{1}{N} + \sum_{i=1}^{V} f_v(i, N) \left( \frac{i}{N} \right)^2 \right]
"I"
:Yule's I (Yule, 1944) is calculated by:
I = \frac{V^2}{M_2 - V}
M_2 = \sum_{i=1}^{V} i^2 * f_v(i, N)
"D"
:Simpson's D (Simpson 1949, as presented in Tweedie & Baayen, 1998, Eq. 17) is calculated by:
D = \sum_{i=1}^{V} f_v(i, N) \frac{i}{N} \frac{i-1}{N-1}
"Vm"
:Herdan's V_m
(Herdan 1955, as presented in
Tweedie & Baayen, 1998, Eq. 18) is calculated by:
V_m = \sqrt{ \sum_{i=1}^{V} f_v(i, N) (i/N)^2 - \frac{i}{V} }
"Maas"
:Maas' indices (a
, \log{V_0}
&
\log{}_{e}{V_0}
):
a^2 = \frac{\log{N} -
\log{V}}{\log{N}^2}
\log{V_0} =
\frac{\log{V}}{\sqrt{1 - \frac{\log{V}}{\log{N}}^2}}
The measure was derived from a formula by
Mueller (1969, as cited in Maas, 1972). \log{}_{e}{V_0}
is equivalent
to \log{V_0}
, only with e
as the base for the logarithms. Also
calculated are a
, \log{V_0}
(both not the same as before) and
V'
as measures of relative vocabulary growth while the text
progresses. To calculate these measures, the first half of the text and the
full text will be examined (see Maas, 1972, p. 67 ff. for details). Note:
for the current method (for a dfm) there is no computation on separate
halves of the text.
"MATTR"
:The Moving-Average Type-Token Ratio (Covington & McFall, 2010) calculates TTRs for a moving window of tokens from the first to the last token, computing a TTR for each window. The MATTR is the mean of the TTRs of each window.
"MSTTR"
:Mean Segmental Type-Token Ratio (sometimes referred to as Split TTR) splits the tokens into segments of the given size, TTR for each segment is calculated and the mean of these values returned. When this value is < 1.0, it splits the tokens into equal, non-overlapping sections of that size. When this value is > 1, it defines the segments as windows of that size. Tokens at the end which do not make a full segment are ignored.
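The following sketch recomputes a few of the measures above by hand and compares them with textstat_lexdiv(). It assumes the quanteda and quanteda.textstats packages and uses the built-in data_char_ukimmig2010 texts; the loops are illustrative and are not the package's internal implementation. The removal options are switched off in the comparison call because the tokens are already preprocessed.

library("quanteda")
library("quanteda.textstats")

# illustrative manual computation; not the package's internal code
toks <- tokens(tolower(data_char_ukimmig2010[1]),
               remove_punct = TRUE, remove_numbers = TRUE, remove_symbols = TRUE)
tok   <- as.character(toks)
freqs <- table(tok)                 # type frequencies
N     <- sum(freqs)
V     <- length(freqs)
fv    <- table(freqs)               # frequency spectrum f_v(i, N)
i     <- as.integer(names(fv))

ttr_manual <- V / N                                   # TTR = V / N
k_manual   <- 1e4 * (-1 / N + sum(fv * (i / N)^2))    # Yule's K

# MATTR: mean TTR over a moving window of 100 tokens, advanced one token at a time
window <- 100
mattr_manual <- mean(sapply(seq_len(length(tok) - window + 1), function(s)
    length(unique(tok[s:(s + window - 1)])) / window))

# MSTTR: mean TTR over non-overlapping segments of 100 tokens (incomplete tail dropped)
segment <- 100
nseg <- floor(length(tok) / segment)
msttr_manual <- mean(sapply(seq_len(nseg), function(s)
    length(unique(tok[((s - 1) * segment + 1):(s * segment)])) / segment))

c(TTR = ttr_manual, K = k_manual, MATTR = mattr_manual, MSTTR = msttr_manual)

textstat_lexdiv(toks, measure = c("TTR", "K", "MATTR", "MSTTR"),
                remove_numbers = FALSE, remove_punct = FALSE, remove_symbols = FALSE,
                MATTR_window = 100, MSTTR_segment = 100)

Up to these preprocessing choices, the hand-computed values and the package output should closely agree.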
A data.frame of documents and their lexical diversity scores.
Kenneth Benoit and Jiong Wei Lua. Many of the formulas have been reimplemented from functions written by Meik Michalke in the koRpus package.
Covington, M.A. & McFall, J.D. (2010). Cutting the Gordian Knot: The Moving-Average Type-Token Ratio (MATTR). Journal of Quantitative Linguistics, 17(2), 94–100. https://doi.org/10.1080/09296171003643098
Herdan, G. (1955). A New Derivation and Interpretation of Yule's 'Characteristic' K. Zeitschrift für angewandte Mathematik und Physik, 6(4): 332–334.
Maas, H.D. (1972). Über den Zusammenhang zwischen Wortschatzumfang und Länge eines Textes. Zeitschrift für Literaturwissenschaft und Linguistik, 2(8), 73–96.
McCarthy, P.M. & Jarvis, S. (2007). vocd: A Theoretical and Empirical Evaluation. Language Testing, 24(4), 459–488. https://doi.org/10.1177/0265532207080767
McCarthy, P.M. & Jarvis, S. (2010). MTLD, vocd-D, and HD-D: A Validation Study of Sophisticated Approaches to Lexical Diversity Assessment. Behavior Research Methods, 42(2), 381–392.
Michalke, M. (2014). koRpus: An R Package for Text Analysis (Version 0.05-4). Available from https://reaktanz.de/?c=hacking&s=koRpus.
Simpson, E.H. (1949). Measurement of Diversity. Nature, 163: 688. https://doi.org/10.1038/163688a0
Tweedie, F.J. & Baayen, R.H. (1998). How Variable May a Constant Be? Measures of Lexical Richness in Perspective. Computers and the Humanities, 32(5), 323–352. https://doi.org/10.1023/A:1001749303137
Yule, G.U. (1944). The Statistical Study of Literary Vocabulary. Cambridge: Cambridge University Press.
library("quanteda")
library("quanteda.textstats")   # provides textstat_lexdiv()
txt <- c("Anyway, like I was sayin', shrimp is the fruit of the sea. You can
          barbecue it, boil it, broil it, bake it, saute it.",
         "There's shrimp-kabobs, shrimp creole, shrimp gumbo. Pan fried, deep fried,
          stir-fried. There's pineapple shrimp, lemon shrimp, coconut shrimp, pepper
          shrimp, shrimp soup, shrimp stew, shrimp salad, shrimp and potatoes, shrimp
          burger, shrimp sandwich.")
tokens(txt) %>%
    textstat_lexdiv(measure = c("TTR", "CTTR", "K"))
dfm(tokens(txt)) %>%
    textstat_lexdiv(measure = c("TTR", "CTTR", "K"))
toks <- tokens(corpus_subset(data_corpus_inaugural, Year > 2000))
textstat_lexdiv(toks, c("CTTR", "TTR", "MATTR"), MATTR_window = 100)
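Two further illustrative calls, continuing with the toks object created above (not part of the original examples): measure = "all" computes every implemented index at once, and MSTTR_segment adjusts the segment size.

# all implemented measures at once
textstat_lexdiv(toks, measure = "all")

# a smaller MSTTR segment size
textstat_lexdiv(toks, measure = "MSTTR", MSTTR_segment = 50)

# dfm input; token order is lost in a dfm, so order-based measures such as
# MATTR and MSTTR are best computed from a tokens object
textstat_lexdiv(dfm(toks), measure = c("TTR", "K", "D"))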