**kmeanstext** is a collection of optimized tools for clustering text data via kmeans clustering. There are many great R clustering tools for locating topics within documents; kmeans clustering is a popular method for topic extraction. This package builds upon my **hclustext** package, extending that framework to kmeans. One major difference between the two techniques is that with hierarchical clustering the number of topics is specified after the model has been fitted, whereas kmeans requires the k topics to be specified before the model is fit. Additionally, because kmeans uses a random start seed, the results may vary each time a model is fit. Finally, Euclidean distance is typically used in a kmeans algorithm, whereas any distance metric may be passed to a hierarchical clustering fit.
The general idea is that we turn the documents into a matrix of words (a document-term matrix). After this we weight the terms by importance using tf-idf, which helps the more salient words rise to the top. The user then selects k clusters (topics) and runs the model. The model iteratively shuffles centers and assigns documents to clusters based on the minimal distance between a document and a center. Each run uses the recalculated mean centroid of the prior clusters as the starting point for the current iteration's centroids. Once the centroids have stabilized, the model has converged at k topics. The user may then extract the clusters from the fit, providing a grouping of documents with similar important text features.
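To make the mechanics concrete, here is a minimal, package-free sketch of that pipeline on a toy corpus: a hand-rolled document-term matrix, tf-idf weighting, and base R's `kmeans`. The corpus and helper code are illustrative only; **kmeanstext** wraps and optimizes these steps for you.

```r
## Toy corpus (illustrative only)
docs <- c("dogs chase cats", "cats chase mice", "stocks fell sharply",
          "markets and stocks rallied")

## Document-term matrix of raw counts
terms <- sort(unique(unlist(strsplit(docs, " "))))
dtm <- t(vapply(docs, function(d) {
    as.numeric(table(factor(strsplit(d, " ")[[1]], levels = terms)))
}, numeric(length(terms))))
colnames(dtm) <- terms

## tf-idf weighting: term frequency * log(inverse document frequency)
tf    <- dtm / rowSums(dtm)
idf   <- log(nrow(dtm) / colSums(dtm > 0))
tfidf <- sweep(tf, 2, idf, `*`)

## kmeans with k = 2 topics; the assignments group similar documents
set.seed(10)
fit <- kmeans(tfidf, centers = 2)
split(docs, fit$cluster)
```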
The main functions, task category, & descriptions are summarized in the table below:
| Function         | Category       | Description                                                            |
|------------------|----------------|------------------------------------------------------------------------|
| `data_store`     | data structure | **kmeanstext**'s data structure (list of dtm + text)                   |
| `kmeans_cluster` | cluster fit    | Fits a kmeans cluster model                                            |
| `assign_cluster` | assignment     | Extract clusters for document/text element                             |
| `get_text`       | extraction     | Get text from various **kmeanstext** objects                           |
| `get_dtm`        | extraction     | Get `tm::DocumentTermMatrix` from various **kmeanstext** objects       |
| `get_removed`    | extraction     | Get removed text elements from various **kmeanstext** objects          |
| `get_terms`      | extraction     | Get clustered weighted important terms from an `assign_cluster` object |
| `get_documents`  | extraction     | Get clustered documents from an `assign_cluster` object                |
To download the development version of kmeanstext:
Download the zip ball or tar ball, decompress and run `R CMD INSTALL` on it, or use the **pacman** package to install the development version:
if (!require("pacman")) install.packages("pacman") pacman::p_load_gh( "trinker/textshape", "trinker/gofastr", "trinker/termco", "trinker/hclusttext", "trinker/kmeanstext" )
You are welcome to:
* submit suggestions and bug-reports at: <https://github.com/trinker/kmeanstext/issues>
* send a pull request on: <https://github.com/trinker/kmeanstext/>
* compose a friendly e-mail to: <tyler.rinker@gmail.com>
if (!require("pacman")) install.packages("pacman") pacman::p_load(kmeanstext, dplyr, textshape, ggplot2, tidyr) data(presidential_debates_2012)
The data structure for **kmeanstext** is very specific. The `data_store` function produces a `DocumentTermMatrix` which maps to the original text. The empty/removed documents are tracked within this data structure, making subsequent calls to cluster the original documents and produce weighted important terms more robust. Making the `data_store` object is the first step of the analysis.
We can give the `DocumentTermMatrix` rownames via the `doc.names` argument. If these names are not unique they will be combined into a single document, as seen below. Also, if you want to do stemming, minimum character length, stopword removal, or the like, this is when/where it's done (see the sketch after the next code block).
```r
ds <- with(
    presidential_debates_2012,
    data_store(dialogue, doc.names = paste(person, time, sep = "_"))
)
ds
```
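As noted above, preprocessing happens when the `data_store` object is created. The sketch below is hypothetical: the argument names (`stopwords`, `min.char`, `stem`) are assumed to mirror **hclustext**'s `data_store` and may differ; consult `?data_store` for the actual signature.

```r
## HYPOTHETICAL: argument names assumed from hclustext's data_store;
## verify with ?data_store before running.
ds2 <- with(
    presidential_debates_2012,
    data_store(
        dialogue,
        doc.names = paste(person, time, sep = "_"),
        stopwords = tm::stopwords("english"),  # drop common function words
        min.char  = 4,                         # drop terms shorter than 4 characters
        stem      = TRUE                       # stem terms ("running" -> "run")
    )
)
```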
Next we can fit a kmeans cluster model to the `data_store` object via `kmeans_cluster`. Note that, unlike **hclustext**'s `hierarchical_cluster`, we must provide the `k` (number of topics) to the model.
By default `kmeans_cluster` uses an approximation of `k` based on Can & Ozkarahan's (1990) formula $(m \times n)/t$, where $m$ and $n$ are the dimensions of the matrix and $t$ is the number of non-zero elements in matrix $A$.
There are other means of determining `k` as well. See Ben Marwick's StackOverflow post for a detailed exploration.
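One widely used heuristic covered there is the "elbow" method: fit kmeans over a range of k values and look for the bend in the total within-cluster sum of squares. A minimal base R sketch on a toy matrix (with real data, substitute the weighted document-term matrix):

```r
## Elbow method: plot total within-cluster sum of squares for k = 1..10;
## the k at the bend ("elbow") of the curve is a reasonable choice.
set.seed(100)
x <- matrix(rnorm(300), nrow = 30)   # toy stand-in: 30 documents x 10 features
wss <- vapply(1:10, function(k) {
    kmeans(x, centers = k, nstart = 25)$tot.withinss
}, numeric(1))
plot(1:10, wss, type = "b",
     xlab = "Number of clusters (k)",
     ylab = "Total within-cluster sum of squares")
```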
```r
set.seed(100)
myfit <- kmeans_cluster(ds, k = 6)
str(myfit)
```
The `assign_cluster` function allows the user to extract the clusters and the documents they are assigned to. Unlike **hclustext**'s `assign_cluster`, the **kmeanstext** version has no `k` argument and merely extracts the cluster assignments from the model.
```r
ca <- assign_cluster(myfit)
ca
```
To check the number of documents loading on a cluster there is a `summary` method for `assign_cluster` which provides a descending data frame of clusters and counts. Additionally, a horizontal bar plot shows the document loadings on each cluster.
```r
summary(ca)
```
The user can grab the texts from the original documents grouped by cluster using the `get_text` function. Here I demo a 40-character substring of the document texts.
```r
get_text(ca) %>%
    lapply(substring, 1, 40)
```
As with many topic clustering techniques, it is useful to get the top salient terms from the model. The `get_terms` function uses the `centers` from the `kmeans` output. Notice the absence of clusters 1 & 2. This is a result of lower weights (more diverse term use) across these clusters.
```r
get_terms(ca, .008)
```
The `min.weight` hyperparameter sets the lower bound on the `centers` value to accept. If you don't get any terms you may want to lower this. Likewise, this parameter can be raised (and `nrow` lowered) to eliminate noise.
```r
get_terms(ca, .002, nrow = 10)
```
Here I plot the clusters, terms, and documents (grouping variables) together as a combined heatmap. This can be useful for viewing & comparing what documents are clustering together in the context of the cluster's salient terms. This example also shows how to use the cluster terms as a lookup key to extract probable salient terms for a given document.
```r
key <- data_frame(
    cluster = 1:6,
    labs = get_terms(ca, .002) %>%
        bind_list("cluster") %>%
        select(-weight) %>%
        group_by(cluster) %>%
        slice(1:10) %>%
        na.omit() %>%
        group_by(cluster) %>%
        summarize(term = paste(term, collapse = ", ")) %>%
        apply(., 1, paste, collapse = ": ")
)

ca %>%
    bind_vector("id", "cluster") %>%
    separate(id, c("person", "time"), sep = "_") %>%
    tbl_df() %>%
    left_join(key) %>%
    mutate(n = 1) %>%
    mutate(labs = factor(labs, levels = rev(key[["labs"]]))) %>%
    unite("time_person", time, person, sep = "\n") %>%
    select(-cluster) %>%
    complete(time_person, labs) %>%
    mutate(n = factor(ifelse(is.na(n), FALSE, TRUE))) %>%
    ggplot(aes(time_person, labs, fill = n)) +
        geom_tile() +
        scale_fill_manual(values = c("grey90", "red"), guide = FALSE) +
        labs(x = NULL, y = NULL)
```
The `get_documents` function grabs the documents associated with a particular cluster. This is most useful in cases where the number of documents is small and they have been given names.
```r
get_documents(ca)
```
I like working in a chain. In the setup below we work within a **magrittr** pipeline to fit a model, select clusters, and examine the results. In this example I do not condense the 2012 Presidential Debates data by speaker and time, instead leaving every sentence as a separate document. On my machine the initial `data_store` and model fit take ~35 seconds to run. Note that I restrict the display (for texts and terms) to a random 5 of the clusters for the sake of space.
```r
.tic <- Sys.time()

myfit2 <- presidential_debates_2012 %>%
    with(data_store(dialogue)) %>%
    kmeans_cluster(k = 100)

difftime(Sys.time(), .tic)

## View Document Loadings
ca2 <- assign_cluster(myfit2)
summary(ca2) %>%
    head(12)

## Split Text into Clusters
set.seed(3)
inds <- sort(sample.int(100, 5))

get_text(ca2)[inds] %>%
    lapply(head, 10)

## Get Associated Terms
get_terms(ca2, term.cutoff = .07)[inds]
```