ConversationAlign_Introduction

knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)

A good conversation is a cooperative endeavor in which both parties modify the form and content of their own production to match each other, a phenomenon known as alignment. People align across many dimensions, including the words they choose and the affective tenor of their prosody. ConversationAlign measures dynamics of lexical use between conversation partners across more than 40 semantic, lexical, phonological, and affective dimensions. Before launching into your analyses, there are some important caveats to consider.

Caveats for Using ConversationAlign

Prepare your Transcripts Outside of the Package

Installation

Install and load the development version of ConversationAlign from GitHub using the devtools package.
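
A minimal installation sketch appears below. The GitHub account shown is a placeholder, not the package's actual repository owner, so substitute the repository path listed on the package's GitHub page.

# Install devtools first if you do not already have it
install.packages("devtools")
# "GITHUB-ACCOUNT" is a placeholder -- replace it with the account that hosts ConversationAlign
devtools::install_github("GITHUB-ACCOUNT/ConversationAlign")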

# Load ConversationAlign
library(ConversationAlign)

Calibration Transcripts Included in ConversationAlign

ConversationAlign contains two sample conversation transcripts that are pre-loaded when you call the package:

MaronGross_2013: Interview transcript of Marc Maron and Terry Gross on NPR (2013).

NurseryRhymes: Three nursery rhymes looping the same phrases, formatted as conversations, cleaned, and aligned to illustrate how the formatting pipeline reshapes conversation transcripts.

NurseryRhymes

knitr::kable(head(NurseryRhymes, 20), format = "simple")
str(NurseryRhymes)

Maron-Gross Interview

Here's one from a 2013 NPR interview (USA) between Marc Maron and Terry Gross, titled Marc Maron: A Life Fueled By 'Panic And Dread'.

knitr::kable(head(MaronGross_2013, 20), format = "simple")
str(MaronGross_2013)

Caveat emptor

Any analysis of language comes with assumptions and potential bias. For example, some researchers care about morphemes and grammatical elements such as 'the', 'a', and 'and'. The default for ConversationAlign is to omit these as stopwords and to average across all open-class words (e.g., nouns, verbs) in each turn by interlocutor. There are specific cases where these defaults can lead you astray. Here are some things to consider:

  1. Stopwords: ConversationAlign omits stopwords by default, applying a customized stopword list, Temple_Stopwords25. CLICK HERE to inspect the list. This stopword list includes greetings, idioms, filler words, numerals, and pronouns.

  2. Lemmatization: The package lemmatizes your language transcripts by default. Lemmatization transforms inflected forms (e.g., standing, stands) into their root or dictionary entry (e.g., stand). This helps when yoking offline values (e.g., happiness, concreteness) to each word and also accomplishes what NLP folks refer to as 'term aggregation'. However, sometimes you might NOT want to lemmatize. You can easily change this option by passing the argument lemmatize=FALSE to the clean_dyads function (see the sketch after this list).

  3. Sample Size Issue 1: Exchange Count: The program derives correlations and AUC for each dyad as metrics of alignment. For very brief conversations (<30 turns), these estimates are likely to be unstable and unreliable (see the illustrative sketch after this list).

  4. Sample Size Issue 2: Matching to the Lookup Database: ConversationAlign works by yoking values from a lookup database to each word in your language transcript. Some variables have norms covering a large share of English words; other variables (e.g., age of acquisition) only cover about 30,000 words. When a word in your transcript does not have a 'match' in the lookup database, ConversationAlign returns an NA, which does not enter the average for that interlocutor and turn. This can be dangerous when there are many missing values. Beware!

  5. Compositionality: ConversationAlign is a caveman when it comes to linguistic complexity. It matches a value to each word as if that word were an island. Phenomena such as polysemy (e.g., bank) and the modulation of one word by an intensifier (e.g., very terrible) are not handled. This is a problem for many of the affective measures but not for lexical variables such as word length.
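
To make caveat #2 concrete, here is a minimal sketch of turning lemmatization off when cleaning one of the bundled transcripts. The lemmatize argument is the one named above; passing the raw transcript as the first argument is an assumption, so consult ?clean_dyads for the exact call signature in your installed version.

# Sketch only: clean the bundled Maron-Gross transcript without lemmatizing.
# The lemmatize flag comes from the text above; supplying the transcript as the
# first argument is an assumption -- check ?clean_dyads for the exact arguments.
maron_nolemma <- clean_dyads(MaronGross_2013, lemmatize = FALSE)
str(maron_nolemma)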
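
Caveat #3 can be illustrated without the package at all. The toy simulation below (made-up data, not ConversationAlign code) shows how alignment correlations computed from roughly ten exchanges bounce around far more than those computed from a hundred.

# Illustration only: correlations estimated from few turns are noisy.
set.seed(1)
cor_spread <- function(n_turns, reps = 1000) {
  sims <- replicate(reps, {
    speaker_a <- rnorm(n_turns)   # e.g., mean concreteness per turn, speaker A
    speaker_b <- rnorm(n_turns)   # same dimension per turn, speaker B
    cor(speaker_a, speaker_b)     # turn-by-turn alignment estimate
  })
  round(quantile(sims, c(0.025, 0.975)), 2)
}
cor_spread(10)   # very short conversation: wide spread around the true value of 0
cor_spread(100)  # longer conversation: much tighter spread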

Background and Supporting Materials

  1. Preprint
    Our PsyArXiv preprint describing the method(s) in greater detail can be cited as: Sacks, B., Ulichney, V., Duncan, A., Helion, C., Weinstein, S., Giovannetti, T., ... Reilly, J. (2025, March 12). ConversationAlign: Open-Source Software for Analyzing Patterns of Lexical Use and Alignment in Conversation Transcripts. Click Here to read our preprint. It was recently invited for revision at Behavior Research Methods. We will update this entry when/if it is eventually accepted there!

  2. Methods for creating internal lookup database
    ConversationAlign contains a large, internal lexical lookup_database. Click Here to see how we created this by merging other offline psycholinguistic databases into one.

  3. Variable Key for ConversationAlign
    ConversationAlign currently allows users to compute alignment dynamics across >40 different lexical, affective, and semantic dimensions. Click Here to link to a variable key.



