competence — R Documentation
Assesses competence perceptions in self-presentational natural language.
This function is one of the two main functions of the warmthcompetence package.
It takes an N-length vector of self-presentational text documents and an N-length vector of document IDs, and returns a competence perception score representing how much competence
others attribute to the individual who wrote the self-presentational text.
The metrics argument also allows users to return the raw features used to assess competence perceptions.
competence(text, ID = NULL, metrics = "scores")
text: character. A vector of texts, each of which will be assessed for competence.

ID: character. A vector of IDs that will be used to identify the competence scores.

metrics: character. Determines which metrics to return. Users can return the competence scores (metrics = "scores"), the features that underlie the competence scores (metrics = "features"), or both the competence scores and the features (metrics = "all"). The default is to return the competence scores.
Some features depend on the spacyr package, which requires a separate installation of spaCy in Python.
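A typical spacyr setup is sketched below. spacy_install() and spacy_initialize() are functions from the spacyr package; whether they need to be run manually before calling competence() may depend on your installation.

install.packages("spacyr")                           # R wrapper around Python's spaCy
spacyr::spacy_install()                              # one-time: install spaCy into a Python environment
spacyr::spacy_initialize(model = "en_core_web_sm")   # start the spaCy backend for the current session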
By default, the function returns a data.frame in which each row contains a document identifier and the corresponding competence score. Users can customize what is returned through the metrics argument. If metrics = "features", a data.frame of competence features is returned, with one row per document. If metrics = "all", both the competence scores and the features are returned in a single data.frame.
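For illustration, a minimal sketch of the three metrics options is shown below. The texts and IDs are hypothetical, and the exact column names of the returned data.frame depend on the package version.

bios <- c("I manage complex projects and consistently deliver results.",
          "I am still figuring out what I want to do next.")
ids <- c("doc1", "doc2")
competence(bios, ids, metrics = "scores")    # competence scores only (default)
competence(bios, ids, metrics = "features")  # raw linguistic features, one row per document
competence(bios, ids, metrics = "all")       # scores and features together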
Benoit, K., Watanabe, K., Wang, H., Nulty, P., Obeng, A., Müller, S., & Matsuo, A. (2018). quanteda: An R package for the quantitative analysis of textual data. Journal of Open Source Software, 3(30), 774. doi:10.21105/joss.00774, https://quanteda.io.

Buchanan, E. M., Valentine, K. D., & Maxwell, N. P. (2018). LAB: Linguistic Annotated Bibliography - Shiny Application. Retrieved from http://aggieerin.com/shiny/lab_table.

Rinker, T. W. (2018). lexicon: Lexicon Data version 1.2.1. http://github.com/trinker/lexicon

Rinker, T. W. (2019). sentimentr: Calculate Text Polarity Sentiment version 2.7.1. http://github.com/trinker/sentimentr

Yeomans, M., Kantor, A., & Tingley, D. (2018). Detecting Politeness in Natural Language. The R Journal, 10(2), 489-502.
data("example_data")
competence_scores <- competence(example_data$bio, metrics = "all")
example_data$competence_predictions <- competence_scores$competence_predictions
competence_model1 <- lm(RA_comp_AVG ~ competence_predictions, data = example_data)
summary(competence_model1)
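As an additional check, one could visualize the fit of the model above. This is only a sketch and assumes the example code ran successfully.

plot(example_data$competence_predictions, example_data$RA_comp_AVG,
     xlab = "Predicted competence", ylab = "Rater-averaged competence (RA_comp_AVG)")
abline(competence_model1)  # add the fitted regression line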