supunsup: Supervised/unsupervised phonetic adaptation (Experiment 1)


Description

The whole dataset, parsed by load_and_parse.

Usage

supunsup
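
A minimal sketch of loading the data, assuming the package is installed from GitHub and attaches under the name supunsup (the exact package name is an assumption; the repository is kleinschmidt/phonetic-sup-unsup):

# remotes::install_github("kleinschmidt/phonetic-sup-unsup")  # assumed install path
library(supunsup)     # assumed package name

dim(supunsup)         # 89,244 rows x 28 columns, per the Format section
str(supunsup)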

Format

A data frame with 89,244 observations of 28 variables:

subject

Anonymized MTurk worker identifier

assignmentid

Unique ID for this assignment

hitid

The (non-unique) ID of the HIT for this assignment

experiment

Name of the experiment: 'supervised-unsupervised-vot'

accepttime

Time assignment was initially accepted by subject

submittime

Time assignment was finished and submitted.

block

The type of block for this dataset ('visworld')

supCond

Factor: The supervision condition, one of 'supervised', 'unsupervised', or 'mixed'.

bvotCond

Factor: the location of the lower cluster of the bimodal input in ms VOT, either '0', '10', '20', '30', or '40'.

trial

Trial number, starting at 0.

stim

Numeric: Unique ID of the stimulus file played on that trial.

stimfn

Factor: Filename of the stimulus.

wordclass

Factor: Minimal pair for this stimulus, one of 'BEACH', 'BEAK', or 'BEES'.

respCategory

The intended (correct) category of the response, not the listener's actual choice (that's respCat).

trialSupCond

Factor: Whether this trial was 'supervised' (labeled) or 'unsupervised' (unlabeled). Use labeled instead.

targetId

The name of the image that the listener clicked on.

targetPos

The location of the clicked image.

clickx

Horizontal (x) coordinate of the response click, relative to the left edge of the experiment window.

clicky

Vertical (y) coordinate of the response click, relative to the top edge of the experiment window.

tstart

System time (in ms) of trial-start click.

tend

System time (in ms) of response click.

rt

Response time, in ms (difference between tstart and tend).

respCat

Factor: The category ('b' or 'p') of the listener's response (based on targetId).

trueCat

Factor: The intended ('correct') category ('b' or 'p'). Depends on the input distribution (bvotCond).

vot

Numeric: The VOT of the stimulus for this trial (ms).

labeled

Factor: Whether this trial was 'labeled' or 'unlabeled'.
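
As a quick orientation to the key columns documented above, a hedged base-R sketch (column names as listed in this Format section):

# A handful of trials, restricted to the main analysis columns
head(supunsup[, c("subject", "trial", "supCond", "bvotCond",
                  "vot", "respCat", "trueCat", "rt")])

# Each stimulus file with its VOT and minimal pair
unique(supunsup[, c("stimfn", "wordclass", "vot")])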

Details

This data comes from a phonetic adaptation experiment, where each listener hears a different distribution of VOTs and classifies each one as a /b/ or a /p/. The VOTs are presented in the form of /b/-/p/ minimal pair words (beach/peach, bees/peas, and beak/peak). On each trial, the subject hears a word from one of these pairs with a VOT drawn from their specific distribution, and clicks on a picture to indicate the word they heard.

Subjects excluded from analysis are described in excludes.

There are two crossed conditions (see the code sketch after the list):

Supervision

For subjects in the 'unsupervised' condition, all trials were unlabeled, and either the /b/ or /p/ response was appropriate. In the 'supervised' and 'mixed' conditions, some trials were labeled. On these trials, only one of the response pictures matched the end of the word, effectively labeling the VOT. For instance, on an unlabeled beach/peach trial, the subject could click on a picture of a beach or a peach. On a labeled beach/peach trial, there might be a picture of a beach and a picture of a peak, which labeled the VOT of the word as a /b/. The 'supervised' and 'mixed' conditions differed only in how the labeled trials were distributed over the different VOT values a subject heard.

VOT distribution

Each subject heard one of five distributions of VOTs. All the distributions are bimodal, with the means separated by 40ms VOT, and differ only in the location: -10/30ms, 0/40ms, 10/50ms, 20/60ms, or 30/70ms.
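
A hedged sketch of how this crossed design shows up in the documented columns supCond, bvotCond, vot, and labeled (the dplyr calls are illustrative, not part of the package):

library(dplyr)

# Trials per crossed supervision x VOT-shift condition
supunsup %>% count(supCond, bvotCond)

# Range of VOTs each shift condition hears
supunsup %>%
  group_by(bvotCond) %>%
  summarise(min_vot = min(vot), max_vot = max(vot))

# Proportion of labeled trials in each supervision condition
supunsup %>%
  group_by(supCond) %>%
  summarise(prop_labeled = mean(labeled == "labeled"))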

There are two different outcome variables (see the analysis sketch after the list):

Classification

Listeners' percepts are measured by which of the two possible pictures they clicked on. Variable respCat is a two-level factor (b or p), and respP codes this as a binary (0 or 1) value suitable for logistic regression.

Reaction time

Listeners initiated each trial by clicking on a "light" in the center of the screen. The sound file played immediately after this click. Reaction time is coded in rt as the number of milliseconds between the start of the sound file and the response (click on picture).
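
A hedged sketch of both analyses (column names as documented above; respP is recomputed from respCat here so the example does not assume it is present):

library(dplyr)

dat <- supunsup %>%
  mutate(respP = as.numeric(respCat == "p"))

# Classification: probability of a /p/ response as a function of VOT,
# allowing the category boundary to vary by VOT-shift condition
fit <- glm(respP ~ vot * bvotCond, family = binomial(), data = dat)
summary(fit)

# Reaction time: median RT by supervision condition and trial type
dat %>%
  group_by(supCond, labeled) %>%
  summarise(median_rt = median(rt), .groups = "drop")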

