tokenize: Tokenization and transliteration of character strings based...

Description Usage Arguments Details Value Note Author(s) References See Also Examples

View source: R/tokenize.R


Description

To process strings it is often very useful to tokenize them into graphemes (i.e. functional units of the orthography), and possibly to replace those graphemes by other symbols to harmonize different orthographic representations ('transcription/transliteration'). As a quick and easy way to specify, save, and document the decisions taken for the tokenization, we propose using an orthography profile.

This function is the main function to produce, test and apply orthography profiles.


Usage

tokenize(strings, profile = NULL, transliterate = NULL,
  method = "global", ordering = c("size", "context", "reverse"),
  sep = " ", sep.replace = NULL, missing = "\u2047", normalize = "NFC",
  regex = FALSE, silent = FALSE,
  file.out = NULL)



Arguments

strings

Vector of strings to be tokenized.


profile

Orthography profile specifying the graphemes for the tokenization, and possibly any replacements of the available graphemes. Can be a reference to a file or an R object. If NULL, then the orthography profile will be created on the fly using the defaults of write.profile.


transliterate

Default NULL, meaning no transliteration is to be performed. Alternatively, specify the name of the column in the orthography profile that should be used for replacement.


method

Method to be used for parsing the strings into graphemes. Currently two options are implemented: global and linear. See Details for further explanation.


ordering

Method for ordering. Currently four different methods are implemented, which can be combined (see Details below): size, context, reverse and frequency. Use NULL to prevent ordering and use the top-to-bottom order as specified in the orthography profile.


sep

Separator to be inserted between graphemes. Defaults to space. This function assumes that the separator specified here does not occur in the data. If it does, unexpected things might happen. Consider removing the chosen separator from your strings first, e.g. by using gsub.
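For example, a pre-existing separator can be stripped from the input beforehand; a minimal sketch using base R (the strings here are invented):

```r
# remove the default separator (space) from the strings before tokenization
strings <- c("a b c", "de f")
gsub(" ", "", strings, fixed = TRUE)
# "abc" "def"
```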


sep.replace

Sometimes, the chosen separator (see above) occurs in the strings to be parsed. This is technically not a problem, but the result might show unexpected sequences. When sep.replace is specified, this replacement is inserted in the string at those places where the sep marker occurs. Typical usage in linguistics would be sep = " ", sep.replace = "#", adding spaces between graphemes and replacing spaces in the input string by hashes in the output string.


missing

Character to be inserted during transliteration when no transliteration is specified for a grapheme. Defaults to DOUBLE QUESTION MARK (U+2047). Change this when this character appears in the input strings.


normalize

Which normalization to use before tokenization, defaults to "NFC". The other option is "NFD". Any other input will result in no normalization being performed.


regex

Logical: when regex = FALSE, the matching of graphemes is done internally by exact matching, i.e. without using regular expressions. When regex = TRUE, ICU-style regular expressions (see stri_search_regex) are used, so any reserved characters have to be escaped in the orthography profile. Specifically, add a backslash "\" before any occurrence of the characters [](){}|+*.-!?^$\ in your profile (except, of course, when these characters are used in their regular-expression meaning).

Note that this parameter also influences whether contexts should be considered in the tokenization (internally, contextual searching uses regular expressions). By default, when regex = FALSE, context is ignored. If regex = TRUE, then the function checks whether there are columns called Left (for the left context) and Right (for the right context), and optionally a column called Class (for the specification of grapheme classes) in the orthography profile. These are hard-coded column names, so please adapt your orthography profile accordingly. The columns Left and Right allow for regular expressions to specify context.
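For illustration, a minimal contextual profile might look as follows (a hypothetical sketch: the graphemes and contexts are invented, but Grapheme and Right are the hard-coded column names described above, and Trans is an arbitrary transliteration column):

```r
# hypothetical sketch: "n" is replaced by "N" only when directly followed by "g"
profile <- cbind(
    Grapheme = c("n", "n", "g", "a"),
    Right    = c("g",  "",  "",  ""),
    Trans    = c("N", "n", "g", "a"))
tokenize("ngana", profile, transliterate = "Trans", regex = TRUE)
```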


silent

Logical: by default, missing characters in the strings are reported with a warning. Use silent = TRUE to suppress these warnings.


file.out

Filename for results to be written. No suffix should be specified, as various files with different suffixes are produced (see Details below). When file.out is specified, then the data is written to disk AND the R dataframe is returned invisibly.


Details

Given a set of graphemes, there are at least two different methods to tokenize strings. The first is called global here: this approach takes the first grapheme, matches this grapheme globally at all places in the string, and then turns to the next grapheme. The other approach is called linear here: this approach walks through the string from left to right. At the first character it checks all graphemes for a match, then walks further to the end of the match and starts again. In some special cases these two methods can lead to different results (see Examples).

The ordering of the lines in the orthography profile is of crucial importance, and different orderings will lead to radically different results. To simply use the top-to-bottom ordering as specified in the profile, use ordering = NULL. Currently, there are four different ordering strategies implemented: size, context, reverse and frequency. By specifying more than one in a vector, these orderings are used to break ties, e.g. c("size", "frequency", "reverse") will first order by size, and for those lines with the same size, it will order by frequency. Lines that are still tied (i.e. they have the same size and frequency) will be put in the reverse of the order attested in the profile. Reversing the order can be useful, because hand-written profiles tend to put general rules before specific rules, which mostly should be applied in reverse order.
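For instance, the tie-breaking behaviour described above can be requested as follows (a sketch, using the ad-hoc vector-profile format also used in the Examples):

```r
# order graphemes by size; break ties by frequency, then by reverse profile order
tokenize("aaa", c("a", "aa"), ordering = c("size", "frequency", "reverse"))
```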


Value

Without specification of file.out, the function tokenize will return a list of four:


strings

a dataframe with the original and the tokenized/transliterated strings


profile

a dataframe with the graphemes with added frequencies. The dataframe is ordered according to the order that resulted from the specifications in ordering.


errors

a dataframe with all original strings that contain unmatched parts.


missing

a dataframe with the graphemes that are missing from the original orthography profile, as indicated in the errors. Note that the report of missing characters currently does not lead to correct results for transliterated strings.

When file.out is specified, these four tables will be written to different tab-separated files (with header lines): file_strings.tsv for the strings, file_profile.tsv for the orthography profile, file_errors.tsv for the strings that have unidentifiable parts, and file_missing.tsv for the graphemes that seem to be missing. When there is nothing missing, then no file for the missing graphemes is produced.


Note

When regex = TRUE, regular expressions are acceptable in the columns 'Grapheme', 'Left' and 'Right'. Backreferences in the transliteration column are not possible (yet). When regular expressions are allowed, all literal uses of special regex characters have to be escaped! Any literal occurrence of the following characters then has to be preceded by a backslash \ .

Note that overlapping matching does not (yet) work with regular expressions. That means that, for example, "aa" is only found once in "aaa". In some special cases this might lead to problems that may have to be explicitly specified in the profile, e.g. a grapheme "aa" with a left context "a". See Examples below. This problem arises because overlap is only available in literal searches (see stri_opts_fixed), but the current function uses regex searching (see stri_opts_regex), which does not catch overlap.
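The workaround mentioned above can be sketched as a profile with a second, contextual entry for "aa" (a hypothetical sketch: the left-context line is intended to catch the match that plain regex searching misses):

```r
# "aa" alone finds only one match in "aaaa"; a contextual line catches the overlap
profile <- cbind(
    Grapheme = c("aa", "aa"),
    Left     = c("",   "a"))
tokenize("aaaa", profile, regex = TRUE)
```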


Author(s)

Michael Cysouw <[email protected]>


References

Moran & Cysouw (forthcoming)

See Also

See also write.profile for preparing a skeleton orthography profile.


Examples

# simple example with interesting warning and error reporting
# the string might look like "AABB" but it isn't...
(string <- "\u0041\u0410\u0042\u0412")

# make an ad-hoc orthography profile
profile <- cbind(
    Grapheme = c("a","ä","n","ng","ch","sch"),
    Trans = c("a","e","n","N","x","sh"))
# tokenization
tokenize(c("nana", "änngschä", "ach"), profile)
# with replacements and a warning
tokenize(c("Naná", "änngschä", "ach"), profile, transliterate = "Trans")

# different results of ordering
tokenize("aaa", c("a","aa"), ordering = NULL)
tokenize("aaa", c("a","aa"), ordering = "size")

# regex matching does not catch overlap, which can lead to wrong results
# the second example results in a warning instead of just parsing "ab bb"
# this should occur only rarely in natural language
tokenize("abbb", profile = c("ab","bb"), ordering = NULL)
tokenize("abbb", profile = c("ab","bb"), ordering = NULL, regex = TRUE)

# different parsing methods can lead to different results
# note that in natural language this is VERY unlikely to happen
tokenize("abc", c("bc","ab","a","c"), ordering = NULL, method = "global")$strings
tokenize("abc", c("bc","ab","a","c"), ordering = NULL, method = "linear")$strings

cysouw/qlcTokenize documentation built on May 12, 2017, 2:27 p.m.