computeTf: Compute term frequencies on a corpus.


View source: R/computeTfIdf.R

Description

Compute term frequencies on a corpus.

Usage

computeTf(channel, tableName, docId, textColumns, parser,
  weighting = "normal", top = NULL, rankFunction = "rank", where = NULL,
  idSep = "-", idNull = "(null)", stopwords = NULL, test = FALSE)

Arguments

channel

connection object as returned by odbcConnect

tableName

Aster table name

docId

vector of one or more column names that together comprise the unique document id. Values are concatenated with idSep; database NULLs are replaced with the idNull string.

textColumns

one or more names of columns with text. Multiple columns are concatenated into a single text field first.

parser

type of parser to use on the text. For example, the nGram(2) parser generates 2-grams (n-grams of length 2), while the token(2) parser generates 2-word combinations of terms within documents.
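
To see the difference concretely, here is a minimal base-R sketch of the two kinds of terms. It is only an illustration of the concept, not the Aster parsers themselves, and it assumes token(2) pairs words regardless of their order in the text.

# Illustrative only: contrast consecutive 2-grams with 2-word combinations.
words <- c("burglary", "of", "residence", "reported")

# nGram(2)-style terms: consecutive word pairs, order preserved
paste(head(words, -1), tail(words, -1))
# "burglary of"  "of residence"  "residence reported"

# token(2)-style terms: all 2-word combinations of words within the document
combn(words, 2, FUN = paste, collapse = " ")
# "burglary of"  "burglary residence"  "burglary reported"  "of residence" ...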

weighting

term frequency formula used to compute the tf value. One of the following: 'raw', 'bool', 'binary', 'log', 'augment', or 'normal' (default).
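
The help text names the weighting schemes but not their formulas. The following sketch spells out the conventional tf weightings these names usually refer to; the exact formulas applied by Aster may differ, and tf_weight is a hypothetical helper, not part of toaster.

# Conventional tf weightings (an assumption; Aster's exact formulas may differ).
tf_weight <- function(counts, weighting = c("normal", "raw", "bool", "binary",
                                            "log", "augment")) {
  weighting <- match.arg(weighting)
  switch(weighting,
         raw     = counts,                            # raw term count
         bool    = ,                                  # 'bool'/'binary': 1 if term present
         binary  = as.numeric(counts > 0),
         log     = 1 + log(counts),                   # logarithmically scaled count
         augment = 0.5 + 0.5 * counts / max(counts),  # augmented frequency
         normal  = counts / sum(counts))              # normalized frequency (default)
}

tf_weight(c(police = 4, report = 2, theft = 1), "normal")
# police 0.571, report 0.286, theft 0.143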

top

specifies a threshold that cuts off terms ranked below the value of top. If greater than 0, only the top-ranking terms are included; otherwise all terms are returned (see also the parameter rankFunction). Terms are always ordered by their term frequency (tf) within each document. Filtered-out terms have a rank arithmetically greater than the threshold top (see Details): the smaller the value of a term's rank, the more important the term.

rankFunction

one of rownumber, rank, denserank, or percentrank. A rank is computed and returned for each term within each document. This parameter determines which SQL window function computes the term's rank value (the default rank corresponds to the SQL RANK() window function). When the threshold top is greater than 0, the ranking function is used to limit the number of terms returned (see Details).
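
By their names, the four options correspond to the SQL window functions ROW_NUMBER(), RANK(), DENSE_RANK(), and PERCENT_RANK(). Their behavior can be emulated in base R on the tf values of a single document; this is only an illustration, not how toaster computes the ranks in the database.

tf <- c(assault = 5, theft = 3, burglary = 3, fraud = 2)   # tf within one document

rank(-tf, ties.method = "first")                 # rownumber: ROW_NUMBER()  -> 1 2 3 4
rank(-tf, ties.method = "min")                   # rank:      RANK()        -> 1 2 2 4
match(tf, sort(unique(tf), decreasing = TRUE))   # denserank: DENSE_RANK()  -> 1 2 2 3
(rank(-tf, ties.method = "min") - 1) / (length(tf) - 1)   # percentrank: PERCENT_RANK() -> 0 1/3 1/3 1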

where

specifies criteria that the table rows must satisfy before the computation is applied. The criteria are expressed as SQL predicates (the contents of a WHERE clause).

idSep

separator when concatenating 2 or more document id columns (see docId).

idNull

string to replace NULL values in document id columns.

stopwords

character vector with stop words. Removing stop words takes place in R after results are computed and returned from Aster.

test

logical: if TRUE, only shows what would be done (similar to the parameter test in the RODBC functions sqlQuery and sqlSave).

Details

By default the function computes and returns all terms. When a large number of terms is expected, use the parameter top to limit the number of terms returned by keeping only the top-ranked terms for each document. Thus, if top=1000 and there are 100 documents, at least 100,000 terms (rows) will be returned (assuming each document has at least 1000 terms). The result size can exceed this number when a rankFunction other than rownumber is used, because rank, denserank, and percentrank assign the same rank to tied tf values (see the sketch below).

The ordering of the rows is always by their tf value within each document.
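
A rough illustration of that size bound, using the numbers from the paragraph above (plain arithmetic, not output of computeTf):

n_docs <- 100
top    <- 1000
n_docs * top    # 100000 rows with "rownumber" when every document has >= top terms

# With "rank" or "denserank", tied tf values share a rank, so more than `top`
# terms per document can pass the rank <= top cutoff:
tf <- c(5, 3, 3, 2)
sum(rank(-tf, ties.method = "first") <= 2)   # 2 terms kept (rownumber)
sum(rank(-tf, ties.method = "min")   <= 2)   # 3 terms kept (rank; the tie survives)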

See Also

computeTfIdf, nGram, token

Examples

if(interactive()){
# initialize connection to Dallas database in Aster 
conn = odbcDriverConnect(connection="driver={Aster ODBC Driver};
                         server=<dbhost>;port=2406;database=<dbname>;uid=<user>;pwd=<pw>")

# compute term-document-matrix of all 2-word Ngrams of Dallas police open crime reports
tdm1 = computeTf(channel=conn, tableName="public.dallaspoliceall", docId="offensestatus",
                 textColumns=c("offensedescription", "offensenarrative"),
                 parser=nGram(2),
                 where="offensestatus NOT IN ('System.Xml.XmlElement', 'C')")

# compute term-document-matrix of all 2-word combinations of Dallas police crime reports
# by time of day (4 documents corresponding to 4 parts of day)
tdm2 = computeTf(channel=conn, tableName="public.dallaspoliceall",
                 docId="(extract('hour' from offensestarttime)/6)::int%4",
                 textColumns=c("offensedescription", "offensenarrative"),
                 parser=token(2, punctuation="[-.,?\\!:;~()]+", stopWords=TRUE),
                 where="offensenarrative IS NOT NULL")

# include only top 100 ranked 2-word ngrams for each offense status
# into resulting term-document-matrix using dense rank function
tdm3 = computeTf(channel=conn, tableName="public.dallaspoliceall", docId="offensestatus",
                 textColumns=c("offensedescription", "offensenarrative"),
                 parser=nGram(2), top=100, rankFunction="denserank",
                 where="offensestatus NOT IN ('System.Xml.XmlElement', 'C')")

}
