gibasa is a plain 'Rcpp' wrapper for 'MeCab', a morphological analyzer for CJK text.
Part-of-speech tagging with morphological analyzers is useful for processing CJK text data. This is because most words in CJK text are not separated by whitespace, so `tokenizers::tokenize_words` may split them into wrong tokens.
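As a quick illustration of the difference (assuming the tokenizers package is installed; the sample phrase is reused from the examples below):

```r
## ICU-based word splitting may segment this Japanese phrase incorrectly,
## while gibasa consults the MeCab dictionary for word boundaries.
tokenizers::tokenize_words("あのイーハトーヴォのすきとおった風")
gibasa::tokenize("あのイーハトーヴォのすきとおった風")
```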
The main goal of the gibasa package is to provide an alternative to `tidytext::unnest_tokens` for CJK text data. For this goal, gibasa provides three main functions: `gibasa::tokenize`, `gibasa::prettify`, and `gibasa::pack`.
- `gibasa::tokenize` takes a TIF-compliant data.frame of a corpus and returns tokens in the format known as 'tidy text data', so that users can replace `tidytext::unnest_tokens` with it for tokenizing CJK text.
- `gibasa::prettify` turns tagged features into columns.
- `gibasa::pack` takes tidy text data and typically returns a space-separated corpus.

You can install the binary package via CRAN or r-universe.
```r
## Install gibasa from the r-universe repository
install.packages("gibasa", repos = c("https://paithiov909.r-universe.dev", "https://cloud.r-project.org"))

## Or build from the source package
Sys.setenv(MECAB_DEFAULT_RC = "/fullpath/to/your/mecabrc") # if necessary
remotes::install_github("paithiov909/gibasa")
```
Using the gibasa package requires the MeCab library and its dictionary to be installed and available.
On Linux or macOS, you can install them with your package manager, or build and install them from source yourself.
On Windows, use the installer built for 64-bit. Note that gibasa requires a UTF-8 dictionary, not a Shift-JIS one.
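To verify which dictionary gibasa picks up, you can inspect it with `gibasa::dictionary_info` (also used below). A quick sanity check, assuming a default dictionary is already configured:

```r
## The charset reported here should be UTF-8, not Shift-JIS.
gibasa::dictionary_info()
```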
As of v0.9.4, gibasa looks at the file specified by the environment variable `MECABRC`, or the file located at `~/.mecabrc`. If the MeCab dictionary is in a location other than the default, create a mecabrc file and specify where the dictionary is located.
For example, to install and use the ipadic from PyPI, run:
```sh
$ python3 -m pip install ipadic
$ python3 -c "import ipadic; print('dicdir=' + ipadic.DICDIR);" > ~/.mecabrc
```
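If you already have a dictionary installed elsewhere, a mecabrc can also be written from R. A minimal sketch, where the dicdir path is a placeholder to replace with your actual dictionary directory:

```r
## Write a throwaway mecabrc and point gibasa at it via `MECABRC`.
## "/fullpath/to/your/dicdir" is a placeholder, not a real path.
writeLines(
  "dicdir=/fullpath/to/your/dicdir",
  con = (rcfile <- tempfile(fileext = ".mecabrc"))
)
Sys.setenv(MECABRC = rcfile)
```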
```r
res <- gibasa::tokenize(
  data.frame(
    doc_id = seq_along(gibasa::ginga[5:8]),
    text = gibasa::ginga[5:8]
  ),
  text,
  doc_id
)
res
```
```r
gibasa::prettify(res)
gibasa::prettify(res, col_select = 1:3)
gibasa::prettify(res, col_select = c(1, 3, 5))
gibasa::prettify(res, col_select = c("POS1", "Original"))
```
```r
res <- gibasa::prettify(res)
gibasa::pack(res)

dplyr::mutate(
  res,
  token = dplyr::if_else(is.na(Original), token, Original),
  token = paste(token, POS1, sep = "/")
) |>
  gibasa::pack() |>
  head(1L)
```
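Because the tokenized result is an ordinary data.frame, the usual dplyr verbs work on it directly. As a small sketch (the tag "名詞" is the IPA-style `POS1` value for nouns):

```r
## Count noun tokens per document; `POS1` is a column added by `gibasa::prettify`.
res |>
  dplyr::filter(POS1 == "名詞") |>
  dplyr::count(doc_id, token, sort = TRUE)
```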
IPA, UniDic, CC-CEDICT-MeCab, and mecab-ko-dic schemes are supported.
```r
## UniDic 2.1.2
gibasa::tokenize(
  "あのイーハトーヴォのすきとおった風",
  sys_dic = file.path("mecab/unidic-lite")
) |>
  gibasa::prettify(into = gibasa::get_dict_features("unidic26"))

## CC-CEDICT
gibasa::tokenize(
  "它可以进行日语和汉语的语态分析",
  sys_dic = file.path("mecab/cc-cedict")
) |>
  gibasa::prettify(into = gibasa::get_dict_features("cc-cedict"))

## mecab-ko-dic
gibasa::tokenize(
  "하네다공항한정토트백",
  sys_dic = file.path("mecab/mecab-ko-dic")
) |>
  gibasa::prettify(into = gibasa::get_dict_features("ko-dic"))
```
```r
## build a new ipadic in a temporary directory
build_sys_dic(
  dic_dir = file.path("mecab/ipadic-eucjp"), # replace here with path to your source dictionary
  out_dir = tempdir(),
  encoding = "euc-jp" # encoding of source csv files
)

## copy the 'dicrc' file
file.copy(file.path("mecab/ipadic-eucjp/dicrc"), tempdir())

dictionary_info(sys_dic = tempdir())
```
```r
## write a csv file and compile it into a user dictionary
writeLines(
  c(
    "月ノ,1290,1290,4579,名詞,固有名詞,人名,姓,*,*,月ノ,ツキノ,ツキノ",
    "美兎,1291,1291,8561,名詞,固有名詞,人名,名,*,*,美兎,ミト,ミト"
  ),
  con = (csv_file <- tempfile(fileext = ".csv"))
)

build_user_dic(
  dic_dir = file.path("mecab/ipadic-eucjp"),
  file = (user_dic <- tempfile(fileext = ".dic")),
  csv_file = csv_file,
  encoding = "utf8"
)

tokenize("月ノ美兎は箱の中", sys_dic = tempdir(), user_dic = user_dic)
```
GPL (>=3).