Tesseract OCR

Description

Extract text from an image. Requires training data for the language you are reading. Works best for images with high contrast, little noise, and horizontal text.

Usage

ocr(image, engine = tesseract())

tesseract(language = NULL, datapath = NULL, cache = TRUE)

Arguments

image

file path, URL, or raw vector with image data (png, tiff, jpeg, etc.)

engine

a tesseract engine created with tesseract()

language

string with the language of the training data. Usually defaults to "eng"

datapath

path with the training data for this language. Default uses the system library.

cache

use a cached version of this training data if available
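
A minimal sketch combining these arguments; it assumes English training data is installed on the system and reuses the test image from the Examples below:

library(tesseract)

# Create an engine explicitly; language usually defaults to "eng"
eng <- tesseract(language = "eng")

# Pass the engine to ocr() together with a file path, URL, or raw vector
text <- ocr("http://jeroenooms.github.io/images/testocr.png", engine = eng)
cat(text)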

Details

Tesseract uses training data to perform OCR. Most systems default to English training data. To improve OCR performance for other languages you may need to install the training data for that language from your distribution, which typically ships a separate package per language (for example, a Spanish training data package).

On other platforms you can manually download training data from GitHub and store it in a folder on disk, which you then pass via the datapath parameter. Alternatively, you can set a default path via the TESSDATA_PREFIX environment variable.
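
A hedged sketch of that workflow: the "spa" language code, the ~/tessdata folder, and document.png are placeholders for illustration, and the corresponding .traineddata file must already be present in that folder:

library(tesseract)

# Assumption: spa.traineddata has already been downloaded into ~/tessdata
spanish <- tesseract(language = "spa", datapath = "~/tessdata")
text <- ocr("document.png", engine = spanish)
cat(text)

# Alternatively, set a default path for training data (see Details)
Sys.setenv(TESSDATA_PREFIX = "~/tessdata")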

References

Tesseract training data

Examples

# Simple example
text <- ocr("http://jeroenooms.github.io/images/testocr.png")
cat(text)

# Roundtrip test: render PDF to image and OCR it back to text
library(pdftools)
library(tiff)

# A PDF file with some text
setwd(tempdir())
news <- file.path(Sys.getenv("R_DOC_DIR"), "NEWS.pdf")
orig <- pdf_text(news)[1]

# Render pdf to a tiff image
bitmap <- pdf_render_page(news, dpi = 300)
tiff::writeTIFF(bitmap, "page.tiff")

# Extract text from the image
out <- ocr("page.tiff")
cat(out)
