read_fwf: Read a fixed width file into a tibble


View source: R/read_fwf.R

Description

A fixed width file can be a very compact representation of numeric data. It's also very fast to parse, because every field is in the same place in every line. Unfortunately, it's painful to parse because you need to describe the length of every field. Readr aims to make it as easy as possible by providing a number of different ways to describe the field structure.

Usage

read_fwf(
  file,
  col_positions = fwf_empty(file, skip, n = guess_max),
  col_types = NULL,
  col_select = NULL,
  id = NULL,
  locale = default_locale(),
  na = c("", "NA"),
  comment = "",
  trim_ws = TRUE,
  skip = 0,
  n_max = Inf,
  guess_max = min(n_max, 1000),
  progress = show_progress(),
  name_repair = "unique",
  num_threads = readr_threads(),
  show_col_types = should_show_types(),
  lazy = TRUE,
  skip_empty_rows = TRUE
)

fwf_empty(
  file,
  skip = 0,
  skip_empty_rows = FALSE,
  col_names = NULL,
  comment = "",
  n = 100L
)

fwf_widths(widths, col_names = NULL)

fwf_positions(start, end = NULL, col_names = NULL)

fwf_cols(...)

Arguments

file

Either a path to a file, a connection, or literal data (either a single string or a raw vector).

Files ending in .gz, .bz2, .xz, or .zip will be automatically uncompressed. Files starting with http://, https://, ftp://, or ftps:// will be automatically downloaded. Remote gz files can also be automatically downloaded and decompressed.

Literal data is most useful for examples and tests. To be recognised as literal data, the input must be wrapped with I(), be a string containing at least one new line, or be a vector containing at least one string with a new line.

Using a value of clipboard() will read from the system clipboard.
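
For example, a minimal sketch of reading literal data (an invented two-record string rather than a file on disk) by wrapping it in I():

# I() marks the string as literal data rather than a path
read_fwf(I("John  WA\nMary  CA\n"), fwf_widths(c(6, 2), c("name", "state")))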

col_positions

Column positions, as created by fwf_empty(), fwf_widths() or fwf_positions(). To read in only selected fields, use fwf_positions(). If the width of the last column is variable (a ragged fwf file), supply the last end position as NA.
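
A sketch of the ragged case using the bundled fwf-sample.txt file: the last end position is left as NA, so the final field runs to the end of each line.

fwf_sample <- readr_example("fwf-sample.txt")
# second field starts at position 30 and runs to the end of the line
read_fwf(fwf_sample, fwf_positions(c(1, 30), c(20, NA), c("name", "ssn")))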

col_types

One of NULL, a cols() specification, or a string. See vignette("readr") for more details.

If NULL, all column types will be imputed from the first 1000 rows of the input. This is convenient (and fast), but not robust. If the imputation fails, you'll need to increase guess_max or supply the correct types yourself.

Column specifications created by list() or cols() must contain one column specification for each column. If you only want to read a subset of the columns, use cols_only().

Alternatively, you can use a compact string representation where each character represents one column:

  • c = character

  • i = integer

  • n = number

  • d = double

  • l = logical

  • f = factor

  • D = date

  • T = date time

  • t = time

  • ? = guess

  • _ or - = skip

    By default, reading a file without a column specification will print a message showing what readr guessed they were. To remove this message, set show_col_types = FALSE or set options(readr.show_col_types = FALSE).
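
As a sketch, here are the compact string form and the equivalent cols() specification for the bundled fwf-sample.txt file (three character columns):

fwf_sample <- readr_example("fwf-sample.txt")
spec <- fwf_widths(c(20, 10, 12), c("name", "state", "ssn"))
# compact string: one character per column
read_fwf(fwf_sample, spec, col_types = "ccc")
# full specification, equivalent to the string above
read_fwf(fwf_sample, spec, col_types = cols(
  name  = col_character(),
  state = col_character(),
  ssn   = col_character()
))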

col_select

<tidy-select> Columns to include in the results, either by name or by numeric index. Use c() or list() to select with more than one expression, and see ?tidyselect::language for full details on the selection language.
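
For example, keeping only two of the three columns of the bundled sample file by name (numeric indices work too):

fwf_sample <- readr_example("fwf-sample.txt")
read_fwf(fwf_sample, fwf_widths(c(20, 10, 12), c("name", "state", "ssn")),
  col_select = c(name, ssn))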

id

The name of a column in which to store the file path. This is useful when reading multiple input files and there is data in the file paths, such as the data collection date. If NULL (the default) no extra column is created.
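
A sketch that records the source path in an extra column; the column name "source_file" is only an illustration:

fwf_sample <- readr_example("fwf-sample.txt")
# adds a source_file column containing the path of the file each row came from
read_fwf(fwf_sample, fwf_widths(c(20, 10, 12), c("name", "state", "ssn")),
  id = "source_file")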

locale

The locale controls defaults that vary from place to place. The default locale is US-centric (like R), but you can use locale() to create your own locale that controls things like the default time zone, encoding, decimal mark, big mark, and day/month names.
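
For instance, a sketch of reading invented literal data that uses a comma as the decimal mark:

# "1,5" and friends parse as doubles once the decimal mark is set to ","
read_fwf(I("1,5 2,7\n3,1 4,9\n"),
  fwf_widths(c(4, 3), c("x", "y")),
  col_types = "dd",
  locale = locale(decimal_mark = ","))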

na

Character vector of strings to interpret as missing values. Set this option to character() to indicate no missing values.
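
For example, treating "." as an additional missing-value marker (a made-up sentinel) alongside empty fields:

# fields containing only "." become NA
read_fwf(I("1.5 .  \n.   2.0\n"),
  fwf_widths(c(4, 3), c("x", "y")),
  na = c("", "."))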

comment

A string used to identify comments. Any text after the comment characters will be silently ignored.
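
A sketch with an invented comment line at the start of the data (see the second edition note below: comments are only recognised at the start of a line):

# the first line starts with "#" and is ignored
read_fwf(I("# a header comment\nJohn  WA\nMary  CA\n"),
  fwf_widths(c(6, 2), c("name", "state")),
  comment = "#")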

trim_ws

Should leading and trailing whitespace (ASCII spaces and tabs) be trimmed from each field before parsing it?

skip

Number of lines to skip before reading data.

n_max

Maximum number of lines to read.

guess_max

Maximum number of lines to use for guessing column types.

progress

Display a progress bar? By default it will only display in an interactive session and not while knitting a document. The automatic progress bar can be disabled by setting option readr.show_progress to FALSE.

name_repair

Treatment of problematic column names:

  • "minimal": No name repair or checks, beyond basic existence of names

  • "unique" (default value): Make sure names are unique and not empty

  • "check_unique": No name repair, but check that names are unique

  • "universal": Make the names unique and syntactic

  • a function: apply custom name repair (e.g., name_repair = make.names for names in the style of base R)

  • A purrr-style anonymous function, see rlang::as_function()

This argument is passed on as repair to vctrs::vec_as_names(). See there for more details on these terms and the strategies used to enforce them.
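
A small sketch with invented, duplicated, non-syntactic column names repaired by the "universal" strategy:

# "x 1" appears twice; "universal" makes the names unique and syntactic
read_fwf(I("ab cd\nef gh\n"),
  fwf_widths(c(3, 2), c("x 1", "x 1")),
  name_repair = "universal")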

num_threads

The number of processing threads to use for initial parsing and lazy reading of data. If your data contains newlines within fields, the parser should automatically detect this and fall back to using one thread only. However, if you know your file has newlines within quoted fields, it is safest to set num_threads = 1 explicitly.

show_col_types

If FALSE, do not show the guessed column types. If TRUE always show the column types, even if they are supplied. If NULL (the default) only show the column types if they are not explicitly supplied by the col_types argument.

lazy

Read values lazily? By default, the file is initially only indexed and the values are read lazily when accessed. Lazy reading is useful interactively, particularly if you are only interested in a subset of the full dataset. Note that lazy reading on Windows will lock the file until all the data has been read from it; if you run into this issue, set lazy = FALSE.

skip_empty_rows

Should blank rows be ignored altogether? If TRUE, blank rows will not be represented at all. If FALSE, they will be represented by NA values in all the columns.

col_names

Either NULL, or a character vector of column names.

n

Number of lines the tokenizer will read to determine file structure. By default it is set to 100.

widths

Width of each field. Use NA as the width of the last field when reading a ragged fwf file.
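
A sketch of a ragged file read with fwf_widths(): NA as the final width lets the last column run to the end of each line (the data is invented):

read_fwf(I("John  WA remainder of the line\nMary  CA short\n"),
  fwf_widths(c(6, 3, NA), c("name", "state", "rest")))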

start, end

Starting and ending (inclusive) positions of each field. Use NA as the last end position when reading a ragged fwf file.

...

If the first element is a data frame, then it must have all numeric columns and either one or two rows. The column names are the variable names. The column values are the variable widths if there is one row, and the variable start and end positions if there are two. Otherwise, the elements of ... are used to construct a data frame with one or two rows as above.

Second edition changes

Comments are no longer looked for anywhere in the file. They are now only ignored at the start of a line.

See Also

read_table() to read fixed width files where each column is separated by whitespace.

Examples

fwf_sample <- readr_example("fwf-sample.txt")
writeLines(read_lines(fwf_sample))

# You can specify column positions in several ways:
# 1. Guess based on position of empty columns
read_fwf(fwf_sample, fwf_empty(fwf_sample, col_names = c("first", "last", "state", "ssn")))
# 2. A vector of field widths
read_fwf(fwf_sample, fwf_widths(c(20, 10, 12), c("name", "state", "ssn")))
# 3. Paired vectors of start and end positions
read_fwf(fwf_sample, fwf_positions(c(1, 30), c(20, 42), c("name", "ssn")))
# 4. Named arguments with start and end positions
read_fwf(fwf_sample, fwf_cols(name = c(1, 20), ssn = c(30, 42)))
# 5. Named arguments with column widths
read_fwf(fwf_sample, fwf_cols(name = 20, state = 10, ssn = 12))

Example output

John Smith          WA        418-Y11-4111
Mary Hartford       CA        319-Z19-4341
Evan Nolan          IL        219-532-c301
Parsed with column specification:
cols(
  first = col_character(),
  last = col_character(),
  state = col_character(),
  ssn = col_character()
)
# A tibble: 3 x 4
  first last     state ssn         
  <chr> <chr>    <chr> <chr>       
1 John  Smith    WA    418-Y11-4111
2 Mary  Hartford CA    319-Z19-4341
3 Evan  Nolan    IL    219-532-c301
Parsed with column specification:
cols(
  name = col_character(),
  state = col_character(),
  ssn = col_character()
)
# A tibble: 3 x 3
  name          state ssn         
  <chr>         <chr> <chr>       
1 John Smith    WA    418-Y11-4111
2 Mary Hartford CA    319-Z19-4341
3 Evan Nolan    IL    219-532-c301
Parsed with column specification:
cols(
  name = col_character(),
  ssn = col_character()
)
# A tibble: 3 x 2
  name          ssn         
  <chr>         <chr>       
1 John Smith    418-Y11-4111
2 Mary Hartford 319-Z19-4341
3 Evan Nolan    219-532-c301
Parsed with column specification:
cols(
  name = col_character(),
  ssn = col_character()
)
# A tibble: 3 x 2
  name          ssn         
  <chr>         <chr>       
1 John Smith    418-Y11-4111
2 Mary Hartford 319-Z19-4341
3 Evan Nolan    219-532-c301
Parsed with column specification:
cols(
  name = col_character(),
  state = col_character(),
  ssn = col_character()
)
# A tibble: 3 x 3
  name          state ssn         
  <chr>         <chr> <chr>       
1 John Smith    WA    418-Y11-4111
2 Mary Hartford CA    319-Z19-4341
3 Evan Nolan    IL    219-532-c301
