read_xml | R Documentation

Description

Read HTML or XML.
Usage

read_xml(x, encoding = "", ..., as_html = FALSE, options = "NOBLANKS")
read_html(x, encoding = "", ..., options = c("RECOVER", "NOERROR", "NOBLANKS"))
## S3 method for class 'character'
read_xml(x, encoding = "", ..., as_html = FALSE, options = "NOBLANKS")
## S3 method for class 'raw'
read_xml(
  x,
  encoding = "",
  base_url = "",
  ...,
  as_html = FALSE,
  options = "NOBLANKS"
)
## S3 method for class 'connection'
read_xml(
  x,
  encoding = "",
  n = 64 * 1024,
  verbose = FALSE,
  ...,
  base_url = "",
  as_html = FALSE,
  options = "NOBLANKS"
)
Arguments

x
A string, a connection, or a raw vector. A string can be either a path, a url or literal xml. Urls will be converted into connections either using base::url or, if installed, curl::curl. If a connection, the complete connection is read into a raw vector before being parsed.

encoding
Specify a default encoding for the document. Unless otherwise specified XML documents are assumed to be in UTF-8 or UTF-16. If the document is not UTF-8/16, and lacks an explicit encoding directive, this allows you to supply a default.

...
Additional arguments passed on to methods.

as_html
Optionally parse an xml file as if it's html.

options
Set parsing options for the libxml2 parser. Zero or more of the libxml2 option flags (e.g. "RECOVER", "NOERROR", "NOBLANKS", as used in the defaults above).

base_url
When loading from a connection, raw vector or literal html/xml, this allows you to specify a base url for the document. Base urls are used to turn relative urls into absolute urls.

n
If x is a connection, the number of bytes to read per iteration. Defaults to 64kb.

verbose
When reading from a slow connection, this prints some output on every iteration so you know it's working.
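For illustration, a minimal sketch combining several of these arguments (the bytes, encoding, and url below are made-up placeholders, not from the package documentation):

# Parse a non-UTF-8 raw vector, supplying a default encoding and a base url
# used to resolve relative links ("\xe9" is the latin1 byte for e-acute)
bytes <- charToRaw("<doc><a href='page.html'>caf\xe9</a></doc>")
doc <- read_xml(bytes, encoding = "ISO-8859-1", base_url = "https://example.com/")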
Value

An XML document. HTML is normalised to valid XML - this may not be exactly the same transformation performed by the browser, but it's a reasonable approximation.
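For illustration (a sketch, not from the package documentation), serialising a parsed fragment shows this normalisation:

# Malformed html gains the missing structure when parsed
doc <- read_html("<html><title>Hi")
cat(as.character(doc))  # prints the recovered document, with <title> closed inside <head>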
Setting a user agent

When performing web scraping tasks it is both good practice, and often required, to set the user agent request header to a specific value. Sometimes this value is assigned to emulate a browser in order to have content render in a certain way (e.g. "Mozilla/5.0 (Windows NT 5.1; rv:52.0) Gecko/20100101 Firefox/52.0" to emulate more recent Windows browsers). Most often, this value should be set to provide the web resource owner information on who you are and the intent of your actions, like this Google scraping bot user agent identifier: "Googlebot/2.1 (+http://www.google.com/bot.html)".
You can set the HTTP user agent for URL-based requests using httr::set_config() and httr::user_agent():

httr::set_config(httr::user_agent("me@example.com; +https://example.com/info.html"))

httr::set_config() changes the configuration globally; httr::with_config() can be used to change the configuration temporarily.
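For a single request, a minimal sketch (the URL and contact string are placeholders, and this assumes, as described above, that the httr configuration applies to the request):

# Temporarily set the user agent for one request only
page <- httr::with_config(
  httr::user_agent("me@example.com; +https://example.com/info.html"),
  read_html("https://example.com")
)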
Examples

# Literal xml/html is useful for small examples
read_xml("<foo><bar /></foo>")
read_html("<html><title>Hi<title></html>")
read_html("<html><title>Hi")
# From a local path
read_html(system.file("extdata", "r-project.html", package = "xml2"))
## Not run:
# From a url
cd <- read_xml(xml2_example("cd_catalog.xml"))
me <- read_html("http://had.co.nz")
## End(Not run)
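## Not run:
# A sketch, not from the package examples: read from a slow connection in
# smaller chunks, printing progress; the url is a hypothetical placeholder
con <- url("https://example.com/large.xml")
doc <- read_xml(con, n = 8 * 1024, verbose = TRUE)
## End(Not run)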