LinkExtractor: Extracts links from web pages


View source: R/RNomadsTools.R

Description

Parse a web page, capturing and returning any links found.

Usage

LinkExtractor(url)

Arguments

url

A URL to scan for links.

Details

This is an internal routine used by several functions in the package.
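The routine's job can be illustrated with a small base-R sketch. ExtractHrefs below is a hypothetical stand-in for demonstration only, not how LinkExtractor is actually implemented.

```r
# Hypothetical sketch of link extraction using only base R regular
# expressions; the real LinkExtractor parses pages more robustly.
ExtractHrefs <- function(html) {
    # Capture every href="..." attribute in the document
    matches <- regmatches(html, gregexpr('href="[^"]+"', html))[[1]]
    # Strip the surrounding href="..." syntax, leaving bare URLs
    gsub('^href="|"$', "", matches)
}

page <- '<a href="gfs.t00z.pgrb2">run 1</a> <a href="gfs.t06z.pgrb2">run 2</a>'
ExtractHrefs(page)
# Returns c("gfs.t00z.pgrb2", "gfs.t06z.pgrb2")
```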

Value

links

A vector of link URLs.

Note

While it might be fun to try LinkExtractor on a large website such as Google, the results will be unpredictable and perhaps disastrous if a crawl depth is not set (see the depth argument of WebCrawler), because LinkExtractor itself has no protection against infinite recursion.
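To see why a depth limit matters, consider a toy crawler over a fake in-memory link graph containing a cycle. CrawlLinks and the graph are illustrative inventions, not part of rNOMADS; the package's real depth-limited crawler is WebCrawler.

```r
# Fake link graph: page "a" links to "b" and "c", "b" links back to "a",
# so an unbounded crawl of a -> b -> a -> ... would never terminate.
graph <- list(
    a = c("b", "c"),
    b = c("a", "d"),
    c = character(0),
    d = c("a")
)

# Depth-limited recursive crawl (hypothetical helper, for illustration)
CrawlLinks <- function(start, depth) {
    if (depth <= 0) return(start)  # stop descending once the budget is spent
    children <- graph[[start]]
    unique(c(start, unlist(lapply(children, CrawlLinks, depth = depth - 1))))
}

CrawlLinks("a", depth = 1)
# Returns c("a", "b", "c")
```

Dropping the depth check would send the crawl around the a/b cycle forever, which is exactly the failure mode the note warns about.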

Author(s)

Daniel C. Bowman daniel.bowman@unc.edu

See Also

WebCrawler

Examples

# Find model runs for the GFS 0.5x0.5 model

## Not run: 
urls.out <- LinkExtractor(
"http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p50.pl")

## End(Not run)
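The returned vector is plain character data, so standard R tools can narrow it down. The URLs below are invented stand-ins for what LinkExtractor might return; the dir= filtering pattern is likewise only illustrative.

```r
# Hypothetical sample of what urls.out might contain
urls.out <- c(
    "http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p50.pl?dir=%2Fgfs.20170519%2F00",
    "http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p50.pl?dir=%2Fgfs.20170519%2F06",
    "http://nomads.ncep.noaa.gov/index.html"
)

# Keep only links that name a model run directory
model.runs <- grep("dir=", urls.out, value = TRUE, fixed = TRUE)
length(model.runs)
# Returns 2
```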

rNOMADS documentation built on May 19, 2017, 12:38 p.m.
