LinkExtractor: Extracts links from web pages


View source: R/RNomadsTools.R

Description

Parse a web page, capturing and returning any links found.

Usage

LinkExtractor(url)

Arguments

url

A URL to scan for links.

Details

This is an internal routine used by several functions in the package.
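
The actual implementation lives in R/RNomadsTools.R and is not reproduced here. As a minimal sketch of the general technique, and assuming the XML package is available, href extraction can look like the following (GetPageLinks is a hypothetical name, not part of rNOMADS, and this is not necessarily how LinkExtractor itself is written):

#Minimal sketch of generic href extraction, assuming the XML package;
#not necessarily how LinkExtractor itself is implemented.
library(XML)
GetPageLinks <- function(url) {
    doc <- htmlParse(url)                   #Download and parse the page
    links <- xpathSApply(doc, "//a/@href")  #Collect every href attribute
    free(doc)                               #Release the parsed document
    unname(links)                           #Return a plain character vector
}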

Value

links

A vector of link URLs.

Note

While it might be fun to try LinkExtractor on a large website such as Google, the results will be unpredictable and perhaps disastrous if depth is not set. This is because there is no protection against infinite recursion.
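
As an illustration of the recursion risk, a crawl built on LinkExtractor can be bounded explicitly. In this sketch the depth argument is an ordinary function parameter, not an argument of LinkExtractor (which takes only a url), and relative links would need to be resolved before a real crawl:

#Sketch of a depth-limited crawl built on LinkExtractor;
#'depth' here is illustrative and is not an argument of LinkExtractor.
CrawlLimited <- function(url, depth) {
    if (depth <= 0) {
        return(character(0))    #Stop recursing once the limit is reached
    }
    links <- LinkExtractor(url)
    deeper <- unlist(lapply(links, CrawlLimited, depth = depth - 1))
    unique(c(links, deeper))
}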

Author(s)

Daniel C. Bowman [email protected]

See Also

WebCrawler

Examples

#Find model runs for the 
#GFS 0.5x0.5 model

## Not run: 
urls.out <- LinkExtractor(
"http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p50.pl")

## End(Not run)
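
#The returned character vector can then be filtered with standard
#string tools, for example to keep only links that mention a GFS model
#run date. The regular expression below is illustrative only.

## Not run: 
model.runs <- urls.out[grepl("gfs\\.[0-9]{8}", urls.out)]

## End(Not run)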
