Extracts links from web pages

Description

Parse a web page, capturing and returning any links found.

Usage

LinkExtractor(url)

Arguments

url

A URL to scan for links.

Details

This is an internal routine used by several functions in the package.
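
Although internal, its behavior can be approximated with the XML package. The sketch below is an assumption about what such a routine might look like, not the package's actual implementation; ExtractLinksSketch is a hypothetical name.

library(XML)

# Parse the page and return the href attribute of every anchor tag.
# A rough stand-in for an internal link extractor; not rNOMADS code.
ExtractLinksSketch <- function(url) {
    getHTMLLinks(url)
}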

Value

links

A vector of link URLs.

Note

While it might be fun to try LinkExtractor on a large website such as Google, the results will be unpredictable and perhaps disastrous if depth (see WebCrawler) is not set, because there is no protection against infinite recursion.
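
A common safeguard is depth-limited recursion; the sketch below illustrates the idea under that assumption. CrawlSketch and max.depth are hypothetical names, not the package's WebCrawler implementation.

# Stop recursing once the depth limit is reached; otherwise collect
# this page's links and descend into each of them.
CrawlSketch <- function(url, max.depth, depth = 0) {
    if (depth >= max.depth) {
        return(url)
    }
    links <- XML::getHTMLLinks(url)
    # A real crawler would also resolve relative links and skip
    # pages it has already visited.
    c(url, unlist(lapply(links, CrawlSketch,
                         max.depth = max.depth, depth = depth + 1)))
}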

Author(s)

Daniel C. Bowman <daniel.bowman@unc.edu>

See Also

WebCrawler

Examples

# Find the first 10 model runs for the GFS 0.5x0.5 model

## Not run: 
urls.out <- LinkExtractor(
"http://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p50.pl")

## End(Not run)
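
The result can then be inspected like any other vector (assuming the NOMADS server was reachable when the example above ran):

## Not run: 
head(urls.out)    # first few links captured from the page
length(urls.out)  # total number of links found

## End(Not run)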