This module provides a single class, RobotFileParser, which answers questions about whether or not a particular user agent can fetch a URL on the Web site that published the robots.txt file. For more details on the structure of robots.txt files, see http://www.robotstxt.org/orig.html.
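For example, a hypothetical robots.txt file in that format could look like this (the paths are purely illustrative):

User-agent: *
Disallow: /cgi-bin/
Disallow: /private/

These rules tell every user agent ("*") not to fetch URLs whose paths begin with /cgi-bin/ or /private/.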
class robotparser.RobotFileParser(url='')

This class provides methods to read, parse and answer questions about the robots.txt file at url.
set_url(url)

Sets the URL referring to a robots.txt file.

read()

Reads the robots.txt URL and feeds it to the parser.
parse(lines)

Parses the lines argument.
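For instance, a crawler that already has the contents of a robots.txt file can pass its lines directly to parse() instead of calling read(); the rules and the www.example.com URLs below are assumptions made for illustration:

>>> import robotparser
>>> lines = ["User-agent: *", "Disallow: /private/"]
>>> rp = robotparser.RobotFileParser()
>>> rp.parse(lines)
>>> rp.can_fetch("*", "http://www.example.com/private/page.html")
False
>>> rp.can_fetch("*", "http://www.example.com/index.html")
True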
can_fetch(useragent, url)

Returns True if the useragent is allowed to fetch the url according to the rules contained in the parsed robots.txt file.
mtime()

Returns the time the robots.txt file was last fetched. This is useful for long-running web spiders that need to check for new robots.txt files periodically.
modified()

Sets the time the robots.txt file was last fetched to the current time.
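As a sketch of how a long-running spider might combine these two methods (the one-hour refresh interval, the www.example.com URL and the "MySpider" agent name are assumptions made for illustration, not part of the module):

import time
import robotparser

rp = robotparser.RobotFileParser("http://www.example.com/robots.txt")
rp.read()        # fetch and parse the rules
rp.modified()    # record when they were fetched

def allowed(url, max_age=3600):
    # Re-fetch robots.txt when the cached copy is older than max_age seconds.
    if time.time() - rp.mtime() > max_age:
        rp.read()
        rp.modified()
    return rp.can_fetch("MySpider", url)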
The following example demonstrates basic use of the RobotFileParser class.
>>> import robotparser
>>> rp = robotparser.RobotFileParser()
>>> rp.set_url("http://www.musi-cal.com/robots.txt")
>>> rp.read()
>>> rp.can_fetch("*", "http://www.musi-cal.com/cgi-bin/search?city=San+Francisco")
False
>>> rp.can_fetch("*", "http://www.musi-cal.com/")
True