This class can be used to check whether a page may be crawled by looking at the robots.txt file of its site.
It takes the URL of a page and retrieves the robots.txt file of the same site.
The class parses the robots.txt file and looks up the rules defined there to determine whether the site allows crawling the intended page.
The class also records the time at which each page is crawled, so that when another page of the same site is crawled later it can verify that the site's intended crawl-delay and request-rate limits are being honored.
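
The sketch below illustrates the same idea in Python, assuming the standard-library `urllib.robotparser` module; the class name `RobotsChecker` and its methods are hypothetical and not taken from the original package, which may be written in a different language.

```python
import time
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser


class RobotsChecker:
    """Hypothetical sketch: checks robots.txt rules and tracks last-crawl times per site."""

    def __init__(self, user_agent="*"):
        self.user_agent = user_agent
        self._parsers = {}      # site -> cached RobotFileParser
        self._last_crawl = {}   # site -> timestamp of the last allowed crawl

    def _site(self, url):
        parts = urlparse(url)
        return f"{parts.scheme}://{parts.netloc}"

    def _parser(self, site):
        # Fetch and parse robots.txt once per site, then reuse the cached parser
        if site not in self._parsers:
            rp = RobotFileParser(site + "/robots.txt")
            rp.read()
            self._parsers[site] = rp
        return self._parsers[site]

    def can_crawl(self, url):
        """Return True if robots.txt allows the page and the crawl delay has elapsed."""
        site = self._site(url)
        rp = self._parser(site)
        if not rp.can_fetch(self.user_agent, url):
            return False
        # Combine Crawl-delay and Request-rate into a minimum interval between requests
        delay = rp.crawl_delay(self.user_agent) or 0
        rate = rp.request_rate(self.user_agent)
        if rate is not None:
            delay = max(delay, rate.seconds / rate.requests)
        last = self._last_crawl.get(site)
        if last is not None and time.time() - last < delay:
            return False
        self._last_crawl[site] = time.time()
        return True
```

A caller would then gate each fetch on this check, for example:

```python
checker = RobotsChecker(user_agent="MyCrawler")
if checker.can_crawl("https://example.com/some/page.html"):
    pass  # fetch the page here
```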