Scrapy


Distributed web scraping (crawling)
This Scrapy project uses Redis and Kafka to create a distributed, on-demand scraping cluster.

The goal is to distribute seed URLs among many waiting spider instances, whose requests are coordinated via Redis. Any further crawls those seeds trigger, through frontier expansion or depth traversal, are likewise distributed among all workers in the cluster.
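
As a concrete sketch of that flow, assuming redis-py and an illustrative queue key and request schema (neither taken from this project), a seed could be handed to the waiting spiders like this:

    # Hand a seed URL to the cluster by pushing a serialized crawl request
    # onto a shared Redis list. Key name and fields are assumptions.
    import json

    import redis

    r = redis.Redis(host="localhost", port=6379)

    seed_request = {
        "url": "http://example.com",  # seed URL to crawl
        "crawlid": "abc123",          # groups all requests spawned by this seed
        "spiderid": "link",           # which spider class should handle it
        "maxdepth": 2,                # limit for depth traversal
    }

    # Any idle spider instance can pop this request and execute it; requests
    # it generates in turn flow back through the same Redis coordination.
    r.rpush("demo_spider:queue", json.dumps(seed_request))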

The input to the system is a set of Kafka topics and the output is a set of Kafka topics. Raw HTML and assets are crawled interactively, spidered, and output to the log. For easy local development, you can also disable the Kafka portions and work with the spider entirely via Redis, although this is not recommended due to the serialization of the crawl requests.
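
On the Kafka side, a minimal sketch using kafka-python, with illustrative topic names ("demo.incoming", "demo.crawled") and message fields that are assumptions rather than this project's actual schema:

    # Submit a crawl request to the incoming topic, then read crawled pages
    # back from the outgoing topic. Topic names and fields are assumptions.
    import json

    from kafka import KafkaConsumer, KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("demo.incoming", {"url": "http://example.com", "crawlid": "abc123"})
    producer.flush()

    # Each crawled page arrives as one message on the output topic.
    consumer = KafkaConsumer(
        "demo.crawled",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )
    for message in consumer:
        print(message.value["url"])  # the raw HTML would travel in a body field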


Selenium with Scrapy for dynamic pages:
http://stackoverflow.com/questions/17975471/selenium-with-scrapy-for-dynamic-page
https://github.com/voliveirajr/seleniumcrawler
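
The links above discuss rendering dynamic (JavaScript-heavy) pages with Selenium from inside Scrapy. A minimal downloader-middleware sketch of that idea, assuming a local Chrome driver; this is an illustration, not code from either link:

    # Render each request in a real browser so JavaScript-generated content
    # is present in the response Scrapy parses. Illustrative only.
    from scrapy.http import HtmlResponse
    from selenium import webdriver


    class SeleniumMiddleware:
        def __init__(self):
            self.driver = webdriver.Chrome()

        def process_request(self, request, spider):
            # Returning a Response here short-circuits Scrapy's own download.
            self.driver.get(request.url)
            return HtmlResponse(
                url=self.driver.current_url,
                body=self.driver.page_source,
                encoding="utf-8",
                request=request,
            )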


Frontera, Scrapinghub's crawl-frontier framework, addresses the same distributed-frontier problem:
https://github.com/scrapinghub/frontera