Iterative crawling reports "Filtered offsite request"

An error occurred while crawling iteratively with the Scrapy framework.

Scrapy log:

Set the logging level in the settings.py file:

LOG_LEVEL = 'DEBUG'
LOG_FILE = 'log.txt'

Then inspect the Scrapy log:

2017-08-15 21:58:05 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'sou.zhaopin.com': <GET http://sou.zhaopin.com/jobs/searchresult.ashx?jl=%E4%B8%8A%E6%B5%B7&kw=python&sm=0&source=0&p=2>
2017-08-15 21:58:05 [scrapy.core.engine] INFO: Closing spider (finished)
2017-08-15 21:58:05 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 782,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 58273,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/302': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 8, 15, 13, 58, 5, 915565),
 'item_scraped_count': 59,
 'log_count/DEBUG': 64,
 'log_count/INFO': 7,
 'memusage/max': 52699136,
 'memusage/startup': 52699136,
 'offsite/domains': 1,
 'offsite/filtered': 1,
 'request_depth_max': 1,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2017, 8, 15, 13, 58, 5, 98357)}
2017-08-15 21:58:05 [scrapy.core.engine] INFO: Spider closed (finished)

The important part is the first line. When I started I didn't realize this was actually an error: it is only recorded as a DEBUG log message, so the program itself never reported a failure.

2017-08-15 21:58:05 [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'sou.zhaopin.com': <GET http://sou.zhaopin.com/jobs/searchresult.ashx?jl=%E4%B8%8A%E6%B5%B7&kw=python&sm=0&source=0&p=2>

DEBUG: Filtered offsite request to
The URL requested by the Request conflicts with the domain defined in allowed_domains, so Scrapy filters the Request's URL out and it is never fetched.

    import scrapy

    class ZhilianSpider(scrapy.Spider):  # class name assumed; the attributes below are from the original
        name = 'zhilianspider'
        allowed_domains = ['http://sou.zhaopin.com']  # a full URL, not a bare hostname

        page = 1
        url = 'http://sou.zhaopin.com/jobs/searchresult.ashx?jl=%E4%B8%8A%E6%B5%B7&kw=python&sm=0&source=0&p='
        start_urls = [url + str(page)]
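
For reference, the more common fix is to put a bare hostname in allowed_domains: the OffsiteMiddleware matches each request's hostname against these entries, so an entry that carries a scheme, like 'http://sou.zhaopin.com' above, can never match. A one-line sketch:

    allowed_domains = ['sou.zhaopin.com']  # hostname only, no scheme

This post keeps the entry as-is and bypasses the filter instead.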

Setting dont_filter=True in the Request's arguments means the requested URL will no longer be filtered against allowed_domains.

    if self.page <= 10:
        self.page += 1
        yield scrapy.Request(self.url + str(self.page), callback=self.parse, dont_filter=True)

Since the allowed_domains filtering is turned off, the yield has to be written inside the if condition. At first I wrote it outside, and the program kept iterating and wouldn't stop. Embarrassing.
I had always written it at the same level as the if before; back then the filtering hadn't been turned off, so it wasn't a problem.
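
Putting the pieces together, a minimal sketch of the parse callback of the spider above (the item-extraction part is elided; the 10-page cap mirrors the snippet earlier):

    def parse(self, response):
        # ... extract items from the current results page and yield them ...

        # The pagination request must stay inside the guard: with
        # dont_filter=True, nothing else will ever stop the loop.
        if self.page <= 10:
            self.page += 1
            yield scrapy.Request(self.url + str(self.page),
                                 callback=self.parse,
                                 dont_filter=True)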

    Original author: PythonMaO
    Original post: https://www.jianshu.com/p/c31e53fd45f6