Request filtering issues in the Scrapy framework

I have recently been tripped up by Scrapy's dont_filter, because my spiders kept stopping partway through when their requests were filtered out. I put this down to not really understanding how Scrapy schedules requests.
Here is the code:

from scrapy.spiders import Spider
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy import Request
from example.items import xxxxItem
import re

class xxxxSpider(Spider):
    name = "example"
    allowed_domains = ["xxxx.com.cn"]
    # only article URLs of this form are followed
    pat = 'http://finance.xxxx.com.cn/.*[0-9]{4}-[0-9]{2}-[0-9]{2}/[a-z]*-[a-z0-9]*.*'

    def start_requests(self):
        yield Request(url="http://finance.xxxx.com.cn/", callback=self.parse)

    def parse(self, response):
        if response.status == 200:
            URLgroup = LinkExtractor(allow=()).extract_links(response)
            for URL in URLgroup:
                key = re.findall(self.pat, URL.url)
                if key:
                    # only crawl URLs with the fixed prefix matched by pat
                    yield Request(url=URL.url, callback=self.parse_content)

    def parse_content(self, response):
        if response.status == 200:
            content = Selector(response)
            text = content.xpath("/html/body//div[@id='artibody']//p/descendant::text()").extract()
            if text:
                item = xxxxItem()
                # join all paragraph fragments into a single string
                item["text"] = ''.join(text)
                yield item
            # re-request this page with the parse callback; without dont_filter=True
            # the scheduler would drop it as a duplicate and the crawl would stop
            yield Request(url=response.url, callback=self.parse, dont_filter=True)

Setting dont_filter=True on the Request in the last line keeps the spider from stopping midway: the request for this page is no longer dropped by the duplicate filter, so the crawl can continue.
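For context, Scrapy's scheduler discards duplicates through its dupefilter (RFPDupeFilter by default), which computes a fingerprint for each request and silently drops any request whose fingerprint it has already seen, unless the request carries dont_filter=True. Before disabling filtering everywhere, it can help to make those drops visible. Below is a minimal sketch of a logging dupefilter; the module path myproject.dupefilters is a placeholder for wherever the class lives in your own project.

import logging
from scrapy.dupefilters import RFPDupeFilter

logger = logging.getLogger(__name__)

class LoggingDupeFilter(RFPDupeFilter):
    """Same behaviour as the default filter, but logs every request it drops."""
    def request_seen(self, request):
        seen = super().request_seen(request)
        if seen:
            # Requests dropped here never reach the downloader or any callback,
            # which is why a crawl can appear to stop silently.
            logger.info("Filtered duplicate request: %s", request.url)
        return seen

Enable it in settings.py with DUPEFILTER_CLASS = "myproject.dupefilters.LoggingDupeFilter", or simply set the built-in DUPEFILTER_DEBUG = True to have the default filter log all duplicate requests it drops.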

    Original author: Nise9s
    Original post: https://www.jianshu.com/p/af2d8236b0da