Crawler Notes (12): Scrapy Source Code Analysis

Although I have a rough understanding of how a crawler works, Scrapy is, after all, a framework, and to use it well you really need to get its underlying structure straight.

1. Deduplication

# scrapy/dupefilters.py
from __future__ import print_function
import os
import logging

from scrapy.utils.job import job_dir
from scrapy.utils.request import request_fingerprint


class BaseDupeFilter(object):

    @classmethod
    def from_settings(cls, settings):
        return cls()

    def request_seen(self, request):
        return False

    def open(self):  # can return deferred
        pass

    def close(self, reason):  # can return a deferred
        pass

    def log(self, request, spider):  # log that a request has been filtered
        pass


class RFPDupeFilter(BaseDupeFilter):
    """Request Fingerprint duplicates filter"""

    def __init__(self, path=None, debug=False):
        self.file = None
        self.fingerprints = set()  # a set records the fingerprints of requests already seen
        self.logdupes = True
        self.debug = debug
        self.logger = logging.getLogger(__name__)
        if path:
            self.file = open(os.path.join(path, 'requests.seen'), 'a+')
            self.file.seek(0)
            self.fingerprints.update(x.rstrip() for x in self.file)

    @classmethod
    def from_settings(cls, settings):
        debug = settings.getbool('DUPEFILTER_DEBUG')
        return cls(job_dir(settings), debug)

    def request_seen(self, request):
        fp = self.request_fingerprint(request)
        if fp in self.fingerprints:
            return True
        # First time this fingerprint appears: record it in memory and,
        # when JOBDIR is set, append it to requests.seen for resume support
        self.fingerprints.add(fp)
        if self.file:
            self.file.write(fp + os.linesep)

    def request_fingerprint(self, request):
        return request_fingerprint(request)

    def close(self, reason):
        if self.file:
            self.file.close()

    def log(self, request, spider):
        if self.debug:
            msg = "Filtered duplicate request: %(request)s"
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
        elif self.logdupes:
            msg = ("Filtered duplicate request: %(request)s"
                   " - no more duplicates will be shown"
                   " (see DUPEFILTER_DEBUG to show all duplicates)")
            self.logger.debug(msg, {'request': request}, extra={'spider': spider})
            self.logdupes = False

        spider.crawler.stats.inc_value('dupefilter/filtered', spider=spider)

The code above is the dedup filter that ships with Scrapy. It uses a set to record requests already seen, the same approach as the standalone code I wrote before. So where does the framework actually call this dedup code?

# scrapy/core/scheduler.py
def enqueue_request(self, request):
    # Ask the dupe filter first; requests with dont_filter=True bypass it
    if not request.dont_filter and self.df.request_seen(request):
        self.df.log(request, self.spider)
        return False
    # Prefer the disk queue (available when JOBDIR is set), else use memory
    dqok = self._dqpush(request)
    if dqok:
        self.stats.inc_value('scheduler/enqueued/disk', spider=self.spider)
    else:
        self._mqpush(request)
        self.stats.inc_value('scheduler/enqueued/memory', spider=self.spider)
    self.stats.inc_value('scheduler/enqueued', spider=self.spider)
    return True

In the code above, the call self.df.request_seen is where the scheduler invokes the dedup class. Going back to that method, you can see its approach differs from mine: request_fingerprint computes a fingerprint from the request's method, canonicalized URL, and body, not just the URL string. So for the same link, if parameters are passed via cookies or via a POST body, my old URL-only dedup was wrong (note that headers such as Cookie only enter the fingerprint when include_headers is passed).
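
A quick sketch of this (the URL is just a placeholder): two requests to the same address get different fingerprints once the method or body differs, and headers can be opted in:

from scrapy import Request
from scrapy.utils.request import request_fingerprint

get_req = Request('http://example.com/api')
post_req = Request('http://example.com/api', method='POST', body='key=value')

# Same URL, different fingerprints: the hash covers method and body too
print(request_fingerprint(get_req))
print(request_fingerprint(post_req))

# Headers are ignored by default; opt in to make e.g. Cookie part of the hash
cookie_req = Request('http://example.com/api', headers={'Cookie': 'sid=1'})
print(request_fingerprint(cookie_req, include_headers=['Cookie']))
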
This dedup class has another feature: it can resume after an interruption, which requires setting JOBDIR in settings (that is the path the requests.seen file in __init__ is written to).
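
For example (the spider and directory names here are placeholders), either in settings.py or per run on the command line:

# settings.py
JOBDIR = 'crawls/job-1'  # fingerprints persist to crawls/job-1/requests.seen

# or for a single run:
# scrapy crawl myspider -s JOBDIR=crawls/job-1
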
You can also configure a custom dedup class in settings via DUPEFILTER_CLASS.
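
As a minimal sketch (the module path and class name are mine, not from Scrapy), a custom filter can subclass RFPDupeFilter and override only the fingerprinting step, here by lowercasing URLs before hashing:

# myproject/dupefilters.py  (hypothetical module)
from scrapy.dupefilters import RFPDupeFilter


class LowercaseURLDupeFilter(RFPDupeFilter):
    """Treats URLs that differ only in letter case as duplicates."""

    def request_fingerprint(self, request):
        # Normalize the URL, then reuse the stock fingerprinting logic
        return super(LowercaseURLDupeFilter, self).request_fingerprint(
            request.replace(url=request.url.lower()))

# settings.py
DUPEFILTER_CLASS = 'myproject.dupefilters.LowercaseURLDupeFilter'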

2. Scheduling

    Original author: 无事扯淡
    Original source: https://www.jianshu.com/p/874b5a6147ff