[Scrapy] Item Pipeline

Item Pipeline

Official documentation

After an item has been scraped by a spider, it is sent to the Item Pipeline which processes it through several components that are executed sequentially.

Each item pipeline component (sometimes referred to as just an “Item Pipeline”) is a Python class that implements a simple method. They receive an item and perform an action over it, also deciding if the item should continue through the pipeline or be dropped and no longer processed.

Typical uses of item pipelines are:

  • cleansing HTML data
  • validating scraped data (checking that the items contain certain fields)
  • checking for duplicates (and dropping them)
  • storing the scraped item in a database
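As a concrete illustration of the duplicate-checking use, here is a minimal sketch of a dedup pipeline keyed on `title`. The `DropItem` stub is defined locally only so the snippet runs without Scrapy installed; in a real project you would `from scrapy.exceptions import DropItem` instead, and the class name `DuplicatesPipeline` is an assumption:

```python
class DropItem(Exception):
    """Stand-in for scrapy.exceptions.DropItem so this sketch runs without Scrapy."""


class DuplicatesPipeline:
    """Drops any item whose 'title' has already been seen."""

    def __init__(self):
        self.titles_seen = set()

    def process_item(self, item, spider):
        if item['title'] in self.titles_seen:
            # Raising DropItem stops this item from reaching later pipelines.
            raise DropItem('Duplicate item found: %r' % item)
        self.titles_seen.add(item['title'])
        return item
```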

Implementing an item pipeline

Each item pipeline component is a Python class. Your own item pipeline must implement the following method:

process_item(self, item, spider)

Parameters:

  • item (Item object or dict) – the scraped item

  • spider (Spider object) – the spider that scraped the item

    Description: this method is called for every item pipeline component. It must either return a dict, return an Item (or a subclass of it), or raise a DropItem exception. Dropped items are no longer processed by further pipeline components.

The following methods may also be implemented:

open_spider(self, spider)

Parameters:

  • spider (Spider object) – the spider that was opened

    Description: this method is called when the spider is opened.

close_spider(self, spider)

Parameters:

  • spider (Spider object) – the spider that was closed

    Description: this method is called when the spider is closed.
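A common pairing of these two hooks is managing a resource whose lifetime matches the spider's, such as an output file. The sketch below (the filename `items.jl` is an assumption) opens the file in open_spider and closes it in close_spider, writing one JSON object per line in between:

```python
import json


class JsonWriterPipeline:
    """Sketch: tie the output file's lifetime to the spider's."""

    def open_spider(self, spider):
        # Opened once, when the spider starts.
        self.file = open('items.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # One JSON object per line (JSON Lines format).
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # Closed once, when the spider finishes.
        self.file.close()
```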

from_crawler(cls, crawler)

Parameters:

  • crawler (Crawler object) – the crawler that uses this pipeline

    Description: if present, this classmethod is called to create a pipeline instance from a Crawler. It must return a new instance of the pipeline; the Crawler object provides access to core Scrapy components such as the settings and signals.
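A typical use of from_crawler is reading configuration out of the project settings and passing it to the constructor. The sketch below assumes a hypothetical `MONGO_URI` setting and class name; only the `crawler.settings` access pattern is the point:

```python
class MongoPipeline:
    """Sketch: build the pipeline from crawler settings ('MONGO_URI' is a hypothetical key)."""

    def __init__(self, mongo_uri):
        self.mongo_uri = mongo_uri

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings exposes the values defined in the project's settings.py.
        return cls(mongo_uri=crawler.settings.get('MONGO_URI'))
```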

Item pipeline example

For the rest of the project code, see 爬取干货集中营数据(3).

The example item pipeline below checks whether an item contains a 'title' field and drops any item that does not:

import json

from scrapy.exceptions import DropItem


class GankPipeline(object):
    # def __init__(self):
    #     self.file = open('gank.json', 'wb')

    def process_item(self, item, spider):
        print('Start Process')
        # line = json.dumps(dict(item)) + "\n"
        # self.file.write(line)
        if item['title']:
            print(item['title'])
            # print(item['images'])
            # print(item['leftLink'])
            # print(item['rightLink'])
        else:
            raise DropItem('Missing title in %s' % item)

        # TODO: process the item however you need
        return item

    def open_spider(self, spider):
        print('Open Spider')

    def close_spider(self, spider):
        print('Close Spider')

Of course, this is also where you could validate, deduplicate, or persist the scraped data (covered in the upcoming articles).

Activating an Item Pipeline component

Add the following setting to settings.py in the project directory:

ITEM_PIPELINES = {
   'gank.pipelines.GankPipeline': 300,
}

The integer values in this setting (conventionally in the 0–1000 range) determine the order in which the item pipeline components run: items pass from lower-valued pipelines to higher-valued ones.
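To make the ordering concrete, here is a sketch of the setting with a second, hypothetical DuplicatesPipeline registered ahead of GankPipeline (only GankPipeline exists in this article's project):

```python
# settings.py — DuplicatesPipeline is hypothetical, shown only to illustrate ordering
ITEM_PIPELINES = {
    'gank.pipelines.DuplicatesPipeline': 100,  # lower value: items reach it first
    'gank.pipelines.GankPipeline': 300,        # higher value: runs afterwards
}
```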

Run the spider and check the result:

$> scrapy crawl gank --nolog

[Image: project gank result]

    Original author: 甚了
    Original article: https://www.jianshu.com/p/4ce9048732f1
    This article is reposted from the web to share knowledge; if it infringes any rights, please contact the blogger for removal.