How to control the order of yields in Scrapy

Help! Please look at the following Scrapy code and the crawler's result. I want to scrape some data from http://china.fathom.info/data/data.json using only Scrapy, but I don't know how to control the order of the yields. I expected all the parse_member requests to be processed inside the loop and only then for group_item to be returned, but it seems the yielded item is always emitted before the yielded requests are handled.

import json

from scrapy import Request

start_urls = [
    "http://china.fathom.info/data/data.json"
]

def parse(self, response):
    groups = json.loads(response.body)['group_members']
    for i in groups:
        group_item = GroupItem()
        group_item['name'] = groups[i]['name']
        group_item['chinese'] = groups[i]['chinese']
        group_item['members'] = []

        members = groups[i]['members']
        for member in members:
            yield Request(self.person_url % member['id'], meta={'group_item': group_item, 'member': member},
                          callback=self.parse_member, priority=100)
        yield group_item

def parse_member(self, response):
    group_item = response.meta['group_item']
    member = response.meta['member']
    person = json.loads(response.body)
    ego = person['ego']
    group_item['members'].append({
        'id': ego['id'],
        'name': ego['name'],
        'chinese': ego['chinese'],
        'role': member['role']
    })

Resulting data in MongoDB:

Best answer: You need to yield the item in the final callback. parse does not pause until parse_member finishes, so the group_item inside parse has not been modified yet while parse_member is still doing its work.
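The scheduling behavior can be shown without Scrapy at all: a callback is just a generator, and the engine drains it completely before any yielded request is downloaded. In this plain-Python sketch, the tuples stand in for scrapy.Request objects and items:

```python
# Pure-Python demo (no Scrapy needed): the engine drains the callback
# generator fully before any yielded "request" is actually fetched.
def parse():
    item = {'members': []}
    for member_id in (1, 2):
        yield ('request', member_id)   # stand-in for scrapy.Request
    yield ('item', item)               # emitted before any callback ran

drained = list(parse())
# All three objects come out immediately; the item is still empty
# because no response has been processed yet.
print(drained[-1])  # ('item', {'members': []})
```

This is why the item scraped by parse never contains the members: it leaves the generator while the member requests are still waiting in the scheduler.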

Don't yield the item in parse, only in parse_member: you have already copied the in-progress item into meta, and you already recover it in parse_member with response.meta['group_item'].
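A minimal sketch of that pattern in plain Python, with a FakeResponse class standing in for Scrapy's response object: the shared item is passed through meta, each callback appends to it, and a pending counter (an assumption added here, so the completed group is emitted exactly once rather than once per member) decides when to yield:

```python
# Sketch (no Scrapy): append to the shared item in the member callback
# and yield the group only from the *last* callback, tracked by a
# 'pending' counter. FakeResponse and the member data are stand-ins.
class FakeResponse:
    def __init__(self, meta, body):
        self.meta = meta
        self.body = body

def parse_member(response):
    group_item = response.meta['group_item']
    group_item['members'].append(response.body['name'])
    group_item['pending'] -= 1
    if group_item['pending'] == 0:      # all member responses arrived
        yield group_item                # emit the completed item once

group = {'name': 'g', 'members': [], 'pending': 2}
out = []
for body in ({'name': 'alice'}, {'name': 'bob'}):
    out.extend(parse_member(FakeResponse({'group_item': group}, body)))

print(out)  # one item, with both members collected
```

In the real spider, parse would set group_item['pending'] = len(members) before yielding the member requests, and parse_member would yield group_item instead of returning nothing.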
