python – Scrapy: CrawlSpider does not yield all links in nested callbacks

I have written a Scrapy CrawlSpider to crawl a site with a structure of category page > type page > list page > item page. On the category page there are many categories of machines, each of which has a type page listing many types; each type has a list of items, and finally each machine has a page containing information about it.

My spider has a rule that takes it from the home page to the category pages, where I have defined the callback parsecatpage. This creates an item, scrapes the categories, and yields a new request for each category on the page. I pass the item and the category name along in request.meta, and specify the callback to be parsetypepage.

parsetypepage gets the item from response.meta, then yields a request for each type, passing the item on, along with the category and type concatenated together, in request.meta. The callback is parsemachinelist.

parsemachinelist gets the item from response.meta, then yields a request for each entry in the list, passing the item, the category/type string, and a description via request.meta to the final callback, parsemachine. This reads the meta attributes and populates all the fields of the item from the information on the page plus what was passed down from the previous pages, and finally yields an item.
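In other words, each callback attaches data to its outgoing requests and the next callback reads it back from the response. A minimal sketch of that pattern (illustrative method names only; Request and MachineItem imported as in the full code below):

# Minimal sketch of the meta-passing pattern described above.
def parse_outer(self, response):
    item = MachineItem()
    req = Request('http://www.example.com/inner', callback=self.parse_inner)
    req.meta['item'] = item        # attach data to the outgoing request
    yield req

def parse_inner(self, response):
    item = response.meta['item']   # the same item object comes back out here
    yield item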

If I restrict it to a single category and type (for example contains(@href, "filter=c:Grinders") and contains(@href, "filter=t:Disc - Horizontal, Single End")) then it works, and there is a machine item for every machine on the final pages. The problem is that once I allow the spider to scrape all categories and all types, it only returns Scrapy items for the machines on the last list page it reaches, and once it has done that the spider finishes without getting the other categories etc.

Here is the (anonymised) code:

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.http import Request
from myspider.items import MachineItem
import urlparse


class MachineSpider(CrawlSpider):
    name = 'myspider'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/index.php']

    rules = (
        Rule(SgmlLinkExtractor(allow_domains=('example.com',), allow=('12\.html',), unique=True), callback='parsecatpage'),
        )

    def parsecatpage(self, response):
        hxs = HtmlXPathSelector(response)
        # this works:
        #   categories = hxs.select('//a[contains(@href, "filter=c:Grinders")]')
        # ...but the next line doesn't:
        categories = hxs.select('//a[contains(@href, "filter=c:Grinders") or contains(@href, "filter=c:Lathes")]')
        for cat in categories:
            item = MachineItem()
            req = Request(urlparse.urljoin(response.url,''.join(cat.select("@href").extract()).strip()),callback=self.parsetypepage)
            req.meta['item'] = item
            req.meta['machinecategory'] = ''.join(cat.select("./text()").extract())
            yield req

    def parsetypepage(self, response):
        hxs = HtmlXPathSelector(response)
        # this works:
        #   types = hxs.select('//a[contains(@href, "filter=t:Disc+-+Horizontal%2C+Single+End")]')
        # ...but the next line doesn't:
        types = hxs.select('//a[contains(@href, "filter=t:Disc+-+Horizontal%2C+Single+End") or contains(@href, "filter=t:Lathe%2C+Production")]')
        for typ in types:
            item = response.meta['item']
            req = Request(urlparse.urljoin(response.url,''.join(typ.select("@href").extract()).strip()),callback=self.parsemachinelist)
            req.meta['item'] = item
            req.meta['machinecategory'] = ': '.join([response.meta['machinecategory'],''.join(typ.select("./text()").extract())])
            yield req

    def parsemachinelist(self, response):
        hxs = HtmlXPathSelector(response)
        for row in hxs.select('//tr[contains(td/a/@href, "action=searchdet")]'):
            item = response.meta['item']
            machine_url = urlparse.urljoin(response.url, ''.join(row.select('./td/a[contains(@href,"action=searchdet")]/@href').extract()).strip())
            print machine_url  # debug: show each machine page URL as it is requested
            req = Request(machine_url, callback=self.parsemachine)
            req.meta['item'] = item
            req.meta['descr'] = row.select('./td/div/text()').extract()
            req.meta['machinecategory'] = response.meta['machinecategory']
            yield req

    def parsemachine(self, response):
        hxs = HtmlXPathSelector(response)
        item = response.meta['item']
        item['machinecategory'] = response.meta['machinecategory']
        item['comp_name'] = 'Name'
        item['description'] = response.meta['descr']
        item['makemodel'] = ' '.join([''.join(hxs.select('//table/tr[contains(td/strong/text(), "Make")]/td/text()').extract()),''.join(hxs.select('//table/tr[contains(td/strong/text(), "Model")]/td/text()').extract())])
        item['capacity'] = hxs.select('//tr[contains(td/strong/text(), "Capacity")]/td/text()').extract()
        relative_image_url = hxs.select('//img[contains(@src, "custom/modules/images")]/@src')[0].extract()
        abs_image_url = urlparse.urljoin(response.url, relative_image_url.strip())
        item['image_urls'] = [abs_image_url]
        yield item

SPIDER = MachineSpider()
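
For context, a spider like this is run by name from inside a Scrapy project (assuming the usual project layout):

scrapy crawl myspider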

So, for example, the spider will find Grinders on the category page and go to the Grinders type page, where it will find the Disc Horizontal Single End type; it will then go to that page, find the list of machines, go to each machine's page, and finally there will be an item for each machine. If you try to go for Grinders and Lathes, though, it will run all the way through the Grinders, then it will crawl Lathes and the Lathes type page and stop there, without ever yielding the requests for the Lathes list pages and the final Lathes pages.

Can anyone help? Why doesn't the spider get to the second (or third etc.) machine list page once there is more than one category of machine?

Sorry for the epic post, just trying to explain the problem!

Thanks!!

Best answer: You should print the URLs of the requests to make sure they are OK. You can also try this version:

def parsecatpage(self, response):
    hxs = HtmlXPathSelector(response)
    categories = hxs.select('//a[contains(@href, "filter=c:Grinders") or contains(@href, "filter=c:Lathes")]')
    for cat in categories:
        item = MachineItem()
        cat_url = urlparse.urljoin(response.url, cat.select("./@href").extract()[0])
        print 'url:', cat_url # to see what's there
        cat_name = cat.select("./text()").extract()[0]
        req = Request(cat_url, callback=self.parsetypepage, meta={'item': item, 'machinecategory': cat_name})
        yield req
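
Note one small design choice here: passing meta as a constructor argument is equivalent to assigning to req.meta after creating the request, as the question's code does; Scrapy supports both:

req = Request(cat_url, callback=self.parsetypepage, meta={'item': item})
# ...is the same as...
req = Request(cat_url, callback=self.parsetypepage)
req.meta['item'] = item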