Scraping pages with Selenium: how to debug the response in scrapy shell

How to use scrapy shell

To check a Spider's parsing logic, we usually drop into scrapy shell and run a few snippets to verify it, for example to see whether a CSS selector is written correctly. Enter the shell like this:

$ scrapy shell example.com
2018-09-12 12:25:17 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-09-12 12:25:17 [scrapy.core.engine] INFO: Spider opened
2018-09-12 12:25:17 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://example.com> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x101e0f9e8>
[s]   item       {}
[s]   request    <GET http://example.com>
[s]   response   <200 http://example.com>
[s]   settings   <scrapy.settings.Settings object at 0x102bf4780>
[s]   spider     <DefaultSpider 'default' at 0x102fbdcc0>
[s] Useful shortcuts:
[s]   fetch(url[, redirect=True]) Fetch URL and update local objects (by default, redirects are followed)
[s]   fetch(req)                  Fetch a scrapy.Request and update local objects
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
In [1]: print(response)
<200 http://example.com>

The shell exposes several variables; the most commonly used is response, which represents the response received for the HTTP request. We can test it like this:

In [7]: response.css('body div h1::text').extract_first()

The problem

But what if our response is supposed to be rendered in a browser by selenium before being returned? Entering scrapy shell directly does not give us the browser-rendered response; we only get the plain HTTP response, with no JS executed.

For example, when we scrape a Taobao product listing page:

$ scrapy shell "https://s.taobao.com/search?q=%E5%B0%8F%E7%B1%B38&s=44"

and then, inside scrapy shell:

In [6]: xpath = '//div[@id="mainsrp-itemlist"]//div[@class="items"][1]//div[contains(@class, "item")]'

In [7]: response.xpath(xpath)
Out[7]: []

Entering scrapy shell directly, the HTML in the response contains no product list nodes.

The solution

Googling "scrapy shell selenium" turned up no good answer, but the official documentation has one: we can enter scrapy shell from inside the spider. By the time the response is handed to the spider, it has already been rendered by SeleniumDownloaderMiddlerware (our own middleware), so the product list is already in the response HTML and we can test our CSS selectors.

# -*- coding: utf-8 -*-
from scrapy import Spider, Request
from scrapytaobao.items import ProductItem

class TaobaoSpider(Spider):
    name = 'taobao'
    allowed_domains = ['s.taobao.com']  # the search pages live on s.taobao.com
    base_url = 'https://s.taobao.com/search'

    def start_requests(self):
        ...

    def parse(self, response):
        # open an interactive shell on the (already rendered) response
        from scrapy.shell import inspect_response
        inspect_response(response, self)
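The start_requests body is elided above. Judging from the URLs used earlier (s=0 and s=44), Taobao appears to paginate with an s offset of 44 items per page, so the request URLs could be built roughly like this (page_urls and the default page count are illustrative, not the author's code):

```python
from urllib.parse import urlencode

BASE_URL = 'https://s.taobao.com/search'

def page_urls(keyword, pages=3):
    # s is the item offset: 0, 44, 88, ... (44 results per page,
    # inferred from the s=44 URL used earlier in the article)
    for page in range(pages):
        yield BASE_URL + '?' + urlencode({'q': keyword, 's': 44 * page})
```

urlencode takes care of percent-encoding the Chinese keyword, matching the encoded URLs seen in the logs above.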

In the spider's parse method we call inspect_response(), passing in the response and the spider instance. Then we run the crawler:

$ scrapy crawl taobao
2018-09-12 12:27:48 [selenium.webdriver.remote.remote_connection] DEBUG: Finished Request
2018-09-12 12:27:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://s.taobao.com/search?q=%E9%AD%85%E6%97%8F&s=44> (referer: None)
[s] Available Scrapy objects:
[s]   scrapy     scrapy module (contains scrapy.Request, scrapy.Selector, etc)
[s]   crawler    <scrapy.crawler.Crawler object at 0x10a911a20>
[s]   item       {}
[s]   request    <GET https://s.taobao.com/search?q=%E9%AD%85%E6%97%8F&s=0>
[s]   response   <200 https://s.taobao.com/search?q=%E9%AD%85%E6%97%8F&s=0>
[s]   settings   <scrapy.settings.Settings object at 0x10b79b828>
[s]   spider     <TaobaoSpider 'taobao' at 0x10b8bf048>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   view(response)    View response in a browser
In [1]:

When scrapy reaches the request's callback (parse), it drops into scrapy shell. Let's check whether the response HTML really does contain the JS-rendered product list:

In [1]:  xpath = '//div[@id="mainsrp-itemlist"]//div[@class="items"][1]//div[contains(@class, "item")]'

In [2]: response.xpath(xpath)
Out[2]:
[<Selector xpath='//div[@id="mainsrp-itemlist"]//div[@class="items"][1]//div[contains(@class, "item")]' data='<div class="item J_MouserOnverReq item-a'>,
 <Selector xpath='//div[@id="mainsrp-itemlist"]//div[@class="items"][1]//div[contains(@class, "item")]' data='<div class="item J_MouserOnverReq  " dat'>,
...]

As you can see, we now get the product list, which solves the problem of debugging browser-rendered responses in scrapy shell.
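Note that inspect_response does not have to run on every response. A common refinement is to open the shell only when parsing unexpectedly fails; a sketch (the module-level parse and the yielded dict are placeholders for real extraction code):

```python
ITEM_XPATH = ('//div[@id="mainsrp-itemlist"]//div[@class="items"][1]'
              '//div[contains(@class, "item")]')

def parse(self, response):
    items = response.xpath(ITEM_XPATH)
    if not items:
        # Page looks wrong (e.g. JS not rendered): drop into the shell
        # to investigate, instead of interrupting every single crawl.
        from scrapy.shell import inspect_response
        inspect_response(response, self)
        return
    for item in items:
        yield {'html': item.get()}  # placeholder for real field extraction
```

This way normal crawls run uninterrupted, and the shell only opens on the pages you actually need to debug.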

    Original author: hjiangwen
    Original article: https://www.jianshu.com/p/3bada860f60c
    This article was reposted from the web to share knowledge; if it infringes your rights, please contact the blogger to have it removed.