1. Scrapy Crawlers: Static Page Scraping, Part 3 (spider.py Exercises)

Exercise 1. Scrape the content of a single page
URL: http://stackoverflow.com/questions?sort=votes
The page looks like this:

[Figure 1: screenshot of the Stack Overflow questions page]

Note: the command to run a single spider file is: scrapy runspider stackoverflow.py

To export the results to a file: scrapy runspider stackoverflow.py -o stackoverflow.csv
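Scrapy infers the export format from the output file's extension, so the same command can also write JSON or JSON Lines, for example: scrapy runspider stackoverflow.py -o stackoverflow.json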

```
# -*- coding: utf-8 -*-
import scrapy

class StackOverFlowSpider(scrapy.Spider):
    name = "stackoverflow"  # the name you use to run this spider inside a project
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    # parse() is the default callback that parses each response
    def parse(self, response):
        for question in response.xpath('//div[@class="question-summary"]'):
            title = question.xpath('.//div[@class="summary"]/h3/a/text()').extract_first()
            # the href is relative, so complete it with response.urljoin()
            links = response.urljoin(question.xpath('.//div[@class="summary"]/h3/a/@href').extract_first())
            # default='' keeps .strip() from raising when the excerpt is missing
            content = question.xpath('.//div[@class="excerpt"]/text()').extract_first(default='').strip()
            votes = question.xpath('.//span[@class="vote-count-post high-scored-post"]/strong/text()').extract_first()
            # alternative: votes = question.xpath('.//strong/text()').extract_first()
            answers = question.xpath('.//div[@class="status answered-accepted"]/strong/text()').extract_first()

            yield {
                'title': title,
                'links': links,
                'content': content,
                'votes': votes,
                'answers': answers,
            }
```
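Besides the runspider command, a spider class can also be driven from a plain Python script. A minimal sketch using Scrapy's CrawlerProcess, assuming the StackOverFlowSpider class above is defined in the same script and Scrapy 2.1+ for the FEEDS setting:

```
from scrapy.crawler import CrawlerProcess

# run the spider in-process and export items to CSV,
# mirroring the runspider -o command above
process = CrawlerProcess(settings={
    'FEEDS': {'stackoverflow.csv': {'format': 'csv'}},
})
process.crawl(StackOverFlowSpider)
process.start()  # blocks until the crawl finishes
```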
The output file looks like this:

![2](http://upload-images.jianshu.io/upload_images/5076126-29c8906471d5346a.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

**Exercise 2. Start from a list of URLs**
This is pagination where the page number appears in each URL, so you can build the whole list of pages up front and crawl them all.
URL: http://www.cnblogs.com/pick/#p1

![3](http://upload-images.jianshu.io/upload_images/5076126-0f1730c18b9ddb41.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)

```
# -*- coding: utf-8 -*-
import scrapy

class CnblogSpider(scrapy.Spider):
    name = "cnblogs"
    allowed_domains = ["cnblogs.com"]
    # build one start URL per page number
    start_urls = ['http://www.cnblogs.com/pick/#p%s' % p for p in range(1, 3)]

    def parse(self, response):
        for article in response.xpath('//div[@class="post_item"]'):
            title = article.xpath('.//div[@class="post_item_body"]/h3/a/text()').extract_first()
            # if the link were incomplete, use response.urljoin()
            title_link = article.xpath('.//div[@class="post_item_body"]/h3/a/@href').extract_first()
            content = article.xpath('.//p[@class="post_item_summary"]/text()').extract_first()
            author = article.xpath('.//div[@class="post_item_foot"]/a/text()').extract_first()
            author_link = article.xpath('.//div[@class="post_item_foot"]/a/@href').extract_first()
            # default='' keeps .strip() from raising when the comment count is missing
            comment = article.xpath('.//span[@class="article_comment"]/a/text()').extract_first(default='').strip()
            view = article.xpath('.//span[@class="article_view"]/a/text()').extract_first()

            # debug output
            print(title)
            print(title_link)
            print(content)
            print(author)
            print(author_link)
            print(comment)
            print(view)

            yield {
                'title': title,
                'title_link': title_link,
                'content': content,
                'author': author,
                'author_link': author_link,
                'comment': comment,
                'view': view,
            }
```
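The list comprehension in start_urls works because the page number is part of the URL. A slightly more flexible equivalent (a sketch, not from the original post, with a hypothetical spider name) overrides start_requests() and yields the paged requests itself. One caveat: a #p1 fragment is never sent to the server, so a site that paginates purely with URL fragments will return the same HTML for every page.

```
import scrapy

class CnblogPagedSpider(scrapy.Spider):
    name = "cnblogs_paged"  # hypothetical name, to avoid clashing with the spider above
    allowed_domains = ["cnblogs.com"]

    def start_requests(self):
        # generate one request per page instead of a static start_urls list
        for p in range(1, 3):
            url = 'http://www.cnblogs.com/pick/#p%s' % p
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        pass  # same item extraction as in the spider above
```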


The output file looks like this:

[Figure 5: screenshot of the exported file]

Key XPath technique: the attribute filter on the outermost element is what pins the node down precisely, much like an id; on the descendant tags an attribute filter is not always needed once the context is precise. At the last step, choose between an attribute value (@href) and the text content (text()) depending on what you need.
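To make the tip concrete, here is a standalone sketch using parsel (the selector library that Scrapy's response.xpath() is built on) and a made-up HTML snippet modeled on the cnblogs markup:

```
from parsel import Selector  # bundled with Scrapy

# made-up HTML modeled on the cnblogs markup above
html = '''
<div class="post_item">
  <div class="post_item_body">
    <h3><a href="/post/1">Some title</a></h3>
  </div>
</div>
'''

sel = Selector(text=html)
# outer element: the class filter pins down exactly which div we want
for item in sel.xpath('//div[@class="post_item"]'):
    # descendant tags: no attribute filter needed once the context is precise;
    # pick @href for an attribute value, text() for the text content
    print(item.xpath('.//h3/a/@href').extract_first())   # /post/1
    print(item.xpath('.//h3/a/text()').extract_first())  # Some title
```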

Exercise 3. Pagination again, but this time there is only a "next" link: suppose the URL contains no page numbers like 1, 2, and so on.


```
# -*- coding: utf-8 -*-
import scrapy

class QuoteSpider(scrapy.Spider):
    name = 'quote'
    start_urls = ['http://quotes.toscrape.com/tag/humor/']

    def parse(self, response):
        for quote in response.xpath('//div[@class="quote"]'):
            content = quote.xpath('.//span[@class="text"]/text()').extract_first()
            author = quote.xpath('.//small[@class="author"]/text()').extract_first()

            yield {
                'content': content,
                'author': author,
            }

        # parse the link to the next page
        next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
        if next_page is not None:
            # the href is relative, so make it absolute first
            next_page = response.urljoin(next_page)
            # request the next page and parse it with this same method
            yield scrapy.Request(next_page, callback=self.parse)
```
To parse the next page: next_page holds the next page's URL, and the spider yields a new Request for it with a callback pointing back to parse itself, so the crawl proceeds recursively until there is no "next" link left.
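Since Scrapy 1.4, the same recursion can be written with response.follow(), which accepts the relative href directly and resolves it against the current page, so the urljoin() step disappears. A sketch of the equivalent tail of parse():

```
# inside parse(), equivalent to the urljoin() + Request pair above
next_page = response.xpath('//li[@class="next"]/a/@href').extract_first()
if next_page is not None:
    # response.follow() resolves the relative href against the current page
    yield response.follow(next_page, callback=self.parse)
```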




Original author: siro刹那
Original article: https://www.jianshu.com/p/36935a667ec7