Python: 02 The Scrapy Crawler Framework

  1. Install the Python dependencies pypiwin32, pymongo, and scrapy
C:\Users\wu-chao> pip install  pypiwin32 pymongo
C:\Users\wu-chao> pip install  scrapy
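A note on the dependencies: pypiwin32 pulls in the Windows-specific pywin32 bindings that Scrapy (via Twisted) needs on Windows, and pymongo is the MongoDB driver used by the pipeline later on. On non-Windows platforms only scrapy and pymongo should be required.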

  2. Switch to the directory where the project should live and create a new project
C:\Users\wu-chao>F:
F:\> cd mongodb
F:\mongodb> scrapy startproject baidutieba
New Scrapy project 'baidutieba', using template directory 'd:\\python27\\lib\\site-packages\\scrapy\\templates\\project', created in:
    F:\mongodb\baidutieba

You can start your first spider with:
    cd baidutieba
    scrapy genspider example example.com
    
# Project created successfully!
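The startproject template generates roughly the following layout (middlewares.py may or may not appear depending on the Scrapy version); items.py, settings.py, and pipelines.py are the files edited in the steps below:

baidutieba/
    scrapy.cfg
    baidutieba/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py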
  3. Next, prepare for writing the crawler logic by clarifying the order in which the files execute:
start.py [program entry point; a sketch of it follows this list]
-------> a. pipelines.py [the Pipeline is instantiated]
-------> b. spiders/teiba_baidu.py: the Spider is instantiated and start_requests is called
-------> c. the callback function returns items (item objects)
-------> d. the Pipeline object calls process_item to handle each item object
-------> e. done
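start.py itself is not listed in the original post; a minimal sketch of such an entry script (placed in the project root, next to scrapy.cfg) could be:

# start.py -- hypothetical entry script; equivalent to running
# "scrapy crawl teiba_baidu" from the project directory
from scrapy import cmdline

cmdline.execute("scrapy crawl teiba_baidu".split())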
  4. Open settings.py and enable the ITEM_PIPELINES setting
# Uncomment the corresponding lines so that the setting reads:
ITEM_PIPELINES = {
   'baidutieba.pipelines.BaidutiebaPipeline': 300,
}
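The number 300 is the pipeline's order value: when several pipelines are enabled, they run on each item in ascending order of this value (by convention an integer from 0 to 1000).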
  5. Define the Item class in items.py
import scrapy

class BaidutiebaItem(scrapy.Item):
    # Name and URL of a large (top-level) forum class
    largeClassName = scrapy.Field()
    largeClassUrl = scrapy.Field()

    # Name and URL of a small class under the large class
    smallClassName = scrapy.Field()
    smallClassUrl = scrapy.Field()
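A scrapy.Item behaves like a dict, which is exactly what the pipeline below relies on when it calls dict(item). A quick check (the field values here are made up for illustration):

item = BaidutiebaItem(largeClassName=u'娱乐明星', smallClassName=u'时尚人物')
print dict(item)   # {'largeClassName': u'娱乐明星', 'smallClassName': u'时尚人物'}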
    
  6. Write the Spider to implement the core crawler logic; note that the callback function must yield the item: BaidutiebaItem(largeClassName=largeClassName, largeClassUrl=largeClassUrl, smallClassName=smallClassName, smallClassUrl=smallClassUrl)
# -*- coding: utf-8 -*-
import scrapy

from baidutieba.items import BaidutiebaItem


class TiebaBaiduSpider(scrapy.Spider):
    name = 'teiba_baidu'
    allowed_domains = ['tieba.baidu.com']
    # A single URL string is fine here because start_requests is overridden below
    start_urls = 'http://tieba.baidu.com/f/index/forumclass'

    def start_requests(self):
        print "-"*10, "crawler started", "-"*10
        # Sanity check that the Item class was imported correctly
        print BaidutiebaItem

        header = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36"}
        # The header dict is also stashed in meta so later callbacks could reuse it
        yield scrapy.Request(url=self.start_urls, headers=header, meta={'headers': header}, callback=self.parseClass)

    def parseClass(self, resp):
        """Parse the response to the request made in start_requests."""

        # Each large (top-level) class sits in its own class-item div
        largeClassList = resp.xpath("//div[@id='right-sec']/div[@class='clearfix']/div[@class='class-item']")
        for largeClass in largeClassList:

            # Name of the large class
            largeClassName = largeClass.xpath("a/text()").extract()[0]
            # URL of the large class
            largeClassUrl = "http://tieba.baidu.com" + largeClass.xpath("a/@href").extract()[0]

            # The small classes under this large class
            smallClassList = largeClass.xpath("ul/li")
            for smallClass in smallClassList:

                # Name of the small class
                smallClassName = smallClass.xpath("a/text()").extract()[0]
                # URL of the small class; &pn= takes the page number
                smallClassUrl = "http://tieba.baidu.com" + smallClass.xpath("a/@href").extract()[0] + "&pn="

                item = BaidutiebaItem(largeClassName=largeClassName, largeClassUrl=largeClassUrl,
                                      smallClassName=smallClassName, smallClassUrl=smallClassUrl)
                yield item
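With the spider saved under spiders/, it can be started either through start.py or directly from the project directory:

F:\mongodb\baidutieba> scrapy crawl teiba_baidu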
  7. Write the Pipeline, which stores each item in the database
# -*- coding: utf-8 -*-
import pymongo


class BaidutiebaPipeline(object):

    def open_spider(self, spider):
        """Called when the spider opens: connect to MongoDB."""
        self.client = pymongo.MongoClient('mongodb://localhost:27017')
        print self.client

    def process_item(self, item, spider):
        """Called for every item the spider yields."""
        print "processing item"
        # Convert the Item object to a plain dict (key-value pairs)
        itemDict = dict(item)

        # Database 'Tieba', collection 'datas': insert the dict as a document
        self.client['Tieba']['datas'].insert_one(itemDict)
        # Return the item so any later pipelines can process it too
        return item

    def close_spider(self, spider):
        """Called when the spider closes: drop the connection."""
        self.client.close()
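Besides the mongo shell used in the next step, the inserted documents can also be checked from Python; a minimal sketch assuming the same local MongoDB the pipeline writes to:

# -*- coding: utf-8 -*-
import pymongo

client = pymongo.MongoClient('mongodb://localhost:27017')
# Same database and collection the pipeline uses
for doc in client['Tieba']['datas'].find().limit(3):
    print doc
client.close()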
  8. Query the database and check the data:
> use Tieba
> db.datas.find({"smallClassName" : "时尚人物"})
{ "_id" : ObjectId("5a7407489aad34099c14605c"), "smallClassName" : "时尚人物", "largeClassName" : "娱乐明星", "smallClassUrl" : "http://tieba.baidu.com/f/index/forumpark?cn=%E6%97%B6%E5%B0%9A%E4%BA%BA%E7%89%A9&ci=0&pcn=%E5%A8%B1%E4%B9%90%E6%98%8E%E6%98%9F&pci=0&ct=1&pn=", "largeClassUrl" : "http://tieba.baidu.com/f/index/forumpark?pcn=娱乐明星&pci=0&ct=1" }
>

Done!

    Original author: 程序员_超
    Original post: https://www.jianshu.com/p/e57f4b6e1122