Crawling with Scrapy and Storing the Data in MongoDB

For example, the site I set out to crawl is http://readcolor.com.

The goal is to scrape each book list's title, the number of books it contains, and a short summary.

(1) Configure the items file

import scrapy

class DuyuanItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    book_list_title = scrapy.Field()
    book_number = scrapy.Field()
    book_list_author = scrapy.Field()
    book_list_date = scrapy.Field()
    book_list_summary = scrapy.Field()
    book_url = scrapy.Field()
    book_name = scrapy.Field()
    book_author = scrapy.Field()
    book_summary = scrapy.Field()  # declare one Field per piece of data you want to scrape
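A Scrapy Item behaves like a dict, which is why the pipeline below can call dict(item) directly. A quick sanity check in a Python shell (a minimal sketch, assuming the DuyuanItem class above and that you run it from the project root):

from duyuan.items import DuyuanItem

item = DuyuanItem()
item['book_name'] = 'Example'
print(dict(item))  # {'book_name': 'Example'} -- dict(item) is exactly what gets stored later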

(2) Configure the settings file

ROBOTSTXT_OBEY = False  # covered in the basics: let the spider ignore robots.txt

ITEM_PIPELINES = {
    'duyuan.pipelines.DuyuanPipeline': 300,
}  # register the pipeline (300 is its order; lower values run first)

MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'duyuan'
MONGODB_DOCNAME = 'bookitem'  # MongoDB connection parameters, read by the pipeline below
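Before running anything, it is worth confirming that MongoDB is actually reachable with these parameters. A minimal check with pymongo (assumes a local mongod on the default port, matching the settings above):

import pymongo

client = pymongo.MongoClient(host='127.0.0.1', port=27017)
print(client.server_info()['version'])  # raises ServerSelectionTimeoutError if mongod is not running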

(3) Configure the pipelines file

import pymongo
from scrapy.conf import settings

class DuyuanPipeline(object):
    def __init__(self):
        host = settings['MONGODB_HOST']
        port = settings['MONGODB_PORT']
        db_name = settings['MONGODB_DBNAME']
        client = pymongo.MongoClient(host=host, port=port)
        db = client[db_name]
        self.post = db[settings['MONGODB_DOCNAME']]

    def process_item(self, item, spider):
        book_info = dict(item)
        self.post.insert_one(book_info)  # insert() is deprecated in pymongo 3+; use insert_one()
        return item
# Every MongoDB pipeline follows this same pattern -- just adapt it to your own fields.
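One caveat: from scrapy.conf import settings only works on old Scrapy releases; that module has since been removed. On current versions, the supported way to read settings in a pipeline is the from_crawler classmethod. A sketch of the same pipeline in that style (same setting names as above):

import pymongo

class DuyuanPipeline(object):
    @classmethod
    def from_crawler(cls, crawler):
        # Scrapy calls this hook with the running crawler, which carries the project settings
        return cls(
            host=crawler.settings.get('MONGODB_HOST'),
            port=crawler.settings.get('MONGODB_PORT'),
            db_name=crawler.settings.get('MONGODB_DBNAME'),
            doc_name=crawler.settings.get('MONGODB_DOCNAME'),
        )

    def __init__(self, host, port, db_name, doc_name):
        client = pymongo.MongoClient(host=host, port=port)
        self.post = client[db_name][doc_name]

    def process_item(self, item, spider):
        self.post.insert_one(dict(item))
        return item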

(4) Write the spider file

import scrapy
from duyuan.items import DuyuanItem

class ReadcolorSpider(scrapy.Spider):
    name = "readcolor"
    allowed_domains = ["readcolor.com"]
    start_urls = ['http://readcolor.com/lists']
    url = 'http://readcolor.com'

    def parse(self, response):
        book_list_group = response.xpath('//article[@style="margin:10px 0 20px;"]')
        for each in book_list_group:
            item = DuyuanItem()  # instantiate an item for this book list
            item['book_list_title'] = each.xpath('header/h3/a/text()').extract()[0]  # the list title
            item['book_number'] = each.xpath('p/a/text()').extract()[0]
            book_list_url = each.xpath('header/h3/a/@href').extract()[0]
            # yield hands the Request to Scrapy without ending this method (parse is a generator,
            # unlike a plain return); the href is relative, so it is joined onto the site root,
            # and callback names the method that will handle the response
            yield scrapy.Request(self.url + book_list_url, callback=self.parse_book_list_detail,
                                 dont_filter=True, meta={'item': item})

    def parse_book_list_detail(self, response):  # handle the detail page reached by following a list link
        item = response.meta['item']
        summary = response.xpath('//div[@id="list-description"]/p/text()').extract()
        item['book_list_summary'] = '\n'.join(summary)
        yield item
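Run the spider with scrapy crawl readcolor; once it finishes, the stored documents can be inspected straight from pymongo (a minimal sketch, using the database and collection names configured above):

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
for doc in client['duyuan']['bookitem'].find().limit(3):
    print(doc['book_list_title'], doc['book_number'])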
Original author: gogoforit
Original post: https://www.jianshu.com/p/1e93e0acd10b