Scraping dmoz/Home with Scrapy

I. Background

This assignment asks us to crawl every subdirectory under the Home category of DMOZ (http://www.dmoztools.net/Home/). The dmoz/Home subdirectories are shown in Figure 001.

[Figure 001: image001.png]

II. Process

1. Analysis of the dmoz page structure

Our analysis shows that the dmoz site is structured as a tree. Since it is a tree, we can crawl it with a tree traversal; because each node has many children, a level-order (breadth-first) traversal is a good fit. And since a tree is just a root node plus subtrees, we store each node together with a reference to its parent: three fields per JSON record are enough — the node name, the node URL, and the parent name.
In detail, the entries under each node fall into two kinds: subcategories and links. A subcategory is a direct child of the parent directory, and its URL is the parent's URL plus the subcategory name. A link points elsewhere: external links to outside sites appear under Sites, while internal links into other branches of the directory appear under Subcategories, mixed in with the true subcategories, so the two must be carefully told apart, as shown in Figure 003.
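As a sketch of this level-order traversal with parent links, the plain-Python snippet below builds the three-field records described above; `children_of` is a made-up stand-in for the links discovered on each page:

```python
from collections import deque

def bfs_records(root, children_of):
    """Level-order traversal; each record stores the node plus its parent name.

    root: a (name, url) pair; children_of: made-up map url -> [(name, url), ...].
    """
    records = []
    queue = deque([(root, None)])  # (node, parent name); the root has no parent
    while queue:
        (name, url), parent = queue.popleft()
        records.append({"name": name, "url": url, "father_name": parent})
        for child in children_of.get(url, []):
            queue.append((child, name))  # children are visited level by level
    return records

tree = {"/Home/": [("Cooking", "/Home/Cooking/"), ("Gardening", "/Home/Gardening/")]}
print(bfs_records(("Home", "/Home/"), tree))
```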

[Figure 003: image003.png]

2. XPath analysis

2.1 XPath for the parent directory name
The parent directory name lives in the head section of the HTML, as shown in Figure 005.
The XPath for the parent directory name is: "/html/head/meta[2]/@content"

[Figure 005: image005.png]

2.2 Subcategory URLs and names under Subcategories
The location of the subcategories under Subcategories is shown in Figure 007.
The XPaths for the subcategory URLs and names under Subcategories are:
"//div[@class='section-wrapper']/section/div/div[@class='cat-item']/a/@href"
"//div[@class='section-wrapper']/section/div/div[@class='cat-item']/a/div/text()"

[Figure 007: image007.png]

2.3 External link URLs and names under Sites
The location of the external links under Sites is shown in Figure 009.
The XPaths for the external link URLs and names under Sites are:
"//div[@class='title-and-desc']/a/@href"
"//div[@class='title-and-desc']/a/div/text()"

[Figure 009: image009.png]

3. Setting up the Scrapy project

Create a Scrapy project, then add a new spider file dmozSpider.py under the spiders folder and start writing the crawler code. The project is created with:
scrapy startproject dmoz
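The command generates the usual Scrapy skeleton; the files edited in the next section live here:

```
dmoz/
├── scrapy.cfg          # deploy configuration
└── dmoz/
    ├── __init__.py
    ├── items.py        # item definitions (section 4.1)
    ├── middlewares.py
    ├── pipelines.py
    ├── settings.py     # crawler settings (section 4.2)
    └── spiders/
        └── __init__.py # dmozSpider.py (section 4.3) is created here
```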

4. Writing the Python files

4.1 items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here
    name = scrapy.Field()          # node name
    url = scrapy.Field()           # node URL
    father_name = scrapy.Field()   # parent node name
    desc = scrapy.Field()          # site description (Sites entries only)

4.2 settings.py
To keep the server from detecting the crawler and refusing access, we first set USER_AGENT in settings.py so the crawler presents itself as a browser, by adding the following line:

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

Next, slow the crawl by setting a delay between page requests, i.e. uncomment the following line (remove the leading "#"):
DOWNLOAD_DELAY = 3

4.3 dmozSpider.py

#!/usr/bin/python
# -*- coding: utf-8 -*-
import scrapy
from dmoz.items import DmozItem



class Dmoz(scrapy.Spider):
    name = "dmozspider"
    allowed_domains = ["dmoztools.net"]
    start_urls = ['http://www.dmoztools.net/Home/',]
    def parse(self, response):
        # The parent directory name sits in the page <head> (see section 2.1).
        father_name = response.xpath("/html/head/meta[2]/@content").extract_first()
        for dmoz in response.xpath("//div[@class='section-wrapper']/section/div/div[@class='cat-item']"):
            item = DmozItem()
            item['father_name'] = father_name.replace(', ','/')
            item['father_name'] = item['father_name'].replace(' ','_')
            item['url'] = dmoz.xpath("a/@href").extract_first()
            item['url'] = "http://www.dmoztools.net" + item['url']
            # The name may be split by an <i> icon element; take the first text
            # node after it (see Problem 2 below), then strip extra whitespace
            # and soft line breaks.
            item['name'] = dmoz.xpath("a/div/i[contains(./text(), text)]/following::text()[1]").extract_first()
            item['name'] = item['name'].replace('  ', '')
            item['name'] = item['name'].replace('\r\n', '')
            item['name'] = item['name'].replace(' ', '_')
            item['name'] = item['name'].replace('-n', 'n')

            # A direct child category's URL is the parent path plus its own
            # name: follow it deeper. Anything else under Subcategories is a
            # cross-link into another branch, so record it as a leaf instead.
            url_1 = "http://www.dmoztools.net/" + item['father_name'] + "/" + item['name']
            if url_1 in item['url']:
                yield scrapy.Request(item['url'], callback=self.parse)
            else:
                yield item

        for dmoz in response.xpath("//div[@class='title-and-desc']"):
            item = DmozItem()
            item['father_name'] = father_name.replace(', ','/')
            item['father_name'] = item['father_name'].replace(' ','_')
            item['url'] = dmoz.xpath("a/@href").extract_first()
            item['desc'] = dmoz.xpath('div[@class="site-descr "]/text()').extract_first().strip()
            item['name'] = dmoz.xpath("a/div/text()").extract_first()
            yield item
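The internal-versus-external test inside parse() can be isolated as a small helper (`is_direct_child` is a hypothetical name, shown only to illustrate the check):

```python
def is_direct_child(parent_path, name, url):
    """True if url is a direct subcategory of parent_path, i.e. the parent
    path plus the child's own name; cross-links into other branches fail."""
    expected = "http://www.dmoztools.net/" + parent_path + "/" + name
    return expected in url

# A true child is followed deeper; a cross-link is recorded as a leaf.
print(is_direct_child("Home", "Cooking", "http://www.dmoztools.net/Home/Cooking/"))  # True
print(is_direct_child("Home", "Pets", "http://www.dmoztools.net/Recreation/Pets/"))  # False
```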

5. Results
Run the spider with Scrapy:
scrapy crawl dmozspider -o dmoz.json
The output in dmoz.json looks like this:

{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Apartment_Living/", "name": "Apartment Living"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Consumer_Information/", "name": "Consumer Informatio\u00adn"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Business/Business_Services/Domestic_Services/", "name": "Domestic Services"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Emergency_Preparation/", "name": "Emergency Preparatio\u00adn"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Entertaining/", "name": "Entertaini\u00adng"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Computers/Home_Automation/", "name": "Home Automation"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Business/Small_Business/Home_Office/", "name": "Home Business"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Homeowners/Home_Buyers/", "name": "Home Buyers"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Home_Improvement/", "name": "Home Improvemen\u00adt"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Moving_and_Relocating/", "name": "Moving and Relocating"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Personal_Finance/", "name": "Personal Finance"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Personal_Organization/", "name": "Personal Organizati\u00adon"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Recreation/Pets/", "name": "Pets"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Rural_Living/", "name": "Rural Living"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Society/People/Generations_and_Age_Groups/Seniors/", "name": "Seniors"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Shopping/Home_and_Garden/", "name": "Shopping"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/Urban_Living/", "name": "Urban Living"}
{"father_name": "Home", "url": "http://www.dmoztools.net/Home/News_and_Media/", "name": "News and Media"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Family/Adoption/Weblogs/", "name": "Adoption"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Gardening/Bonsai_and_Suiseki/Bonsai/Weblogs/", "name": "Bonsai"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Cooking/Weblogs/", "name": "Cooking"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Consumer_Information/Electronics/Weblogs/", "name": "Electronics"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Consumer_Information/Food_and_Drink/Weblogs/", "name": "Food and Drink"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Personal_Finance/Money_Management/Weblogs/", "name": "Money Management"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Family/Parenting/Mothers/Weblogs/", "name": "Mothers"}
{"father_name": "Home/Weblogs", "url": "http://www.dmoztools.net/Home/Family/Parenting/Fathers/Stay_at_Home_Fathers/Weblogs/", "name": "Stay at Home Fathers"}
{"father_name": "Home/Software", "url": "http://www.dmoztools.net/Home/Family/Software/", "name": "Family"}
{"father_name": "Home/Software", "url": "http://www.dmoztools.net/Home/Gardening/Software/", "name": "Gardening"}
{"father_name": "Home/Software", "url": "http://www.dmoztools.net/Society/Genealogy/Software/", "name": "Genealogy"}
{"father_name": "Home/Software", "url": "http://www.dmoztools.net/Computers/Home_Automation/Software/", "name": "Home Automation"}
{"father_name": "Home/Software", "url": "http://www.dmoztools.net/Home/Personal_Finance/Software/", "name": "Personal Finance"}
{"father_name": "Home/Software", "url": "http://www.dmoztools.net/Home/Cooking/Recipe_Management/", "name": "Recipe Management"}
{"father_name": "Home/Software", "url": "http://www.agilairecorp.com/", "desc": "Environmental data management solutions. Products and support.", "name": "Agilaire "}
{"father_name": "Home/Software", "url": "http://www.homedesignersoftware.com/", "desc": "Software package for home remodelling, interior design, decks and landscaping creation. Company info, products list, shop and support.", "name": "Chief Architect Inc: Home Designer Software "}
{"father_name": "Home/Software", "url": "http://www.chorewars.com/", "desc": "Browser-based system loosely based on D&D that allows household members to claim experience points for doing tasks.  Monsters and treasure may be optionally defined with each chore.", "name": "Chore Wars "}
{"father_name": "Home/Software", "url": "http://www.kopykake.com/", "desc": "Cake decorating software for professional bakers and hobbyists.", "name": "Kopy Kake "}
{"father_name": "Home/Software", "url": "http://www.lets-clean-up.com/", "desc": "Software to help the family and small businesses organize cleaning chores and maintenance activities.", "name": "Let's Clean Up "}
{"father_name": "Home/Software", "url": "http://www.punchsoftware.com/", "desc": "Offers 3D home design suite for professional home planning with real model technology. Plan a dream house with this architecture design and 3D landscape software.", "name": "Punch Software "}

III. Problems and Solutions

1. 403 Forbidden: the crawler is denied access to the page

1.1 Set USER_AGENT in settings.py so the crawler presents itself as a browser:

USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

1.2 Set DOWNLOAD_DELAY in settings.py to space out requests between pages (uncomment the line by removing the leading "#"):

DOWNLOAD_DELAY = 3

Reference: https://www.jianshu.com/p/df9c0d1e9087

2. Subcategory names under Subcategories are not wrapped in a tag of their own; the fix is to select the text node that follows the <i> element:
 item['name'] = dmoz.xpath("a/div/i[contains(./text(), text)]/following::text()[1]").extract_first()

Reference: https://www.zhihu.com/question/38080188

    Original author: Carina_55
    Original article: https://www.jianshu.com/p/51419fec3915