Simulated login with scrapy + selenium + chrome, plus techniques for getting around anti-crawler measures

A rough day. Without further ado, here it is. The code is hosted on GitHub at: https://github.com/zhangshier/scrapy-

The site we log in to is Qichacha (企查查), login page at www.qichacha.com/user_login

1. Create the crawler

scrapy startproject qichacha  # create the project files

cd qichacha 

scrapy genspider qicha www.qichacha.com  # generate the spider (creates spiders/qicha.py)

Create the middlewares.py file.

Code:

# -*- coding: utf-8 -*-

# Define here the models for your spider middleware
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals
import random
from scrapy.conf import settings


class MiddlewaretestSpiderMiddleware(object):
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, dict or Item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Response, dict
        # or Item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


# Rotate the User-Agent
class UAMiddleware(object):
    user_agent_list = settings['USER_AGENT_LIST']

    def process_request(self, request, spider):
        ua = random.choice(self.user_agent_list)
        request.headers['User-Agent'] = ua


# Anti-anti-crawler: rotate the proxy IP
class ProxyMiddleware(object):
    ip_list = settings['IP_LIST']

    def process_request(self, request, spider):
        ip = random.choice(self.ip_list)
        request.meta['proxy'] = ip


from selenium import webdriver
from scrapy.http import HtmlResponse


class JSPageMiddleware(object):
    # Fetch the page dynamically through Chrome
    def process_request(self, request, spider):
        if spider.name == "qicha":
            spider.browser.get(request.url)
            import time
            time.sleep(3)
            spider.browser.find_element_by_xpath('//div[@class="form-group has-feedback m-l-lg m-r-lg m-t-xs m-b-none"]/input[@name="nameNormal"]').send_keys("your own account")
            spider.browser.find_element_by_xpath('//div[@class="form-group has-feedback m-l-lg m-r-lg m-t-xs m-b-none"]/input[@name="pwdNormal"]').send_keys("your own password")
            time.sleep(9)
            # spider.browser.find_element_by_xpath('//div[@class="m-l-lg m-r-lg m-t-lg"]/button[@class="btn  btn-primary    m-t-n-xs btn-block btn-lg font-15"]').click()
            return HtmlResponse(url=spider.browser.current_url, body=spider.browser.page_source, encoding="utf-8")
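The fixed time.sleep() calls above do work, but they either waste time or break when the page is slow to load. As a rough alternative (my own sketch, not part of the original post), Selenium's explicit waits block only until the login inputs actually appear; nameNormal and pwdNormal are the same field names targeted by the XPath expressions above.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def fill_login_form(browser, account, password):
    # Wait up to 10 seconds for each input to show up before typing into it
    wait = WebDriverWait(browser, 10)
    wait.until(EC.presence_of_element_located((By.NAME, "nameNormal"))).send_keys(account)
    wait.until(EC.presence_of_element_located((By.NAME, "pwdNormal"))).send_keys(password)

Inside JSPageMiddleware this would replace the two find_element_by_xpath calls and the first sleep; the longer time.sleep(9) presumably leaves time to complete any manual verification step before the (commented-out) login button click.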

Next, edit qicha.py under the spiders directory.

Code:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.http import FormRequest
from selenium import webdriver
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals


class QichaSpider(scrapy.Spider):
    name = "qicha"
    allowed_domains = ["www.qichacha.com"]
    start_urls = ['http://www.qichacha.com/user_login']

    def __init__(self):
        # Point executable_path at your own chromedriver location
        self.browser = webdriver.Chrome(executable_path="C:/Program Files (x86)/Google/Chrome/Application/chromedriver.exe")
        super(QichaSpider, self).__init__()
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        # Close Chrome when the spider exits
        print("spider closed")
        # self.browser.quit()

    def parse(self, response):
        print(response.body.decode())
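One note on __init__ above: the executable_path is machine-specific, and this setup opens a visible Chrome window. If you would rather run without a window, a headless variant can be swapped in; this is my own sketch, not from the original post, and depending on your Selenium version the keyword argument is chrome_options or options.

from selenium import webdriver

def make_headless_browser(driver_path):
    # Start Chrome without a visible window (requires a Chrome build with headless support)
    options = webdriver.ChromeOptions()
    options.add_argument("--headless")
    options.add_argument("--disable-gpu")
    return webdriver.Chrome(executable_path=driver_path, chrome_options=options)

Also keep in mind that self.browser.quit() is commented out in spider_closed, so each run leaves a Chrome process behind until you close it yourself.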

Configure settings.py, which holds the proxy IPs and user agents:

DOWNLOADER_MIDDLEWARES = {
    'qichacha.middlewares.UAMiddleware': 543,      # rotates the User-Agent
    'qichacha.middlewares.ProxyMiddleware': 544,   # rotates the proxy IP
    'qichacha.middlewares.JSPageMiddleware': 540,  # matches middlewares.py, performs the automated login
}

# user agents
USER_AGENT_LIST = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36',
    'Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11',
]

# proxy IPs
IP_LIST = [
    'http://175.155.25.19:80',
    'http://124.88.67.19:80',
    'http://110.72.32.103:8123',
    'http://175.155.25.40:808',
    'http://175.155.24.2:808',
]
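With the spider, middlewares, and settings in place, start the crawl from the project root:

scrapy crawl qicha

A caveat on IP_LIST: free proxies like these go stale quickly, so it is worth checking them shortly before a run. The snippet below is my own quick check using the requests library (not part of the original post); the test URL is an arbitrary plain-HTTP page.

import requests

IP_LIST = [
    'http://175.155.25.19:80',
    'http://124.88.67.19:80',
    'http://110.72.32.103:8123',
    'http://175.155.25.40:808',
    'http://175.155.24.2:808',
]

def proxy_alive(proxy, timeout=5):
    # A proxy counts as usable if it can fetch a plain HTTP page within the timeout
    try:
        resp = requests.get('http://www.baidu.com', proxies={'http': proxy}, timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

if __name__ == '__main__':
    for proxy in IP_LIST:
        print(proxy, 'OK' if proxy_alive(proxy) else 'dead')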

Original author: a十二_4765
Original article: https://www.jianshu.com/p/eaab2b59beb1