When scraping data with a Python crawler, you will often run into a site's anti-scraping measures. The most common one targets the User-Agent field in the request headers: if you do not set the headers yourself, the User-Agent announces that the request comes from a Python script, and any site with anti-scraping intent will simply refuse the connection. Modifying the headers disguises your crawler as a normal browser visit and avoids this problem.
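A quick way to see what an unconfigured request advertises is to fetch a header-echo service and inspect what it received. This is a minimal sketch; httpbin.org is used here purely as an illustrative endpoint, not something from the original setup:

import urllib2

# Without custom headers, urllib2 announces itself as "Python-urllib/x.y",
# which is exactly the signature anti-scraping rules look for.
response = urllib2.urlopen('http://httpbin.org/headers')  # illustrative echo service
print response.read()  # the echoed User-Agent will read "Python-urllib/2.7" or similar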
How to set the headers
When requesting a page with urllib2
import urllib2  # Python 2; on Python 3 use urllib.request instead

def get_page_source(url):
    # Browser-like headers so the request is not identified as a Python script
    headers = {'Accept': '*/*',
               'Accept-Language': 'en-US,en;q=0.8',
               'Cache-Control': 'max-age=0',
               'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36',
               'Connection': 'keep-alive',
               'Referer': 'http://www.baidu.com/'
               }
    req = urllib2.Request(url, None, headers)  # data=None, so this is a GET request
    response = urllib2.urlopen(req)
    page_source = response.read()
    return page_source
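A minimal usage sketch for the function above (the URL is only an example):

page = get_page_source('http://www.baidu.com/')
print page[:200]  # first 200 characters of the fetched HTML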
Requesting a page with PhantomJS
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

def get_headers_driver():
    desire = DesiredCapabilities.PHANTOMJS.copy()
    headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
               'Accept-Language': 'zh-CN,zh;q=0.8,en;q=0.6,ja;q=0.4',
               'Cache-Control': 'max-age=0',
               'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 '
                             '(KHTML, like Gecko) Chrome/59.0.3071.115 Safari/537.36',
               'Connection': 'keep-alive',
               'Referer': 'http://www.ziroom.com/?utm_source=pinzhuan&utm_medium=baidu&utm_campaign=biaoti'
               }
    # PhantomJS picks up custom headers from the phantomjs.page.customHeaders.* capabilities
    for key, value in headers.iteritems():  # Python 2; use headers.items() on Python 3
        desire['phantomjs.page.customHeaders.{}'.format(key)] = value
    # Change --load-images=yes to no to stop the browser from loading images
    driver = webdriver.PhantomJS(desired_capabilities=desire, service_args=['--load-images=yes'])
    return driver
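Usage then follows the normal Selenium pattern; this sketch assumes the phantomjs binary is available on your PATH, and the URL is only an example:

driver = get_headers_driver()
driver.get('http://www.ziroom.com/')  # requests now carry the custom headers set above
print driver.page_source[:200]
driver.quit()  # shut down the PhantomJS process when finished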