Python Scrapy: Reddit confirmation button breaks crawling

I'm trying to scrape data from a site, and when I hit an over-18 subreddit I get a warning page. My crawler normally works on most reddit pages and I can fetch data successfully. I tried using Selenium to get to the next page; the browser opens it successfully, but the crawler won't follow that page. Here is my code:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

from darknet.items import darknetItem  # the project's Item class (module path assumed)


class DarknetmarketsSpider(scrapy.Spider):
    name = "darknetmarkets"
    allowed_domains = ["reddit.com"]  # domain only; a scheme here breaks offsite filtering
    start_urls = (
        'http://www.reddit.com/r/darknetmarkets',
    )
    # Note: rules are only honored by CrawlSpider subclasses, not scrapy.Spider
    rules = (Rule(LinkExtractor(allow=()), callback='parse_obj', follow=False),)

    def __init__(self):
        self.driver = webdriver.Firefox()

    def parse(self, response):
        self.driver.get('http://www.reddit.com/r/darknetmarkets')
        #self.driver.get('https://www.reddit.com/over18?dest=https%3A%2F%2Fwww.reddit.com%2Fr%2Fdarknetmarketsnoobs')

        # Click the "continue" button on the over-18 warning page until it is gone
        while True:
            try:
                YES_BUTTON = '//button[@value="yes"]'
                self.driver.find_element_by_xpath(YES_BUTTON).click()
            except NoSuchElementException:
                break

        self.driver.close()

        # This still extracts links from the original Scrapy response, not from
        # the page Selenium navigated to
        item = darknetItem()
        item['url'] = []
        for link in LinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item['url'].append(link.url)
            print(link)

The button's HTML:

<button class="c-btn c-btn-primary" type="submit" name="over18" value="yes">continue</button>

Best answer: I see you're trying to get past the age-restriction screen on that subreddit. Once you click the "continue" button, that choice is saved as a cookie, so you have to feed it back into Scrapy.

After clicking it with Selenium, save the cookies and send them to Scrapy.

Code adapted from scrapy authentication login with cookies:

import scrapy
from selenium import webdriver


class MySpider(scrapy.Spider):
    name = 'MySpider'
    start_urls = ['http://reddit.com/']

    def get_cookies(self):
        # Click through the age gate once with Selenium and capture the cookies
        self.driver = webdriver.Firefox()
        base_url = "http://www.reddit.com/r/darknetmarkets/"
        self.driver.get(base_url)
        self.driver.find_element_by_xpath("//button[@value='yes']").click()
        cookies = self.driver.get_cookies()
        self.driver.close()
        # Scrapy accepts a plain name -> value mapping; strip Selenium's extra fields
        return {c['name']: c['value'] for c in cookies}

    def parse(self, response):
        # Re-request the subreddit with the age-gate cookies attached
        yield scrapy.Request("http://www.reddit.com/r/darknetmarkets/",
                             cookies=self.get_cookies(),
                             callback=self.darkNetPage)

    def darkNetPage(self, response):
        # The callback the answer hands off to; parse the subreddit page here
        pass
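
Worth noting as an alternative: the warning page is an ordinary HTML form (its submit button is named over18 with the value yes, as shown above), so Scrapy can submit it directly and skip Selenium entirely. Below is a minimal sketch, not from the original answer; it assumes the /over18?dest=... interstitial from the question still accepts a plain form post, and the spider name and after_gate callback are made up for illustration:

import scrapy


class Over18FormSpider(scrapy.Spider):
    name = 'over18form'  # hypothetical name, not from the original post
    start_urls = [
        'https://www.reddit.com/over18?dest=https%3A%2F%2Fwww.reddit.com%2Fr%2Fdarknetmarkets',
    ]

    def parse(self, response):
        # Press the "continue" button by submitting the form with over18=yes
        yield scrapy.FormRequest.from_response(
            response,
            formdata={'over18': 'yes'},
            callback=self.after_gate)

    def after_gate(self, response):
        # The cookie middleware now carries the saved choice on every
        # subsequent request, so normal crawling can resume from here
        self.logger.info("Past the age gate: %s", response.url)

Either way, the point of the accepted answer holds: Scrapy only keeps crawling once the saved over18 choice travels with its own requests as a cookie, whether that cookie comes from Selenium or from submitting the form inside Scrapy itself.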