I want to get the list of scraped items in my main script, rather than using the scrapy shell.
I know that the class FooSpider I defined has a parse method, and that this method returns a list of Items. The Scrapy framework calls this method, but how can I get hold of this returned list myself?
I have found many posts about this, but I don't understand what they are saying.
For context, here is the official example code:
import scrapy

from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/",
    ]

    def parse(self, response):
        for href in response.css("ul.directory.dir-col > li > a::attr('href')"):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    def parse_dir_contents(self, response):
        result = []
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            result.append(item)
        return result
How can I get this result back in a main Python script such as main.py or run.py?
if __name__ == "__main__":
    ...
    result = xxxx()
    for item in result:
        print(item)
Can anyone provide a code snippet that shows where I can get hold of this returned list?
Thank you very much!
Best answer: Here is an example of how to collect all items in a list, using a pipeline:
#!/usr/bin/python3

# Scrapy API imports
import scrapy
from scrapy.crawler import CrawlerProcess

# your spider
from FollowAllSpider import FollowAllSpider

# list to collect all items
items = []

# pipeline to fill the items list
class ItemCollectorPipeline(object):
    def process_item(self, item, spider):
        items.append(item)
        return item  # hand the item back so any later pipelines still see it

# create a crawler process with the specified settings
process = CrawlerProcess({
    'USER_AGENT': 'scrapy',
    'LOG_LEVEL': 'INFO',
    'ITEM_PIPELINES': {'__main__.ItemCollectorPipeline': 100}
})

# start the spider; start() blocks until the crawl is finished
process.crawl(FollowAllSpider)
process.start()

# print the items
for item in items:
    print("url: " + item['url'])
You can get FollowAllSpider from here, or use your own spider (a minimal stand-in is sketched after the output below). Example output when used with my web page:
$ ./crawler.py
2018-09-16 22:28:09 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-09-16 22:28:09 [scrapy.utils.log] INFO: Versions: lxml 3.7.1.0, libxml2 2.9.4, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.5.3 (default, Jan 19 2017, 14:11:04) - [GCC 6.3.0 20170118], pyOpenSSL 16.2.0 (OpenSSL 1.1.0f 25 May 2017), cryptography 1.7.1, Platform Linux-4.9.0-6-amd64-x86_64-with-debian-9.5
2018-09-16 22:28:09 [scrapy.crawler] INFO: Overridden settings: {'USER_AGENT': 'scrapy', 'LOG_LEVEL': 'INFO'}
[...]
2018-09-16 22:28:15 [scrapy.core.engine] INFO: Spider closed (finished)
url: http://www.frank-buss.de/
url: http://www.frank-buss.de/impressum.html
url: http://www.frank-buss.de/spline.html
url: http://www.frank-buss.de/schnecke/index.html
url: http://www.frank-buss.de/solitaire/index.html
url: http://www.frank-buss.de/forth/index.html
url: http://www.frank-buss.de/pi.tex
[...]
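The FollowAllSpider used above comes from the linked post and is not reproduced in the answer. As a rough stand-in, a minimal spider that yields one item with a url field per visited page could look like the sketch below; the spider name, allowed domain, and start URL are assumptions inferred from the output, not the original code:

import scrapy
from scrapy.http import TextResponse

class FollowAllSpider(scrapy.Spider):
    # name, allowed domain, and start URL are assumptions, not the original spider
    name = "followall"
    allowed_domains = ["frank-buss.de"]
    start_urls = ["http://www.frank-buss.de/"]

    def parse(self, response):
        # one item per visited page, matching the item['url'] access above
        yield {"url": response.url}
        # follow links only on text responses (binary responses have no selectors)
        if isinstance(response, TextResponse):
            for href in response.css("a::attr(href)").extract():
                yield response.follow(href, callback=self.parse)

Restricting allowed_domains keeps the crawl on one site; without it, following every link would quickly wander off the start domain.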