I want to scrape some information from a website, and the data is spread across several pages.
The problem is that I don't know how many pages there are. There might be 2, but there could also be 4, or even just a single page.
How can I loop over the pages when I don't know how many there will be?
I know the URL pattern; it looks like the code below.
Also, the page names are not plain numbers: they are "pe2" for page 2, "pe4" for page 3, and so on, so I can't just loop over range(number).
Here is the dummy code with the loop I'm trying to fix:
import requests
from bs4 import BeautifulSoup

pages = ['', 'pe2', 'pe4', 'pe6', 'pe8']

for i in pages:
    url = "http://www.website.com/somecode/dummy?page={}".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html.parser')
    # rest of the scraping code
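As an aside, the hard-coded pages list can be generated lazily instead of spelled out; a minimal sketch of the suffix pattern (page_suffixes is a hypothetical helper name, not from the original, and the consumer still needs a stop condition, which the accepted answer below supplies):

from itertools import count

def page_suffixes():
    yield ''  # the first page has no suffix in the question's pattern
    for i in count(2, 2):  # then pe2, pe4, pe6, ... indefinitely
        yield 'pe{}'.format(i)

Iterating with for suffix in page_suffixes() then builds each URL in turn; you break out of the loop once a page stops existing.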
Best answer: You can use a while loop that stops running when a request raises an exception.
Code:
from bs4 import BeautifulSoup
from time import sleep
import requests

i = 0
while True:
    try:
        if i == 0:
            url = "http://www.website.com/somecode/dummy?page=pe"
        else:
            url = "http://www.website.com/somecode/dummy?page=pe{}".format(i)
        r = requests.get(url)
        r.raise_for_status()  # raise on 4xx/5xx, i.e. when the page doesn't exist
        soup = BeautifulSoup(r.content, 'html.parser')
        # print the page url
        print(url)
        # rest of the scraping code
        # don't overload the website
        sleep(2)
        # advance to the next page (pe2, pe4, ...)
        i += 2
    except requests.exceptions.RequestException:
        break
Output:
http://www.website.com/somecode/dummy?page=pe
http://www.website.com/somecode/dummy?page=pe2
http://www.website.com/somecode/dummy?page=pe4
http://www.website.com/somecode/dummy?page=pe6
http://www.website.com/somecode/dummy?page=pe8
...
... and so on, until a request raises an exception.
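Note that a bare except would also swallow network errors unrelated to running out of pages, which is why the loop above breaks only on requests exceptions. If the site responds with a 404 once you go past the last page (an assumption; some sites return 200 with an empty listing instead), an explicit status check makes the stop condition even clearer; a sketch along those lines:

import requests
from bs4 import BeautifulSoup
from time import sleep

i = 0
while True:
    suffix = 'pe' if i == 0 else 'pe{}'.format(i)
    url = "http://www.website.com/somecode/dummy?page={}".format(suffix)
    r = requests.get(url)
    if r.status_code == 404:  # assumed: the site 404s for pages past the end
        break
    soup = BeautifulSoup(r.content, 'html.parser')
    # rest of the scraping code
    sleep(2)  # be polite to the server
    i += 2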