生活随笔
This article, collected and organized here, shows how to scrape images from Doutula (the doutula.com meme site), and is shared for reference.
A single-threaded scraper for Doutula images:
# -*- coding: utf-8 -*-
import requests
from bs4 import BeautifulSoup
from urllib.request import urlretrieve
from lxml import etree
import os
# Base URL
BASE_URL = 'https://www.doutula.com/photo/list/?page='
# URLs of the pages to crawl
PAGE_URLS = []
headers = {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.75 Safari/537.36'
}
# Download one image into a per-page folder
def get_down_image(url, index):
    filename = url.split('/')[-1]
    os.makedirs('./images/page{}_image/'.format(index), exist_ok=True)  # create the folder if needed
    filename = filename.split('!')[0]  # strip any '!xxx' suffix; [0] is also safe when no '!' is present (the original [-2] raises IndexError then)
    path = os.path.join('images/page{}_image'.format(index), filename)
    urlretrieve(url, filename=path)  # download the image
# Collect the URL of every image on one page
def get_image_urls(url, index):
    response = requests.get(url, headers=headers)
    context = response.text
    html = etree.HTML(context)
    # soup = BeautifulSoup(context, 'lxml')
    image_urls = html.xpath("//div[@class='page-content text-center']//img/@data-original")
    for image_url in image_urls:
        get_down_image(image_url, index)
# Build the URL of every page to crawl
def get_urls_list():
    for x in range(1, 6):  # pages 1-5; the list pages appear to start at page=1 (the original range(5) also requested page=0)
        url = BASE_URL + str(x)
        PAGE_URLS.append(url)
    return PAGE_URLS

def main():
    urls = get_urls_list()
    for index, url in enumerate(urls):
        get_image_urls(url, index)

if __name__ == '__main__':
    main()
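The filename logic inside get_down_image can be factored into a small pure helper, which makes the '!'-suffix handling easy to check on its own (the sample URLs below are hypothetical):

```python
def image_filename(url):
    """Derive a local filename from an image URL.

    Some Doutula image URLs carry a '!suffix' (e.g. '...abc.jpg!dta');
    taking index 0 after splitting on '!' works whether or not the
    suffix is present.
    """
    name = url.split('/')[-1]
    return name.split('!')[0]

# Hypothetical example URLs:
print(image_filename('https://ws2.sinaimg.cn/bmiddle/abc.jpg!dta'))  # abc.jpg
print(image_filename('https://ws2.sinaimg.cn/bmiddle/abc.jpg'))      # abc.jpg
```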
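The XPath query used in get_image_urls can be verified offline against a minimal HTML fragment shaped like Doutula's list page (the fragment and its URLs are made up for illustration):

```python
from lxml import etree

# A made-up fragment mimicking the structure the scraper targets
sample = """
<div class="page-content text-center">
  <img data-original="https://example.com/a.gif">
  <img data-original="https://example.com/b.jpg">
</div>
"""

tree = etree.HTML(sample)
urls = tree.xpath("//div[@class='page-content text-center']//img/@data-original")
print(urls)  # ['https://example.com/a.gif', 'https://example.com/b.jpg']
```

The site lazy-loads images, so the real URL lives in the data-original attribute rather than src; the query returns attribute values directly as strings.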
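Since the post bills this as the single-threaded version, a natural next step is fetching pages concurrently. Below is a hedged sketch using the standard library's thread pool; the names are illustrative, and fetch_page is assumed to behave like get_image_urls in the script above:

```python
from concurrent.futures import ThreadPoolExecutor

BASE_URL = 'https://www.doutula.com/photo/list/?page='

def build_page_urls(first=1, last=5):
    # Same URL scheme as the single-threaded version
    return [BASE_URL + str(x) for x in range(first, last + 1)]

def crawl_all(fetch_page, max_workers=4):
    # fetch_page(url, index) is assumed to download one page's images,
    # like get_image_urls above
    urls = build_page_urls()
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        for index, url in enumerate(urls):
            pool.submit(fetch_page, url, index)
    # leaving the 'with' block waits for all submitted pages to finish
```

Because each page writes into its own page{index}_image folder, worker threads do not collide on filenames.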