Scraping Movie Torrents with a Python Crawler, So You'll Never Be Lonely Again
Preface
The text and images in this article are sourced from the Internet and are for learning and exchange only, without any commercial use; copyright remains with the original author. If there is any issue, please contact us promptly so we can handle it.
Author: imBobby
It's the weekend, so let's write something simple and fun. Back from the gym this afternoon, I wanted to watch a movie, so I headed to a familiar site: btbtt.me. I find its Chinese-language resources fairly complete, while The Pirate Bay is stronger on English-language content. So today, let's build a movie-torrent crawler. First, open the btbtt.me homepage:
[screenshot: the btbtt.me homepage]
The site's intensely knock-off styling is a bit much. Let's look around first. Clicking into the HD movie section, my plan is: enter the HD section, visit every movie link on each listing page, and save the torrent from each movie's detail page to local disk. Start by inspecting the listing page:
[screenshots: the HD movie listing page and its link markup]
The movie detail-page URLs are all stored in `a` tags whose class is `subject_link thread-new` or `subject_link thread-old`. Next, click into a movie's detail page:
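Pulling those links out with BeautifulSoup is a few lines. A minimal sketch on a hypothetical fragment of the listing page (the titles and `href` values below are made up; the tag and class names follow the observation above):

```python
import bs4

# Hypothetical listing-page fragment; classes match the author's observation,
# the titles and hrefs are invented for illustration
sample_html = """
<a class="subject_link thread-new" title="Movie A" href="thread-index-fid-951-tid-111.htm">Movie A</a>
<a class="subject_link thread-old" title="Movie B" href="thread-index-fid-951-tid-222.htm">Movie B</a>
<a class="other_link" href="elsewhere.htm">unrelated link</a>
"""

soup = bs4.BeautifulSoup(sample_html, "html.parser")
# Passing the full class string to class_ matches the exact attribute value,
# so query both variants and merge the results
links = (soup.find_all("a", class_="subject_link thread-new")
         + soup.find_all("a", class_="subject_link thread-old"))
for a in links:
    print(a.get("title"), "->", "http://btbtt.me/" + a.get("href"))
```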
[screenshot: a movie detail page]
The download link sits in an `a` tag whose `rel` attribute is `nofollow`. Click that download link and see what happens:
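Filtering by an attribute value works the same way as filtering by class. A small sketch on a hypothetical detail-page fragment (the file name and `href` are made up):

```python
import bs4

# Hypothetical detail-page fragment; only the attachment link carries
# rel="nofollow", matching the observation above
sample_html = """
<a href="thread-index-fid-951-tid-111.htm">back to the list</a>
<a rel="nofollow" target="_blank" href="attach-dialog-fid-951-aid-333.htm">Movie.A.2020.torrent</a>
"""

soup = bs4.BeautifulSoup(sample_html, "html.parser")
# Keyword arguments other than class_ filter on attribute values directly
result = soup.find_all("a", rel="nofollow")
print(result[-1].text)         # the torrent file name
print(result[-1].get("href"))  # the attachment page URL
```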
[screenshot: the intermediate download dialog]
There's yet another layer, which is a pain: picking out the final download link by tag filtering alone is awkward. But notice one thing: the download link is simply the page URL with `attach` swapped for `download`, which saves a lot of work.
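That swap is a one-line string substitution. The URL below is made up for illustration (and note that in the final script the author actually swaps the `dialog` segment of the link the page provides, but the trick is identical):

```python
# Made-up attachment URL, for illustration only; the real URL shape may differ
attach_url = "http://btbtt.me/attach-dialog-fid-951-aid-333.htm"
# One swapped segment turns the preview link into a direct download link
download_url = attach_url.replace("dialog", "download")
print(download_url)  # http://btbtt.me/attach-download-fid-951-aid-333.htm
```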
With the plan roughed out, time to write the code:
```python
import requests
import bs4
import os
import time

# Proxy settings; this site also requires a proxy to reach
proxies = {
    "http": "http://127.0.0.1:41091",
    "https": "http://127.0.0.1:41091",
}

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/84.0.4147.105 Safari/537.36 Edg/84.0.522.50"
}


def init_movie_list(nums):
    """Decide how many listing pages to crawl; each page holds a few dozen movies.
    :param nums: number of pages to crawl
    :return: list of listing-page URLs
    """
    movie_list = []
    if nums < 1:
        return movie_list
    for num in range(1, nums + 1):
        url = "http://btbtt.me/forum-index-fid-1183-page-" + str(num) + ".htm"
        movie_list.append(url)
    return movie_list


def get_movie_detail_url(url):
    """Collect the movie detail-page links from one listing page.
    :param url: listing-page URL
    :return: list of (movie title, detail-page URL) tuples
    """
    context = requests.get(url=url, headers=headers, proxies=proxies).content
    time.sleep(1)
    bs4_result = bs4.BeautifulSoup(context, "html.parser")
    new_read_details = bs4_result.find_all("a", class_="subject_link thread-new")
    all_details = bs4_result.find_all("a", class_="subject_link thread-old") + new_read_details
    if not all_details:
        return []
    url_list = []
    for item in all_details:
        url_list.append((item.get("title"), "http://btbtt.me/" + item.get("href")))
    return url_list


def get_movie_download_url(url_tuple):
    """Turn a (folder name, detail-page URL) tuple into a download target.
    :param url_tuple: (movie title, detail-page URL)
    :return: (folder name, file name, download URL)
    """
    folder_name = replace_folder_name(url_tuple[0])
    url = url_tuple[1]
    resp = requests.get(url=url, headers=headers, proxies=proxies)
    time.sleep(1)
    bs4_result = bs4.BeautifulSoup(resp.content, "html.parser")
    # ajaxdialog=False keeps only tags that lack an ajaxdialog attribute
    result = bs4_result.find_all("a", rel="nofollow", target="_blank", ajaxdialog=False)
    if not result:
        return ('', '', '')
    file_name = replace_folder_name(result[-1].text)
    download_url = "http://btbtt.me/" + result[-1].get("href").replace("dialog", "download")
    return (folder_name, file_name, download_url)


def replace_folder_name(folder_name):
    """Normalize a folder name to satisfy Windows file-naming rules.
    :param folder_name: raw name
    :return: sanitized name
    """
    illegal_str = ["?", ",", "/", "\\", "*", "<", ">", "|", " ", "\n", ":"]
    for item in illegal_str:
        folder_name = folder_name.replace(item, "")
    return folder_name


def download_file(input_tuple):
    """Download the torrent file.
    :param input_tuple: (folder name, file name, download URL)
    :return: None
    """
    folder_name = input_tuple[0]
    if not folder_name:
        folder_name = str(int(time.time()))
    file_name = input_tuple[1]
    if not file_name:
        file_name = str(int(time.time())) + ".zip"
    download_url = input_tuple[2]
    if not download_url:
        return
    resp = requests.get(url=download_url, headers=headers, proxies=proxies)
    time.sleep(1)
    # D:/torrent is my storage path; change it as needed
    if not os.path.exists('D:/torrent/' + folder_name):
        os.mkdir('D:/torrent/' + folder_name)
    with open('D:/torrent/' + folder_name + "/" + file_name, 'wb') as f:
        f.write(resp.content)


if __name__ == '__main__':
    url = init_movie_list(5)
    url_list = []
    for item in url:
        url_list = get_movie_detail_url(item) + url_list
    for i in url_list:
        download_tuple = get_movie_download_url(i)
        download_file(download_tuple)
```
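One fragility in the script above: a single dropped connection kills the whole run. A sketch, not part of the original code, of a `requests.Session` with automatic retries; the retry count, backoff factor, and status codes here are my own choices:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_session(total_retries=3, backoff=1.0):
    # A hardened session: retry transient failures with exponential backoff
    session = requests.Session()
    retry = Retry(total=total_retries, backoff_factor=backoff,
                  status_forcelist=[429, 500, 502, 503, 504])
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session

session = make_session()
# session.get(url, headers=headers, proxies=proxies, timeout=10) would then
# replace the bare requests.get calls in the script above
```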