Using XPath (lxml) to download every article in the history-books category of shicimingju.com
# Required libraries
from lxml import etree
import requests

# Request headers
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.181 Safari/537.36'}

# Directory where the text files are saved
pathname = r'E:\爬蟲\詩詞名句網\\'
# Fetch the list of books
def get_book(url):
    # headers must be passed as a keyword argument, or requests treats it as query params
    response = requests.get(url, headers=headers)
    etrees = etree.HTML(response.text)
    url_infos = etrees.xpath('//div[@class="bookmark-list"]/ul/li')
    for i in url_infos:
        url_info = i.xpath('./h2/a/@href')
        book_name = i.xpath('./h2/a/text()')[0]
        print('Start downloading ' + book_name)
        # print('http://www.shicimingju.com' + url_info[0])
        get_index('http://www.shicimingju.com' + url_info[0])
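Before running the full crawler, the XPath selectors above can be checked offline against a small, hypothetical HTML snippet that mimics the book-list page's structure (the markup below is an assumption for illustration, not the site's actual HTML):

```python
from lxml import etree

# Hypothetical sample mimicking the book-list markup the crawler targets
sample = '''
<div class="bookmark-list">
  <ul>
    <li><h2><a href="/book/sanguozhi.html">三国志</a></h2></li>
    <li><h2><a href="/book/shiji.html">史记</a></h2></li>
  </ul>
</div>
'''
tree = etree.HTML(sample)
items = tree.xpath('//div[@class="bookmark-list"]/ul/li')
for li in items:
    href = li.xpath('./h2/a/@href')[0]   # relative link to the book
    name = li.xpath('./h2/a/text()')[0]  # book title
    print(name, 'http://www.shicimingju.com' + href)
```

This kind of dry run makes selector mistakes visible immediately, without sending a single request to the site.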
# Fetch a book's table of contents
def get_index(url):
    response = requests.get(url, headers=headers)
    etrees = etree.HTML(response.text)
    url_infos = etrees.xpath('//div[@class="book-mulu"]/ul/li')
    for i in url_infos:
        url_info = i.xpath('./a/@href')
        # print('http://www.shicimingju.com' + url_info[0])
        get_content('http://www.shicimingju.com' + url_info[0])
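Since this loop fires one request per chapter, it is polite to pause between fetches. A minimal sketch (the wrapper name and the 0.5-second delay are assumptions; tune to taste) that could replace the bare `requests.get` calls:

```python
import time
import requests

# Hypothetical wrapper: sleep briefly before each fetch to avoid
# hammering the server with back-to-back chapter requests.
def polite_get(url, headers=None, delay=0.5):
    time.sleep(delay)
    return requests.get(url, headers=headers)
```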
# Fetch chapter content and append it to a .txt file
def get_content(url):
    response = requests.get(url, headers=headers)
    etrees = etree.HTML(response.text)
    title = etrees.xpath('//div[@class="www-main-container www-shadow-card "]/h1/text()')[0]
    content = ''.join(etrees.xpath('//div[@class="chapter_content"]/p/text()'))
    book_name = etrees.xpath('//div[@class="nav-top"]/a[3]/text()')[0]
    with open(pathname + book_name + '.txt', 'a+', encoding='utf-8') as f:
        f.write(title + '\n\n' + content + '\n\n\n')
    print(title + ' ... downloaded')
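Two practical failure points here: `open` fails if the save directory does not exist yet, and a scraped book title may contain characters Windows forbids in file names. A hedged helper sketch (the function name and the replacement character are assumptions, not part of the original script):

```python
import os
import re

# Hypothetical helper: create the save directory on first use and
# replace characters that are illegal in Windows file names.
def safe_write(dirname, book_name, text):
    os.makedirs(dirname, exist_ok=True)                 # no-op if it already exists
    clean = re.sub(r'[\\/:*?"<>|]', '_', book_name)     # sanitize the file name
    path = os.path.join(dirname, clean + '.txt')
    with open(path, 'a+', encoding='utf-8') as f:
        f.write(text)
    return path
```

Inside `get_content`, the `with open(...)` block could then be replaced by a single `safe_write(pathname, book_name, title + '\n\n' + content + '\n\n\n')` call.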
# Program entry point
if __name__ == '__main__':
    url = 'http://www.shicimingju.com/book/'
    get_book(url)
Watch the download progress in the console, then open the target folder to verify the files were saved.