Web Crawler Exercises
1. Use requests.get(url) to fetch the HTML of a web page

import requests
newsurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(newsurl)  # returns a Response object
res.encoding = 'utf-8'
print(res.text)
2. Use BeautifulSoup's HTML parser to build a parse tree

import requests
from bs4 import BeautifulSoup
newsurl = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(newsurl)  # returns a Response object
res.encoding = 'utf-8'
print(BeautifulSoup(res.text, 'html.parser'))

3. Find HTML elements by tag name
soup.p           # by tag name; returns the first matching tag
soup.head
soup.p.name      # string: the tag's name
soup.p.attrs     # dict: all attributes of the tag
soup.p.contents  # list: all child nodes of the tag
soup.p.text      # string: the tag's text content
soup.p.string
soup.select('li')
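A minimal, self-contained sketch of these accessors, run against a small made-up HTML string (the markup below is invented purely for illustration):

from bs4 import BeautifulSoup

# invented markup, only to demonstrate the accessors listed above
html = '<html><head><title>Demo</title></head><body>' \
       '<p id="p1Node" class="intro">Hello <b>world</b></p>' \
       '<ul><li>Item 1</li><li>Item 2</li></ul></body></html>'
soup = BeautifulSoup(html, 'html.parser')

print(soup.p)             # first <p> tag in the document
print(soup.head)          # the <head> tag
print(soup.p.name)        # 'p'
print(soup.p.attrs)       # {'id': 'p1Node', 'class': ['intro']}
print(soup.p.contents)    # ['Hello ', <b>world</b>]
print(soup.p.text)        # 'Hello world'
print(soup.select('li'))  # list of all <li> tags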
4. Get elements with a specific CSS id or class
soup.select('#p1Node')           # by id
soup.select('.news-list-title')  # by class
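As a rough sketch, the same class selector can be applied to the campus news page fetched earlier; note that select() always returns a list, which is empty when nothing matches:

import requests
from bs4 import BeautifulSoup

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

# select() returns a list of matching tags (empty if none match)
for tag in soup.select('.news-list-title'):
    print(tag.text)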
5. Exercises:
Extract the text of the h1 tag
import requests
from bs4 import BeautifulSoup
url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')
# get the text of the h4 tag
print(soup.select('h4')[0].text)
Extract the links from the a tags
import requests
from bs4 import BeautifulSoup
url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')
# get the href of every a tag
for i in soup.select('a'):
    print(i.get('href'))
Extract the full content of every li tag
import requests
from bs4 import BeautifulSoup
url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')
# print every li tag in full
for i in soup.select('li'):
    print(i)
Extract the title, link, publication time, and source of one news item
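No solution is given for this one; the sketch below is one possible approach, assuming each news item is an li that contains an a link, the .news-list-title class used above, and a .news-list-info block holding the publication time and source (the .news-list-info class name is an assumption and may differ on the actual page):

import requests
from bs4 import BeautifulSoup

url = 'http://news.gzcc.cn/html/xiaoyuanxinwen/'
res = requests.get(url)
res.encoding = 'utf-8'
soup = BeautifulSoup(res.text, 'html.parser')

# walk the list items and stop at the first one that looks like a news entry
for li in soup.select('li'):
    title_tags = li.select('.news-list-title')
    info_tags = li.select('.news-list-info')   # assumed class for time and source
    if title_tags and li.select('a'):
        print(title_tags[0].text)              # title
        print(li.select('a')[0].get('href'))   # link
        if info_tags:
            print(info_tags[0].text)           # publication time and source
        break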
Reposted from: https://www.cnblogs.com/weixingna/p/8670551.html