scrapy-splash: scraping dynamic data, example 3
一、Introduction
This example uses scrapy-splash to scrape news items from the Toutiao site (今日头条, toutiao.com) for a set of given keywords.
Given keywords: 打通; 融合; 電視
The fields extracted are:
1、News title
2、News link
3、News date
4、News source
二、Site information
(Screenshots of the Toutiao search results page are omitted here.)
三、Data extraction
Working from the page structure above, extraction proceeds step by step (a quick way to sanity-check each selector interactively is shown after this list).
1、First, grab the list of articles on the search results page.
Extraction code: sels = site.xpath('//div[@class="articleCard"]')
2、Grab the title.
The title is awkward to extract from the list page, so we take a small detour: follow each article's link and extract the title from the detail page instead.
Extraction code (run against the detail page in parse_item): titles = site.xpath('//h1[@class="article-title"]/text()')
3、Grab the link.
Extraction code: url = 'http://www.toutiao.com' + str(sel.xpath('.//a[@class="link title"]/@href')[0].extract())
4、Grab the date.
Extraction code: dates = sel.xpath('.//span[@class="lbtn"]/text()')
5、Grab the source.
Extraction code: source = str(sel.xpath('.//a[@class="lbtn source J_source"]/text()')[0].extract())
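Before wiring these selectors into the spider, it helps to verify them against a Splash-rendered copy of the page. One way, as a sketch assuming a local Splash instance on port 8050, is to open the search page through Splash's render.html endpoint from scrapy shell (the keyword in the URL, percent-encoded 电视, is just an illustration):

scrapy shell 'http://localhost:8050/render.html?url=http%3A%2F%2Fwww.toutiao.com%2Fsearch%2F%3Fkeyword%3D%E7%94%B5%E8%A7%86&wait=2'
>>> sels = response.xpath('//div[@class="articleCard"]')            # article cards
>>> sels[0].xpath('.//span[@class="lbtn"]/text()').extract()        # date text of the first card

If the first expression returns an empty list, the page was not rendered (or the class names have changed), so the spider below would find nothing either.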
四、Complete code
1、toutiaoSpider
# -*- coding: utf-8 -*-
import os
import re
import sys
import time

from scrapy.selector import Selector
from scrapy.spiders import Spider
from scrapy_splash import SplashRequest

import IniFile  # project-local INI helper
from splash_test.items import SplashTestItem

reload(sys)                        # Python 2 only
sys.setdefaultencoding('utf-8')


class toutiaoSpider(Spider):
    name = 'toutiao'

    # read the keyword list and search URL from setting.conf (see section 4)
    configfile = os.path.join(os.getcwd(), 'splash_test', 'spiders', 'setting.conf')
    cf = IniFile.ConfigFile(configfile)
    information_keywords = cf.GetValue("section", "information_keywords")
    information_wordlist = information_keywords.split(';')
    websearchurl = cf.GetValue("toutiao", "websearchurl")
    start_urls = []
    for word in information_wordlist:
        start_urls.append(websearchurl + word)

    # requests must be wrapped as SplashRequest so Splash renders the page first
    def start_requests(self):
        for url in self.start_urls:
            index = url.rfind('=')
            yield SplashRequest(url, self.parse, args={'wait': '2'},
                                meta={'keyword': url[index + 1:]})

    def date_isValid(self, strDateText):
        '''
        Decide whether a date string refers to today.
        :param strDateText: one of four formats: '2小时前', '2天前', '昨天', '2017.2.12'
        :return: (True, date) if the item counts as today's news, else (False, '')
        '''
        currentDate = time.strftime('%Y-%m-%d')
        if strDateText.find('分钟前') > 0 or strDateText.find('刚刚') > -1:
            return True, currentDate
        elif strDateText.find('小时前') > 0:
            datePattern = re.compile(r'\d{1,2}')
            ch = int(time.strftime('%H'))  # current hour
            strDate = re.findall(datePattern, strDateText)
            if len(strDate) == 1:
                # only "N小时前" with N <= current hour still falls on today
                if int(strDate[0]) <= ch:
                    return True, currentDate
        return False, ''

    def parse(self, response):
        site = Selector(response)
        keyword = response.meta['keyword']
        sels = site.xpath('//div[@class="articleCard"]')
        for sel in sels:
            dates = sel.xpath('.//span[@class="lbtn"]/text()')
            if len(dates) > 0:
                flag, date = self.date_isValid(dates[0].extract())
                if flag:
                    url = 'http://www.toutiao.com' + str(sel.xpath('.//a[@class="link title"]/@href')[0].extract())
                    source = str(sel.xpath('.//a[@class="lbtn source J_source"]/text()')[0].extract())
                    # follow the article link; the title is extracted on the detail page
                    yield SplashRequest(url, self.parse_item, args={'wait': '1'},
                                        meta={'date': date, 'url': url,
                                              'keyword': keyword, 'source': source})

    def parse_item(self, response):
        site = Selector(response)
        it = SplashTestItem()
        titles = site.xpath('//h1[@class="article-title"]/text()')
        if len(titles) > 0:
            keyword = response.meta['keyword']
            strtiltle = str(titles[0].extract())
            # keep only articles whose title actually contains the keyword
            if strtiltle.find(keyword) > -1:
                it['title'] = strtiltle
                it['url'] = response.meta['url']
                it['date'] = response.meta['date']
                it['keyword'] = keyword
                it['source'] = response.meta['source']
                return it
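The article does not show the project's Scrapy settings, but the spider above only goes through Splash if settings.py carries the standard scrapy-splash wiring. The values below come from the scrapy-splash README; SPLASH_URL assumes Splash runs locally on port 8050, and the ITEM_PIPELINES path assumes the project is named splash_test as in the imports above:

# settings.py — minimal scrapy-splash wiring (sketch)
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'

# register the pipeline from section 3
ITEM_PIPELINES = {
    'splash_test.pipelines.SplashTestPipeline': 300,
}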
2、SplashTestItem
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class SplashTestItem(scrapy.Item):
    # news title
    title = scrapy.Field()
    # publication date
    date = scrapy.Field()
    # article link
    url = scrapy.Field()
    # search keyword that matched
    keyword = scrapy.Field()
    # source site
    source = scrapy.Field()
3、SplashTestPipeline
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import codecs
import json


class SplashTestPipeline(object):
    def __init__(self):
        # one JSON object per line, written as UTF-8 text
        self.file = codecs.open('spider.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def spider_closed(self, spider):
        self.file.close()
4、setting.conf
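The original post leaves the body of setting.conf out. From the two keys the spider reads, cf.GetValue("section", "information_keywords") and cf.GetValue("toutiao", "websearchurl"), a plausible reconstruction is sketched below; the exact websearchurl value is an assumption (it must end with '=' because start_requests recovers the keyword from everything after the last '='):

; setting.conf — reconstructed sketch; the websearchurl value is an assumption
[section]
information_keywords = 打通;融合;電視

[toutiao]
websearchurl = http://www.toutiao.com/search/?keyword=

With Splash running (for example via docker run -p 8050:8050 scrapinghub/splash), the spider is then started with scrapy crawl toutiao.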