[Original] shadowebdict Dev Diary: A Simple English-Chinese Dictionary for Linux (Part 3)
Full series table of contents:
- [Original] shadowebdict Dev Diary: A Simple English-Chinese Dictionary for Linux (Part 1)
- [Original] shadowebdict Dev Diary: A Simple English-Chinese Dictionary for Linux (Part 2)
- [Original] shadowebdict Dev Diary: A Simple English-Chinese Dictionary for Linux (Part 3)
- [Original] shadowebdict Dev Diary: A Simple English-Chinese Dictionary for Linux (Part 4)
- The project's GitHub repository
Continuing from the previous post.

This time we develop the response module. Its job: when the local dictionary database does not contain the word the user is looking up, fetch the matching entry from the web, return it as the result, and save it into the local database.
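That local-first, cache-on-miss flow can be sketched as follows (a minimal illustration, not the project's real storage layer: the dict-backed "database" and the `fetch_from_web` callable are placeholder assumptions):

```python
def lookup(word, local_db, fetch_from_web):
    """Local-first lookup: fall back to the web and cache the result."""
    entry = local_db.get(word)           # 1. try the local store first
    if entry is None:
        entry = fetch_from_web(word)     # 2. otherwise query the online source
        if entry is not None:
            local_db[word] = entry       # 3. cache it for the next lookup
    return entry
```

A second lookup of the same word is then served entirely from the local store, which is exactly why the web source's rate limits matter less once the cache warms up.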
The online source I chose is iciba, for a simple reason: it needs no complicated cookie handling, and the content for a looked-up word is essentially all embedded in the returned HTML source.

Note that iciba will ban you if you send requests too frequently, so if you want to use this code to crawl iciba's dictionary, add a sleep yourself. (I believe my code already has one; just adjust it as needed.)
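A hedged sketch of what that throttling might look like (the retry count, delay value, and `fetch` callable are illustrative assumptions; the original code simply retries in a bare loop):

```python
import time

def fetch_with_retry(fetch, word, retries=3, delay=1.0, sleep=time.sleep):
    """Call fetch(word), retrying on failure and pausing between attempts
    so the remote site is not hammered with rapid-fire requests."""
    last_err = None
    for attempt in range(retries):
        try:
            return fetch(word)
        except Exception as err:   # the original code also swallows all errors
            last_err = err
            sleep(delay)           # be polite: wait before the next attempt
    raise last_err                 # give up after the final attempt
```

Injecting `sleep` as a parameter keeps the function testable without actually waiting.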
The module's logic:
0. Expose an interface for other modules to call, taking the word to look up as input.
1. Build the URL request and obtain the returned data.
2. Parse the returned data according to its format and extract the content of the matching entry.
3. Return the entry's content to the calling module in the agreed-upon format.
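Steps 1 and 2 can be exercised offline against a canned fragment of markup (the HTML snippet below is an invented stand-in for a real iciba response; the regex mirrors the `group_pos` pattern used in the module, and `build_url` mirrors what `process_input` does to the query):

```python
import re

# Step 1: build the request URL (spaces become underscores, as in process_input).
def build_url(word):
    return 'http://www.iciba.com/' + word.strip().replace(' ', '_')

# Step 2: pull the part-of-speech block out of the returned HTML.
group_pos = re.compile(r'<div class="group_pos">(.*?)</div>', re.DOTALL)

sample_html = '<html><div class="group_pos">n. greeting</div></html>'
match = group_pos.search(sample_html)
extracted = match.group(1) if match else None
```

The non-greedy `(.*?)` with `re.DOTALL` is what lets these patterns stop at the first closing tag even when the block spans multiple lines.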
The details are in the source code:
```python
# -*- coding:utf-8 -*-
# Python 2 code: uses urllib2 and print statements.
__author__ = 'wmydx'

import urllib
import re
import urllib2
import time


class GetResponse:

    def __init__(self):
        self.url = 'http://www.iciba.com/'
        self.isEng = re.compile(r'(([a-zA-Z]*)(\s*))*$')
        # Blocks of interest in iciba's HTML: part-of-speech definitions,
        # web-sourced paraphrases, and example sentences.
        self.group_pos = re.compile(r'<div class="group_pos">(.*?)</div>', re.DOTALL)
        self.net_paraphrase = re.compile(r'<div class="net_paraphrase">(.*?)</div>', re.DOTALL)
        self.sentence = re.compile(r'<dl class="vDef_list">(.*?)</dl>', re.DOTALL)

    def process_input(self, word):
        # iciba URLs use underscores in place of spaces.
        word = word.strip()
        word = word.replace(' ', '_')
        return word

    def get_data_from_web(self, word):
        headers = {'Referer': 'http://www.iciba.com/',
                   'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) '
                                 'AppleWebKit/537.36 (KHTML, like Gecko) '
                                 'Chrome/28.0.1500.72 Safari/537.36'}
        request = urllib2.Request(self.url + word, headers=headers)
        while True:  # retry until the request succeeds
            try:
                f = urllib2.urlopen(request).read()
                break
            except:
                pass
        return f

    def get_eng_from_chinese(self, word):
        # For a Chinese query, collect the English candidate words first.
        word = self.process_input(word)
        word = urllib.quote(word)
        data = self.get_data_from_web(word)
        label_lst = re.compile(r'<span class="label_list">(.*?)</span>', re.DOTALL)
        label_itm = re.compile(r'<label>(?P<item>.*?)</a>(.*?)</label>', re.DOTALL)
        first = label_lst.search(data)
        data = data[first.start():first.end()]
        start_itm = 0
        res = []
        while 1:
            second = label_itm.search(data, start_itm)
            if not second:
                break
            word = self.get_sentence_from_dt(data[second.start('item'):second.end('item')])
            res.append(word)
            start_itm = second.end()
        return res

    def get_dict_data(self, word):
        # Entry point for other modules: returns a list of entry dicts,
        # or -1 for a word iciba does not know.
        englst = []
        res = []
        match = self.isEng.match(word)
        if not match:
            englst = self.get_eng_from_chinese(word)
        else:
            englst.append(word)
        for item in englst:
            word = self.process_input(item)
            data = self.get_data_from_web(word)
            if data.find('对不起,没有找到') != -1:  # iciba's "not found" message
                res.append(-1)
            else:
                tmp_dict = self.analysis_eng_data(data)
                tmp_dict['word'] = word
                tmp_dict['times'] = 1
                res.append(tmp_dict)
        return res

    def analysis_eng_data(self, data):
        res = {}
        explain = self.group_pos.search(data)
        if explain:
            explain = data[explain.start():explain.end()]
            res['explain'] = self.generate_explain(explain)
        else:
            res['explain'] = -1
        net_explain = self.net_paraphrase.search(data)
        if net_explain:
            net_explain = data[net_explain.start():net_explain.end()]
            res['net_explain'] = self.generate_net_explain(net_explain)
        else:
            res['net_explain'] = -1
        sentence_start = 0
        sentence_end = len(data)
        sentence_lst = []
        while sentence_start < sentence_end:
            sentence = self.sentence.search(data, sentence_start)
            if sentence:
                sentence_str = data[sentence.start():sentence.end()]
            else:
                break
            sentence_lst.append(self.generate_sentence(sentence_str))
            sentence_start = sentence.end()
        res['sentence'] = "\n\n".join(sentence_lst)
        return res

    def generate_explain(self, target):
        # Pair each part-of-speech marker with its list of meanings.
        start_word = 0
        end_word = len(target)
        meta_word = re.compile(r'<strong class="fl">(?P<meta_word>.*?)</strong>', re.DOTALL)
        label_lst = re.compile(r'<span class="label_list">(.*?)</span>', re.DOTALL)
        label_itm = re.compile(r'<label>(?P<item>.*?)</label>', re.DOTALL)
        res = ''
        while start_word < end_word:
            first = meta_word.search(target, start_word)
            if first:
                word_type = target[first.start('meta_word'):first.end('meta_word')]
            else:
                break
            res += word_type + ' '
            second = label_lst.search(target, first.end('meta_word'))
            start_label = second.start()
            end_label = second.end()
            while start_label < end_label:
                third = label_itm.search(target, start_label)
                if third:
                    res += target[third.start('item'):third.end('item')]
                    start_label = third.end()
                else:
                    break
            res += '\n'
            start_word = end_label
        return res

    def generate_net_explain(self, target):
        start_itm = 0
        end_itm = len(target)
        li_item = re.compile(r'<li>(?P<item>.*?)</li>', re.DOTALL)
        res = '网络释义: '
        while 1:
            first = li_item.search(target, start_itm)
            if first:
                res += target[first.start('item'):first.end('item')]
            else:
                break
            start_itm = first.end()
        return res

    def generate_sentence(self, target):
        # One example sentence: English in <dt>, Chinese in <dd>.
        res = ''
        english = re.compile(r'<dt>(?P<eng>.*?)</dt>', re.DOTALL)
        chinese = re.compile(r'<dd>(?P<chn>.*?)</dd>', re.DOTALL)
        first = english.search(target)
        second = chinese.search(target)
        res += self.get_sentence_from_dt(target[first.start('eng'):first.end('eng')]) + '\n'
        res += target[second.start('chn'):second.end('chn')]
        return res

    def get_sentence_from_dt(self, target):
        # Strip HTML tags character by character, keeping only the text.
        res = ''
        length = len(target)
        index = 0
        while index < length:
            if target[index] == '<':
                while target[index] != '>':
                    index += 1
            else:
                res += target[index]
            index += 1
        return res


if __name__ == '__main__':
    p = GetResponse()
    test = ['hello', 'computer', 'nothing', 'bad guy', 'someday']
    for item in test:
        res = p.get_dict_data(item)
        for key in res:
            for (k, v) in key.items():
                print "dict[%s]=" % k, v
            print
        time.sleep(3)
```
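For reference, the character-scanning tag stripper in `get_sentence_from_dt` is worth pulling out on its own. Here is the same idea rendered in Python 3 (the original above targets Python 2), with a bounds check added on the inner scan so an unclosed tag cannot run off the end of the string:

```python
def strip_tags(target):
    """Drop every <...> tag and keep the text between tags,
    scanning character by character as get_sentence_from_dt does."""
    res = ''
    index = 0
    while index < len(target):
        if target[index] == '<':
            # Skip forward to the matching '>' (or the end of the string).
            while index < len(target) and target[index] != '>':
                index += 1
        else:
            res += target[index]
        index += 1
    return res
```

This is enough for iciba's simple markup, though a real HTML parser would be the robust choice for anything messier.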
Reposted from: https://www.cnblogs.com/shadowmydx/p/4335901.html