Comprehensive exercise: word frequency statistics
1. English word frequency statistics
Download the lyrics of an English song, or an English article.
article = '''An empty street An empty house A hole inside my heart I'm all alone The rooms are getting smaller I wonder how I wonder why I wonder where they are The days we had The songs we sang together Oh yeah And oh my love I'm holding on forever Reaching for a love that seems so far So i say a little prayer And hope my dreams will take me there Where the skies are blue to see you once again, my love Over seas and coast to coast To find a place i love the most Where the fields are green to see you once again, my love I try to read I go to work I'm laughing with my friends But i can't stop to keep myself from thinking Oh no I wonder how I wonder why I wonder where they are The days we had The songs we sang together Oh yeah And oh my love I'm holding on forever Reaching for a love that seems so far Mark: To hold you in my arms To promise you my love To tell you from the heart You're all i'm thinking of I'm reaching for a love that seems so far So i say a little prayer And hope my dreams will take me there Where the skies are blue to see you once again, my love Over seas and coast to coast To find a place i love the most Where the fields are green to see you once again,my love say a little prayer dreams will take me there Where the skies are blue to see you once again '''
Replace all separator characters such as , . ? ! ' : with spaces.
sep = ''':.,?!'''
for i in sep:
    article = article.replace(i, ' ')
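As an alternative to replacing separators one at a time, a regular expression can strip every run of non-word characters in a single pass. A minimal sketch (the sample text and variable names are illustrative):

```python
import re

text = "An empty street, an empty house. A hole inside my heart!"
# Replace every run of characters that are not letters, digits,
# underscores or apostrophes with a single space
cleaned = re.sub(r"[^\w']+", " ", text)
print(cleaned)
```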
Convert all uppercase letters to lowercase.
article = article.lower()
Generate the word list.
article_list = article.split()
print(article_list)
Generate the word frequency counts.
# Method ①: count by iterating over the set of unique words
# article_dict = {}
# article_set = set(article_list) - exclude  # drop duplicates and excluded words
# for w in article_set:
#     article_dict[w] = article_list.count(w)
# # iterate over the dict
# for w in article_dict:
#     print(w, article_dict[w])

# Method ②: iterate over the list
article_dict = {}
for w in article_list:
    article_dict[w] = article_dict.get(w, 0) + 1
# Remove the unwanted words (exclude is defined in a later step;
# pop with a default avoids a KeyError for words that are absent)
for w in exclude:
    article_dict.pop(w, None)
for w in article_dict:
    print(w, article_dict[w])
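For comparison, the standard library's `collections.Counter` builds the same word-to-count mapping in one call. A minimal sketch with a short hard-coded word list standing in for the full lyrics:

```python
from collections import Counter

# Stand-in for the word list produced by article.split()
article_list = "the days we had the songs we sang together".split()
article_dict = Counter(article_list)  # maps each word to its count
print(article_dict["we"])
```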
Sort.
dictList = list(article_dict.items())
dictList.sort(key=lambda x: x[1], reverse=True)
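Two equivalent ways to get the same descending order: `sorted()` returns a new list without mutating anything, and if the counts live in a `Counter`, `most_common` combines counting and sorting in one step. A sketch with illustrative counts:

```python
from collections import Counter

article_dict = {'love': 10, 'the': 7, 'i': 9}

# sorted() builds a new list and leaves article_dict untouched
dictList = sorted(article_dict.items(), key=lambda x: x[1], reverse=True)
print(dictList)

# Counter.most_common sorts by count in one step
print(Counter(article_dict).most_common(2))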
Exclude grammatical words: pronouns, articles, conjunctions.
exclude = {'the', 'to', 'is', 'and'}
for w in exclude:
    article_dict.pop(w, None)  # pop avoids a KeyError if the word is absent
Output the 20 most frequent words (TOP 20).
for i in range(20):
    print(dictList[i])
Save the text to be analysed as a UTF-8 encoded file, and obtain the content for frequency analysis by reading the file.
file = open("test.txt", "r", encoding='utf-8')
article = file.read()
file.close()
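A `with` block closes the file automatically, even if an exception occurs while reading, and is the idiomatic way to do this. A sketch that first writes a sample file into the system temporary directory (the file name and contents are illustrative):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "test.txt")

# Create a sample file to read back
with open(path, "w", encoding="utf-8") as f:
    f.write("hello word frequency")

# The file is closed automatically when the with block ends
with open(path, "r", encoding="utf-8") as f:
    article = f.read()
print(article)
```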
2. Chinese word frequency statistics
Download a long Chinese article.
Read the text to be analysed from the file.
news = open('gzccnews.txt','r',encoding = 'utf-8')
Install and use jieba for Chinese word segmentation.
pip install jieba
import jieba
jieba.lcut(news.read())  # lcut expects a string, not a file object, and already returns a list
Generate the word frequency counts.
Sort.
Exclude grammatical words: pronouns, articles, conjunctions.
Output the 20 most frequent words (TOP 20), or write the result to a file.
import jieba

# Open the file
file = open("gzccnews.txt", 'r', encoding="utf-8")
notes = file.read()
file.close()

# Replace punctuation with spaces
sep = ''':。,?!;∶ ...“”'''
for i in sep:
    notes = notes.replace(i, ' ')

notes_list = list(jieba.cut(notes))

# Words to exclude
exclude = [' ', '\n', '我', '你', '邊', '上', '說', '了', '的', '那', '些', '什', '么', '話', '呢']

# Method ②: iterate over the list
notes_dict = {}
for w in notes_list:
    notes_dict[w] = notes_dict.get(w, 0) + 1

# Remove the unwanted words; pop with a default avoids a KeyError for absent words
for w in exclude:
    notes_dict.pop(w, None)
for w in notes_dict:
    print(w, notes_dict[w])

# Sort in descending order of frequency
dictList = list(notes_dict.items())
dictList.sort(key=lambda x: x[1], reverse=True)
print(dictList)

# Output the TOP 20
for i in range(20):
    print(dictList[i])

# Write the result to a file (specify the encoding so Chinese words are written safely)
outfile = open("top20.txt", "a", encoding="utf-8")
for i in range(20):
    outfile.write(dictList[i][0] + " " + str(dictList[i][1]) + "\n")
outfile.close()
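Once jieba has produced the token list, the count/exclude/top-N pipeline above collapses to a few lines with `collections.Counter`. In this sketch the token list is hard-coded to stand in for `jieba.lcut` output, so it runs without jieba installed:

```python
from collections import Counter

# Stand-in for notes_list = jieba.lcut(notes)
notes_list = ['我', '愛', '自然', '語言', '處理', '我', '愛', '編程', '的', '樂趣']
exclude = {'我', '的'}

# Count only the words that are not excluded
counts = Counter(w for w in notes_list if w not in exclude)
for word, freq in counts.most_common(3):
    print(word, freq)
```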
Publish the code and screenshots of the run results on a blog.
Reposted from: https://www.cnblogs.com/qq412158152/p/8660824.html