Feature Selection: Python LIME
First, let's look at the source code:
```python
from __future__ import print_function

import lime
import numpy as np
import sklearn
import sklearn.ensemble
import sklearn.feature_extraction.text
import sklearn.metrics
from sklearn.datasets import fetch_20newsgroups

categories = ['alt.atheism', 'soc.religion.christian']
newsgroups_train = fetch_20newsgroups(subset='train', categories=categories)
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)
class_names = ['atheism', 'christian']  # two labels: one Christian, one atheist

# Encode the text with TF-IDF
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(lowercase=False)
train_vectors = vectorizer.fit_transform(newsgroups_train.data)
test_vectors = vectorizer.transform(newsgroups_test.data)

# Train a random forest (RF) model
rf = sklearn.ensemble.RandomForestClassifier(n_estimators=500)
rf.fit(train_vectors, newsgroups_train.target)
# RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
#                        max_depth=None, max_features='auto', max_leaf_nodes=None,
#                        min_samples_leaf=1, min_samples_split=2,
#                        min_weight_fraction_leaf=0.0, n_estimators=500, n_jobs=1,
#                        oob_score=False, random_state=None, verbose=0,
#                        warm_start=False)

# Predict on the test set
pred = rf.predict(test_vectors)
sklearn.metrics.f1_score(newsgroups_test.target, pred, average='binary')
# F1 score: 0.92093023255813955
```

Running this program, we can see that the code above achieves a very high F1 score on the final classification.
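The F1 score tells us the model is accurate, but not why it makes the predictions it does. As a quick sanity check before turning to LIME, one can peek at the forest's global feature importances. This is a minimal sketch, not part of the original tutorial; note that `get_feature_names_out` requires a recent scikit-learn, while older versions use `get_feature_names`:

```python
# A minimal sketch: list the forest's globally most important tokens.
# (get_feature_names_out needs scikit-learn >= 1.0; older versions use get_feature_names)
feature_names = np.array(vectorizer.get_feature_names_out())
top = np.argsort(rf.feature_importances_)[::-1][:10]
for name, score in zip(feature_names[top], rf.feature_importances_[top]):
    print('%-15s %.4f' % (name, score))
```

Global importances only say which tokens matter on average; LIME, in contrast, explains individual predictions.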
Next, we use the LIME explainer to interpret the model's final predictions:
```python
from lime import lime_text
from lime.lime_text import LimeTextExplainer
from sklearn.pipeline import make_pipeline

# Wrap the vectorizer and classifier in one pipeline so LIME can feed it raw text
c = make_pipeline(vectorizer, rf)
print(c.predict_proba([newsgroups_test.data[0]]))
# [[ 0.274  0.726]]

explainer = LimeTextExplainer(class_names=class_names)

# Pick an arbitrary document and extract its 6 most important features
idx = 83
exp = explainer.explain_instance(newsgroups_test.data[idx], c.predict_proba, num_features=6)
print('Document id: %d' % idx)
print('Probability(christian) =', c.predict_proba([newsgroups_test.data[idx]])[0, 1])
print('True class: %s' % class_names[newsgroups_test.target[idx]])
# Document id: 83
# Probability(christian) = 0.414
# True class: atheism

exp.as_list()
# [(u'Posting', -0.15748303818990594),
#  (u'Host', -0.13220892468795911),
#  (u'NNTP', -0.097422972255878093),
#  (u'edu', -0.051080418945152584),
#  (u'have', -0.010616558305370854),
#  (u'There', -0.0099743822272458232)]

# Zero out the 'Posting' and 'Host' features and see how the prediction moves
print('Original prediction:', rf.predict_proba(test_vectors[idx])[0, 1])
tmp = test_vectors[idx].copy()
tmp[0, vectorizer.vocabulary_['Posting']] = 0
tmp[0, vectorizer.vocabulary_['Host']] = 0
print('Prediction removing some features:', rf.predict_proba(tmp)[0, 1])
print('Difference:', rf.predict_proba(tmp)[0, 1] - rf.predict_proba(test_vectors[idx])[0, 1])
# Original prediction: 0.414
# Prediction removing some features: 0.684
# Difference: 0.27
```

These weighted features form a local linear model. Roughly speaking, if we remove the words "Posting" and "Host" from the document, the prediction should move about 0.27 (the sum of those two features' weights) toward the opposite class (Christian). The experiment confirms that it does!
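LIME can also render the explanation graphically. A minimal sketch, assuming matplotlib is installed; `save_to_file` and `as_pyplot_figure` are standard methods on the explanation object:

```python
# A minimal sketch: visualizing the explanation (assumes matplotlib is installed).
exp.save_to_file('explanation.html')  # self-contained HTML page; open it in a browser

fig = exp.as_pyplot_figure()          # bar chart of the six feature weights
fig.savefig('explanation.png')
```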
We only used a random forest as the classifier here, but the LIME explainer works with any classifier we might want to use, as long as it implements predict_proba. A sketch of swapping in a different model follows below.
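For instance, here is a minimal sketch (not from the original article) that swaps in logistic regression; the variable names follow the code above:

```python
# A minimal sketch: any model with predict_proba can be explained the same way.
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression()
lr.fit(train_vectors, newsgroups_train.target)

c_lr = make_pipeline(vectorizer, lr)
exp_lr = explainer.explain_instance(newsgroups_test.data[idx],
                                    c_lr.predict_proba, num_features=6)
print(exp_lr.as_list())
```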
References:
[1]. http://marcotcr.github.io/lime/tutorials/Lime%20-%20basic%20usage%2C%20two%20class%20case.html#Explaining-predictions-using-lime