【Python-ML】Unsupervised Linear Dimensionality Reduction with PCA
A step-by-step walkthrough of principal component analysis (PCA), an unsupervised linear feature-extraction method, on the UCI Wine dataset: standardize the data, build the covariance matrix, eigendecompose it, project onto the top-k principal components, and train a logistic regression classifier on the reduced features.
# -*- coding: utf-8 -*-
'''
Created on 2018-01-18
@author: Jason.F
@summary: Feature extraction with PCA (unsupervised, linear)
'''
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split #formerly sklearn.cross_validation, which was removed from scikit-learn
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
import matplotlib.pyplot as plt
#Step 1: load the data and standardize the original d-dimensional dataset
df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data',header=None)
df_wine.columns=['Class label','Alcohol','Malic acid','Ash','Alcalinity of ash','Magnesium','Total phenols','Flavanoids','Nonflavanoid phenols','Proanthocyanins','Color intensity','Hue','OD280/OD315 of diluted wines','Proline']
print ('class labels:',np.unique(df_wine['Class label']))
#print (df_wine.head(5))
#Split into training and test sets
X,y=df_wine.iloc[:,1:].values,df_wine.iloc[:,0].values
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.3,random_state=0)
#Feature scaling: standardization
stdsc=StandardScaler()
X_train_std=stdsc.fit_transform(X_train)
X_test_std=stdsc.transform(X_test) #reuse the training-set mean/std; refitting on the test set would leak test statistics
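#Sanity check (a small addition, not in the original post): after StandardScaler,
#each training feature should have zero mean and unit variance (z=(x-mean)/std).
assert np.allclose(X_train_std.mean(axis=0),0.0)
assert np.allclose(X_train_std.std(axis=0),1.0)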
#Step 2: construct the covariance matrix of the samples
cov_mat=np.cov(X_train_std.T) #d=13 features, giving a 13x13 covariance matrix
eigen_vals,eigen_vecs=np.linalg.eig(cov_mat) #eigenvalues and eigenvectors of the covariance matrix
print ('\nEigenvalues \n %s'%eigen_vals) #13 eigenvalues
print (eigen_vecs.shape) #13x13 matrix of eigenvectors, one per column
#Compute each eigenvalue's share of the total to examine its explained variance ratio; the goal is to find the components that capture the most variance
tot=sum(eigen_vals)
var_exp=[(i/tot) for i in sorted(eigen_vals,reverse=True)]
cum_var_exp=np.cumsum(var_exp)
plt.bar(range(1,14),var_exp,alpha=0.5,align='center',label='individual explained variance')
plt.step(range(1,14),cum_var_exp,where='mid',label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.show()
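#Optional check (a sketch, not in the original post): since cov_mat is symmetric,
#np.linalg.eigh would be the more robust choice (real, sorted eigenvalues); here we
#just verify that eig's output satisfies cov_mat @ V = V * lambda and that the
#explained variance ratios sum to 1.
assert np.allclose(cov_mat.dot(eigen_vecs),eigen_vecs*eigen_vals)
assert np.isclose(np.sum(var_exp),1.0)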
#Step 3: select the eigenvectors of the k largest eigenvalues and build the k-column projection matrix W
eigen_pairs=[(np.abs(eigen_vals[i]), eigen_vecs[:, i]) for i in range(len(eigen_vals))]
eigen_pairs.sort(key=lambda k:k[0],reverse=True) #sort by eigenvalue; sorting the bare tuples can fail when tie-breaking falls through to comparing the eigenvector arrays
w=np.hstack((eigen_pairs[0][1][:,np.newaxis],eigen_pairs[1][1][:,np.newaxis])) #take the top-2 eigenvectors to build the 13x2 projection matrix W
print ('Matrix W:\n',w)
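#Check (a sketch, not in the original post): the selected eigenvectors are unit-length
#and mutually orthogonal, so W.T @ W should be the 2x2 identity matrix.
assert np.allclose(w.T.dot(w),np.eye(2))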
#Step 4: use W to project the d=13-dimensional input data onto the new k=2-dimensional feature subspace
print (X_train_std[0].dot(w)) #transform a single sample (one row)
X_train_pca=X_train_std.dot(w) #transform the whole training set from 13 to 2 dimensions
X_test_pca=X_test_std.dot(w)
print (X_train_pca.shape)
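#Optional (a sketch, not in the original post): because the columns of W are orthonormal,
#X_train_pca.dot(w.T) is the rank-2 reconstruction of the standardized data; its mean
#squared residual is the variance left in the 11 discarded components.
X_train_rec=X_train_pca.dot(w.T)
print ('Reconstruction MSE:',np.mean((X_train_std-X_train_rec)**2))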
#Visualize the reduced samples in a 2D scatter plot
colors=['r','b','g']
markers=['s','x','o']
for l,c,m in zip(np.unique(y_train),colors,markers):
    plt.scatter(X_train_pca[y_train==l,0],X_train_pca[y_train==l,1],c=c,label=l,marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.show()
#Step 5: train a linear classifier on the transformed dataset
lr=LogisticRegression()
lr.fit(X_train_pca,y_train)
print ('Training accuracy:',lr.score(X_train_pca, y_train))
print ('Test accuracy:',lr.score(X_test_pca, y_test))
Result:

class labels: [1 2 3]

Eigenvalues
[ 4.8923083   2.46635032  1.42809973  1.01233462  0.84906459  0.60181514
  0.52251546  0.08414846  0.33051429  0.29595018  0.16831254  0.21432212
  0.2399553 ]
(13, 13)
Matrix W:
[[ 0.14669811  0.50417079]
 [-0.24224554  0.24216889]
 [-0.02993442  0.28698484]
 [-0.25519002 -0.06468718]
 [ 0.12079772  0.22995385]
 [ 0.38934455  0.09363991]
 [ 0.42326486  0.01088622]
 [-0.30634956  0.01870216]
 [ 0.30572219  0.03040352]
 [-0.09869191  0.54527081]
 [ 0.30032535 -0.27924322]
 [ 0.36821154 -0.174365  ]
 [ 0.29259713  0.36315461]]
[ 2.59891628  0.00484089]
(124, 2)
Training accuracy: 0.967741935483871
Test accuracy: 0.98148148148148151
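For comparison, the same pipeline can be written with scikit-learn's built-in PCA class, which packages steps 2-4 (internally it uses SVD rather than an explicit covariance eigendecomposition, so component signs may differ from the manual W). A minimal sketch, reusing the standardized arrays from above:

from sklearn.decomposition import PCA

pca=PCA(n_components=2) #keep the top-2 principal components
X_train_pca=pca.fit_transform(X_train_std) #fit on the training data only
X_test_pca=pca.transform(X_test_std)
print ('Explained variance ratios:',pca.explained_variance_ratio_)
lr=LogisticRegression()
lr.fit(X_train_pca,y_train)
print ('Test accuracy:',lr.score(X_test_pca,y_test))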