ML_SVM
Study notes for the "100 Days of Machine Learning" series. References: 機器學習100天 (Chinese translation) and 100-Days-Of-ML-Code (original English version).
Code walkthrough:
Step 1: Import the libraries
```python
# Step 1: Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```

Step 2: Import the dataset
```python
# Step 2: Importing the dataset
dataset = pd.read_csv('D:/daily/機器學習100天/100-Days-Of-ML-Code-中文版本/100-Days-Of-ML-Code-master/datasets/Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
```

Step 3: Split the dataset into training and test sets
```python
# Step 3: Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
```

Step 4: Feature scaling
```python
# Step 4: Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```

X_train after feature scaling:
```
[[ 0.58164944 -0.88670699]
 [-0.60673761  1.46173768]
 [-0.01254409 -0.5677824 ]
 [-0.60673761  1.89663484]
 [ 1.37390747 -1.40858358]
 [ 1.47293972  0.99784738]
 [ 0.08648817 -0.79972756]
 [-0.01254409 -0.24885782]
 [-0.21060859 -0.5677824 ]
 ...]
```

In my view, the feature-scaling step is very important: it speeds up convergence, and it matters even more in deep learning (for example, it helps avoid exploding gradients).
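As a quick sanity check of the scaling step, here is a minimal sketch of my own (reusing the sc, X_train, and X_test variables from above; it is not part of the original post):

```python
# Sanity check on the scaling step.
# The training columns should have ~0 mean and ~1 standard deviation;
# the test columns are only close to 0/1, because they are scaled with the
# mean and std learned from the training set.
print(X_train.mean(axis=0), X_train.std(axis=0))
print(X_test.mean(axis=0), X_test.std(axis=0))
print(sc.mean_, sc.scale_)  # the original (unscaled) column means and stds
```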
Step 5: Support Vector Machine
```python
# Step 5: Fitting SVM to the Training set
from sklearn.svm import SVC
classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)
```

The SVC here uses a linear kernel. The kernel parameter selects the kernel function used by the algorithm; the other options are 'poly' (polynomial kernel), 'rbf' (radial basis function / Gaussian kernel), 'sigmoid' (sigmoid kernel), and 'precomputed' (a user-supplied kernel matrix).
Note: 'precomputed' means you compute the kernel (Gram) matrix yourself in advance; the algorithm then no longer computes the kernel matrix internally from a kernel function, but uses the matrix you supply directly.
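A minimal sketch of how 'precomputed' is typically used (my own illustration, reusing X_train, X_test, and y_train from above; the variable names are hypothetical):

```python
# Illustration of kernel='precomputed': hand the Gram matrix to SVC instead of the raw features.
from sklearn.svm import SVC

gram_train = X_train @ X_train.T        # n_train x n_train linear kernel (Gram) matrix
gram_test = X_test @ X_train.T          # n_test x n_train: test samples vs. training samples

clf_pre = SVC(kernel='precomputed', random_state=0)
clf_pre.fit(gram_train, y_train)        # fit on the precomputed matrix, not on X_train
y_pred_pre = clf_pre.predict(gram_test) # should match the linear-kernel SVC above
```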
Step 6: Prediction
```python
# Step 6: Predicting the Test set results
y_pred = classifier.predict(X_test)
```

Step 7: Confusion matrix
```python
# Step 7: Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
cm = confusion_matrix(y_test, y_pred)
print(cm)                                      # print the confusion matrix
print(classification_report(y_test, y_pred))   # print the classification report
```

"Confusion" here simply means that samples of one class get predicted as another class.
Here is a reference link: 混淆矩陣 (confusion matrix).
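To make the idea concrete, here is a tiny toy example of my own (not from the original post and unrelated to this dataset):

```python
# Tiny illustration of "confusion": rows are the true classes, columns the predicted ones,
# so every off-diagonal entry counts samples of one class predicted as another.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 2, 2]
y_hat  = [0, 0, 1, 2, 2, 2]
print(confusion_matrix(y_true, y_hat))
# [[2 0 0]
#  [0 1 1]   <- one class-1 sample was predicted as class 2
#  [0 0 2]]
```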
Next, a few words about the classification_report function.
Its output includes:
precision: the fraction of samples predicted as a class that truly belong to it;
recall: the fraction of samples of a class that are correctly found;
f1-score: the harmonic mean of precision and recall, the closer to 1 the better;
support: the number of occurrences of each label;
the avg / total row gives the column-wise averages (the support column is the total); a small worked example follows below.
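Here is the promised worked example, a minimal sketch of my own using made-up labels rather than this dataset:

```python
# Hand-checking classification_report on tiny made-up labels.
from sklearn.metrics import classification_report, confusion_matrix

y_true = [0, 0, 0, 0, 1, 1]
y_hat  = [0, 0, 0, 1, 1, 1]
print(confusion_matrix(y_true, y_hat))
# [[3 1]
#  [0 2]]
# For class 1: precision = 2 / (2 + 1) ≈ 0.67, recall = 2 / 2 = 1.00,
# f1 = 2 * 0.67 * 1.00 / (0.67 + 1.00) ≈ 0.80, support = 2.
print(classification_report(y_true, y_hat))
```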
Step 8: Visualization
```python
# Step 8: Visualization
from matplotlib.colors import ListedColormap

# Decision regions on the training set
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

# Decision regions on the test set
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```
When the RBF kernel is used instead:
```
[[64  4]
 [ 3 29]]
              precision    recall  f1-score   support

           0       0.96      0.94      0.95        68
           1       0.88      0.91      0.89        32

    accuracy                           0.93       100
   macro avg       0.92      0.92      0.92       100
weighted avg       0.93      0.93      0.93       100
```
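The original post only shows the output above; a minimal sketch of the corresponding one-argument change (my reconstruction, reusing the variables from the earlier steps) would be:

```python
# Same pipeline as before, but with the RBF (Gaussian) kernel instead of the linear one.
from sklearn.svm import SVC

classifier = SVC(kernel='rbf', random_state=0)  # gamma defaults to 'scale'
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test)             # then re-run the Step 7 prints for cm and the report
```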
Related: SVM implementation with Anaconda / PyTorch
Full code:
```python
# SVM  2022/4/9
# Step 1: Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

# Step 2: Importing the dataset
dataset = pd.read_csv('D:/daily/機器學習100天/100-Days-Of-ML-Code-中文版本/100-Days-Of-ML-Code-master/datasets/Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values

# Step 3: Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 4: Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# Step 5: Fitting SVM to the Training set
from sklearn.svm import SVC
classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)

# Step 6: Predicting the Test set results
y_pred = classifier.predict(X_test)

# Step 7: Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
cm = confusion_matrix(y_test, y_pred)
print(cm)
print(classification_report(y_test, y_pred))

# Step 8: Visualization
from matplotlib.colors import ListedColormap

X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()

X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('red', 'green'))(i), label=j)
plt.title('SVM (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
```

Summary