XGBoost vs. RandomForest, GBDT, Decision Trees, and SVM: XGB+LR Pushes Accuracy Even Higher
Contents
- Goal: compare the models
- What is compared
- Final results
- Code
Goal: compare the models
Comparing these models, XGBoost essentially sentences RandomForest, GBDT, plain decision trees, SVM and the rest to death, and stacking XGB+LR pushes accuracy even higher.
XGBoost: currently the ceiling among tree models. Of the tree-based learners compared here, XGBoost reaches the highest accuracy, and its training speed is acceptable, which is why it dominates structured-data competitions.
In addition, the experiment below shows that XGBoost+LR improves accuracy a little further.
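For readers unfamiliar with the stacking trick, here is a minimal sketch of the XGBoost+LR idea that the full script below implements: the trained booster maps each sample to the leaf it reaches in every tree, those leaf indices are one-hot encoded, and a logistic regression is trained on the encoded features. Variable names and the `max_iter` setting are illustrative, and the sketch assumes the data splits created in the full script (`X_train`, `y_train`, `X_train_lr`, `y_train_lr`, `X_test`).

```python
import xgboost as xgb
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# 1) Train the booster on the first half of the training data.
booster = xgb.XGBClassifier(n_estimators=50, max_depth=5)
booster.fit(X_train, y_train)

# 2) apply() maps each sample to the index of the leaf it reaches in every tree
#    (shape: n_samples x n_trees); one-hot encode these categorical leaf indices.
encoder = OneHotEncoder()
encoder.fit(booster.apply(X_train))

# 3) Fit the logistic regression on the encoded leaves of the *other* half,
#    so it is not trained on leaves the trees were already fit to.
lr = LogisticRegression(max_iter=1000)
lr.fit(encoder.transform(booster.apply(X_train_lr)), y_train_lr)

# 4) Predict: raw features -> leaf indices -> one-hot vectors -> LR probability.
proba = lr.predict_proba(encoder.transform(booster.apply(X_test)))[:, 1]
```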
What is compared:
The models are compared on the following points:
1. AUC on the held-out test set
2. Time for end-to-end training plus prediction (see the sketch after this list)
3. Strengths and weaknesses of each algorithm
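As a concrete illustration of points 1 and 2, the helper below fits any scikit-learn-style classifier, times the end-to-end run, and scores AUC on the test set. The helper name `evaluate` and its signature are illustrative only; the full script later uses a timing decorator instead.

```python
import time
from sklearn.metrics import roc_auc_score

def evaluate(model, X_train, y_train, X_test, y_test):
    """Fit `model`, measure end-to-end wall-clock time, and score test-set AUC."""
    start = time.perf_counter()
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]  # probability of the positive class
    elapsed = time.perf_counter() - start
    return roc_auc_score(y_test, proba), elapsed
```

Usage is simply `auc, seconds = evaluate(LogisticRegression(), X_train, y_train, X_test, y_test)`.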
Final results:
The results on 400,000 samples are as follows:
| Model | Test AUC | Runtime (s) |
| --- | --- | --- |
| XGBoost | 0.9688 | 7.5972 |
| XGBoost + LR | 0.9724 | 13.1655 |
| RF + LR | 0.92432 | 2.3115 |
| GBDT + LR | 0.96249 | 18.1669 |
| LR | 0.9337 | 0.3479 |
| SVM | 0.8703 | 1104.25 |
We can see that:
- XGBoost beats GBDT and the other tree models on accuracy, and its runtime is acceptable. If you want to squeeze out a bit more accuracy, XGBoost + LR pushes it a little higher still, which is worth trying in data-mining competitions.
- LR trains and responds remarkably fast, handling the 400,000 samples in about 0.35 s, but its accuracy is not particularly high.
- SVM is by far the most expensive (over 1100 s here), so it is gradually falling out of use for this kind of task.
Code:
A decorator is used to time each function, which avoids repeating the same timing boilerplate in every model function.
```python
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import functools
import time

import numpy as np
import matplotlib.pyplot as plt
import xgboost as xgb
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder


def runtime_decorator(function):
    """Print the wall-clock runtime of the wrapped function."""
    @functools.wraps(function)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = function(*args, **kwargs)
        end = time.time()
        print("function runtime is", end - start, "S")
        return result
    return wrapper


np.random.seed(10000)
n_estimator = 10

X, y = make_classification(n_samples=400000, n_features=100, shuffle=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.8)
# Fit the tree models and the stacked LR on different halves to avoid overfitting.
X_train, X_train_lr, y_train, y_train_lr = train_test_split(X_train, y_train, test_size=0.5)


@runtime_decorator
def RandomForestLR():
    rf = RandomForestClassifier(max_depth=3, n_estimators=n_estimator)
    rf_enc = OneHotEncoder()
    rf_lr = LogisticRegression()
    rf.fit(X_train, y_train)
    rf_enc.fit(rf.apply(X_train))
    rf_lr.fit(rf_enc.transform(rf.apply(X_train_lr)), y_train_lr)
    y_pred_rf_lr = rf_lr.predict_proba(rf_enc.transform(rf.apply(X_test)))[:, 1]
    fpr_rf_lr, tpr_rf_lr, _ = roc_curve(y_test, y_pred_rf_lr)
    auc = roc_auc_score(y_test, y_pred_rf_lr)
    print("RF+LR:", auc)
    return fpr_rf_lr, tpr_rf_lr


@runtime_decorator
def GdbtLR():
    grd = GradientBoostingClassifier(n_estimators=n_estimator)
    grd_enc = OneHotEncoder()
    grd_lr = LogisticRegression()
    grd.fit(X_train, y_train)
    # GradientBoostingClassifier.apply() returns (n_samples, n_estimators, n_classes);
    # take [:, :, 0] to get a 2-D array of leaf indices in the binary case.
    grd_enc.fit(grd.apply(X_train)[:, :, 0])
    grd_lr.fit(grd_enc.transform(grd.apply(X_train_lr)[:, :, 0]), y_train_lr)
    y_pred_grd_lr = grd_lr.predict_proba(grd_enc.transform(grd.apply(X_test)[:, :, 0]))[:, 1]
    fpr_grd_lr, tpr_grd_lr, _ = roc_curve(y_test, y_pred_grd_lr)
    auc = roc_auc_score(y_test, y_pred_grd_lr)
    print("GBDT+LR:", auc)
    return fpr_grd_lr, tpr_grd_lr


@runtime_decorator
def Gdbt():
    grd = GradientBoostingClassifier(n_estimators=n_estimator)
    grd.fit(X_train, y_train)
    y_pred_grd = grd.predict_proba(X_test)[:, 1]
    fpr_grd, tpr_grd, _ = roc_curve(y_test, y_pred_grd)
    auc = roc_auc_score(y_test, y_pred_grd)
    print("GBDT:", auc)
    return fpr_grd, tpr_grd


@runtime_decorator
def Xgboost():
    xgboost = xgb.XGBClassifier(nthread=4, learning_rate=0.08,
                                n_estimators=50, max_depth=5, gamma=0,
                                subsample=0.9, colsample_bytree=0.5)
    xgboost.fit(X_train, y_train)
    y_xgboost_test = xgboost.predict_proba(X_test)[:, 1]
    fpr_xgboost, tpr_xgboost, _ = roc_curve(y_test, y_xgboost_test)
    auc = roc_auc_score(y_test, y_xgboost_test)
    print("Xgboost:", auc)
    return fpr_xgboost, tpr_xgboost


@runtime_decorator
def Lr():
    # lm = LogisticRegression(n_jobs=4, C=0.1, penalty='l2')
    lm = LogisticRegression()
    lm.fit(X_train, y_train)
    y_lr_test = lm.predict_proba(X_test)[:, 1]
    fpr_lr, tpr_lr, _ = roc_curve(y_test, y_lr_test)
    auc = roc_auc_score(y_test, y_lr_test)
    print("LR:", auc)
    return fpr_lr, tpr_lr


@runtime_decorator
def XgboostLr():
    xgboost = xgb.XGBClassifier(nthread=4, learning_rate=0.08,
                                n_estimators=50, max_depth=5, gamma=0,
                                subsample=0.9, colsample_bytree=0.5)
    xgb_enc = OneHotEncoder()
    xgb_lr = LogisticRegression(n_jobs=4, C=0.1, penalty='l2')  # Xgboost + LR: 0.973, function runtime is 8.22 S
    # xgb_lr = LogisticRegression()  # Xgboost + LR: 0.9723809376004443, function runtime is 7.22 S
    xgboost.fit(X_train, y_train)
    xgb_enc.fit(xgboost.apply(X_train))
    xgb_lr.fit(xgb_enc.transform(xgboost.apply(X_train_lr)), y_train_lr)
    y_xgb_lr_test = xgb_lr.predict_proba(xgb_enc.transform(xgboost.apply(X_test)))[:, 1]
    fpr_xgb_lr, tpr_xgb_lr, _ = roc_curve(y_test, y_xgb_lr_test)
    auc = roc_auc_score(y_test, y_xgb_lr_test)
    print("Xgboost + LR:", auc)
    return fpr_xgb_lr, tpr_xgb_lr


@runtime_decorator
def Svm():
    Svc = svm.SVC()
    Svc.fit(X_train, y_train)
    # predict() gives hard labels, so the ROC curve only has one operating point.
    y_svm_test = Svc.predict(X_test)
    fpr_svm, tpr_svm, _ = roc_curve(y_test, y_svm_test)
    auc = roc_auc_score(y_test, y_svm_test)
    print("SVM:", auc)
    return fpr_svm, tpr_svm


if __name__ == '__main__':
    fpr_rf_lr, tpr_rf_lr = RandomForestLR()
    fpr_grd_lr, tpr_grd_lr = GdbtLR()
    fpr_xgboost, tpr_xgboost = Xgboost()
    fpr_lr, tpr_lr = Lr()
    fpr_xgb_lr, tpr_xgb_lr = XgboostLr()
    fpr_svm, tpr_svm = Svm()

    # Full ROC curves for all models.
    plt.figure(1)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.plot(fpr_rf_lr, tpr_rf_lr, label='RF + LR')
    plt.plot(fpr_grd_lr, tpr_grd_lr, label='GBT + LR')
    plt.plot(fpr_xgboost, tpr_xgboost, label='XGB')
    plt.plot(fpr_lr, tpr_lr, label='LR')
    plt.plot(fpr_xgb_lr, tpr_xgb_lr, label='XGB + LR')
    plt.plot(fpr_svm, tpr_svm, label='SVM')
    plt.xlabel('False positive rate')
    plt.ylabel('True positive rate')
    plt.title('ROC curve')
    plt.legend(loc='best')
    plt.show()

    # Same curves, zoomed in on the top-left corner.
    plt.figure(2)
    plt.xlim(0, 0.2)
    plt.ylim(0.8, 1)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.plot(fpr_rf_lr, tpr_rf_lr, label='RF + LR')
    plt.plot(fpr_grd_lr, tpr_grd_lr, label='GBT + LR')
    plt.plot(fpr_xgboost, tpr_xgboost, label='XGB')
    plt.plot(fpr_lr, tpr_lr, label='LR')
    plt.plot(fpr_xgb_lr, tpr_xgb_lr, label='XGB + LR')
    plt.plot(fpr_svm, tpr_svm, label='SVM')
    plt.xlabel('False positive rate')
    plt.ylabel('True positive rate')
    plt.title('ROC curve (zoomed in at top left)')
    plt.legend(loc='best')
    plt.show()
```
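Two details in the script are easy to trip over, so a short clarification (mine, not the original author's): the `apply()` methods do not all return the same shape, which is why only the GBDT branch needs the `[:, :, 0]` indexing, and the training data is split in two so the stacked LR is fit on leaf assignments of samples the trees never saw. A quick shape check, assuming `rf`, `grd` and `xgboost` are the fitted models from the functions above:

```python
# Leaf-index shapes in the binary case (comments are illustrative):
print(rf.apply(X_test).shape)       # (n_samples, n_estimator)       -> already 2-D
print(grd.apply(X_test).shape)      # (n_samples, n_estimator, 1)    -> needs [:, :, 0]
print(xgboost.apply(X_test).shape)  # (n_samples, n_boosting_rounds) -> already 2-D
```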