Installing NGBoost, XGBoost, and CatBoost via the Tsinghua Mirror
pip install catboost -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install ngboost -i https://pypi.tuna.tsinghua.edu.cn/simple
pip install xgboost -i https://pypi.tuna.tsinghua.edu.cn/simple
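After installing, a quick import check confirms the packages are usable. A minimal sketch; the __version__ attributes are assumed to be exposed by all three libraries:

import ngboost
import xgboost
import catboost

# Print the installed versions to confirm each package imports cleanly
print(ngboost.__version__)
print(xgboost.__version__)
print(catboost.__version__)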
Commonly Used Prediction Models in Data Competitions: LGB, XGB, and ANN
LGB
LightGBM: with competition datasets growing ever larger, LightGBM is an excellent choice when you want high predictive accuracy while also reducing memory usage and speeding up training; it achieves accuracy comparable to XGBoost.
import gc
import lightgbm as lgb

def LGB_predict(train_x, train_y, test_x, res, index):
    print("LGB test")
    clf = lgb.LGBMClassifier(
        boosting_type='gbdt', num_leaves=31, reg_alpha=0.0, reg_lambda=1,
        max_depth=-1, n_estimators=5000, objective='binary',
        subsample=0.7, colsample_bytree=0.7, subsample_freq=1,
        learning_rate=0.05, min_child_weight=50, random_state=2018, n_jobs=-1)
    # Early stopping monitors AUC on the eval set (here the training set itself,
    # so a held-out set would make the early stopping more meaningful)
    clf.fit(train_x, train_y, eval_set=[(train_x, train_y)], eval_metric='auc',
            early_stopping_rounds=100)
    # Probability of the positive class, rounded to 6 decimal places
    res['score' + str(index)] = clf.predict_proba(test_x)[:, 1]
    res['score' + str(index)] = res['score' + str(index)].apply(lambda x: float('%.6f' % x))
    print(str(index) + ' predict finish!')
    gc.collect()
    res = res.reset_index(drop=True)
    return res['score' + str(index)]
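A hedged usage sketch for the function above. The file names and the 'label' and 'id' columns are hypothetical; train_x, train_y, and test_x are assumed to be pandas objects:

import pandas as pd

# Hypothetical data: feature columns plus a binary 'label' column in train.csv
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

train_x = train.drop(columns=['label'])
train_y = train['label']
test_x = test.drop(columns=['id'])
res = test[['id']].copy()   # res collects one score column per model index

res['score1'] = LGB_predict(train_x, train_y, test_x, res, 1)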
XGB

XGBoost: before LightGBM came along, it was the go-to choice for competitions, and it is still needed today for model ensembling to improve predictive accuracy.
import numpy as np
import xgboost as xgb

def XGB_predict(train_x, train_y, val_X, val_Y, test_x, res):
    print("XGB test")
    # Build the DMatrix datasets for xgboost
    xgb_val = xgb.DMatrix(val_X, label=val_Y)
    xgb_train = xgb.DMatrix(train_x, label=train_y)
    xgb_test = xgb.DMatrix(test_x)
    # Specify the configuration as a dict
    params = {
        'booster': 'gbtree',
        # 'objective': 'multi:softmax',   # multi-class classification
        # 'objective': 'multi:softprob',  # multi-class probabilities
        'objective': 'binary:logistic',
        'eval_metric': 'auc',
        # 'num_class': 9,  # number of classes, used together with multi:softmax
        'gamma': 0.1,      # controls post-pruning; larger is more conservative, typically 0.1 or 0.2
        'max_depth': 8,    # tree depth; deeper trees overfit more easily
        'alpha': 0,        # L1 regularization coefficient
        'lambda': 10,      # L2 regularization on weights; larger values make overfitting less likely
        'subsample': 0.7,  # random row sampling of the training data
        'colsample_bytree': 0.5,  # column sampling when building each tree
        'min_child_weight': 3,
        # Defaults to 1: the minimum sum of the Hessian h in each leaf. For imbalanced
        # 0-1 classification, if h is around 0.01, min_child_weight=1 means a leaf must
        # contain at least 100 samples. This parameter strongly affects results: it bounds
        # the sum of second derivatives in a leaf, and smaller values overfit more easily.
        'silent': 0,       # 1 suppresses run-time messages; 0 is usually the better choice
        'eta': 0.03,       # analogous to the learning rate
        'seed': 1000,
        'nthread': -1,     # number of CPU threads
        'missing': 1,
        'scale_pos_weight': (np.sum(train_y == 0) / np.sum(train_y == 1))
        # Handles class imbalance; usually sum(negative cases) / sum(positive cases)
    }
    plst = list(params.items())
    num_rounds = 5000  # number of boosting iterations
    watchlist = [(xgb_train, 'train'), (xgb_val, 'val')]
    # Cross-validation:
    # result = xgb.cv(plst, xgb_train, num_boost_round=200, nfold=4,
    #                 early_stopping_rounds=200, verbose_eval=True,
    #                 folds=StratifiedKFold(n_splits=4).split(X, y))
    # Train the model. With a large iteration budget, early_stopping_rounds stops
    # training once the metric has not improved for the given number of rounds.
    model = xgb.train(plst, xgb_train, num_rounds, watchlist, early_stopping_rounds=200)
    res['score'] = model.predict(xgb_test)
    res['score'] = res['score'].apply(lambda x: float('%.6f' % x))
    return res
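A hedged usage sketch: since XGB_predict expects a separate validation set for early stopping, one option is to hold out part of the training data with scikit-learn's train_test_split (the split ratio and variable names are hypothetical):

from sklearn.model_selection import train_test_split

# Hold out 20% of the training data so early stopping has a genuine
# validation set to monitor
X_tr, X_val, y_tr, y_val = train_test_split(
    train_x, train_y, test_size=0.2, random_state=2018, stratify=train_y)

res = XGB_predict(X_tr, y_tr, X_val, y_val, test_x, res)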
ANN

ANN: thanks to advances in computing and GPU performance, and to tools such as Keras, TensorFlow, and PyTorch, an artificial neural network can also serve as one of the sub-models in the final ensemble, effectively improving the final predictions.
import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Embedding
from sklearn.preprocessing import Imputer, StandardScaler  # Imputer: sklearn < 0.22
from sklearn.utils import class_weight

# Impute missing values with column means, then standardize
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
X_train = imp.fit_transform(X_train)
sc = StandardScaler(with_mean=False)
sc.fit(X_train)
X_train = sc.transform(X_train)
val_X = sc.transform(val_X)
X_test = sc.transform(X_test)

ann_scale = 1

model = Sequential()
# For sequence inputs an Embedding layer could be used as the first layer instead;
# it expects integer sequences and the constants EMBEDDING_DIM / MAX_SEQUENCE_LENGTH:
# model.add(Embedding(X_train.shape[1] + 1, EMBEDDING_DIM, input_length=MAX_SEQUENCE_LENGTH))
model.add(Dense(int(256 / ann_scale), input_shape=(X_train.shape[1],)))
model.add(Activation('tanh'))
model.add(Dropout(0.3))
model.add(Dense(int(512 / ann_scale)))
model.add(Activation('relu'))
model.add(Dropout(0.3))
model.add(Dense(int(512 / ann_scale)))
model.add(Activation('tanh'))
model.add(Dropout(0.3))
model.add(Dense(int(256 / ann_scale)))
model.add(Activation('linear'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.summary()

# Balance the classes by weighting the loss
class_weight1 = class_weight.compute_class_weight('balanced', np.unique(y), y)

# -----------------------------------------------------------------------------
# AUC for a binary classifier, approximated by integrating the ROC curve
def auc(y_true, y_pred):
    ptas = tf.stack([binary_PTA(y_true, y_pred, k) for k in np.linspace(0, 1, 1000)], axis=0)
    pfas = tf.stack([binary_PFA(y_true, y_pred, k) for k in np.linspace(0, 1, 1000)], axis=0)
    pfas = tf.concat([tf.ones((1,)), pfas], axis=0)
    binSizes = -(pfas[1:] - pfas[:-1])
    s = ptas * binSizes
    return K.sum(s, axis=0)

# PFA: probability of a false alert for a binary classifier
def binary_PFA(y_true, y_pred, threshold=K.variable(value=0.5)):
    y_pred = K.cast(y_pred >= threshold, 'float32')
    N = K.sum(1 - y_true)                  # total number of negative labels
    FP = K.sum(y_pred - y_pred * y_true)   # false alerts from the negative class
    return FP / N

# PTA: probability of a true alert for a binary classifier
def binary_PTA(y_true, y_pred, threshold=K.variable(value=0.5)):
    y_pred = K.cast(y_pred >= threshold, 'float32')
    P = K.sum(y_true)                      # total number of positive labels
    TP = K.sum(y_pred * y_true)            # correct alerts from the positive class
    return TP / P
# -----------------------------------------------------------------------------

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              # metrics=['accuracy'],
              metrics=[auc])
epochs = 100
model.fit(X_train, y, epochs=epochs, batch_size=2000,
          validation_data=(val_X, val_y), shuffle=True,
          class_weight=class_weight1)
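The descriptions above mention model fusion; a minimal sketch of what that can look like, assuming the LGB_predict, XGB_predict, and ANN steps have all been run on the same test set. The blend weights are hypothetical and would normally be tuned on a validation set:

# Hypothetical weighted blend of the three sub-models' test scores
ann_score = model.predict(X_test)[:, 0]    # ANN test-set probabilities
res['blend'] = (0.4 * res['score1']        # LightGBM score from LGB_predict
                + 0.4 * res['score']       # XGBoost score from XGB_predict
                + 0.2 * ann_score)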