UDA Machine Learning Basics: Cross-Validation
The purpose of cross-validation is to have more data points available for training, to get the best possible learning result, while also having more test data available, to get the best possible validation. The key idea is to split the data evenly into k bins. In k-fold cross-validation you run k separate trials: in each trial you pick one of the k bins as the test set, use the remaining k-1 bins as the training set, train your model, and then evaluate it on the held-out bin. After running all k trials you have k different test sets, and averaging their performance gives you the cross-validated score. In this way you have effectively used all of the data for training and all of the data for testing.
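The procedure above maps directly onto scikit-learn's `cross_val_score` helper. A minimal sketch (the iris dataset and the k=5 choice are illustrative assumptions, not from the original mini-project):

```python
# Hedged sketch: k-fold cross-validation via scikit-learn's cross_val_score.
# The dataset and classifier here are stand-ins for illustration only.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=42)

# cv=5 splits the data into 5 folds, trains on 4 folds and tests on the
# held-out fold, repeating 5 times; averaging the scores gives the
# cross-validated accuracy described above.
scores = cross_val_score(clf, X, y, cv=5)
print("per-fold accuracy:", scores)
print("mean accuracy: %.3f" % scores.mean())
```

`cross_val_score` handles the splitting, looping, and scoring for you; the manual equivalent with `KFold` is only needed when you want control over each fold.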
#!/usr/bin/python
"""Starter code for the validation mini-project.

The first step toward building your POI identifier!
Start by loading/formatting the data.
After that, it's not our code anymore -- it's yours!
"""
import pickle
import sys
sys.path.append("../tools/")
from feature_format import featureFormat, targetFeatureSplit
from sklearn.metrics import accuracy_score
# sklearn.cross_validation was removed in newer scikit-learn versions;
# train_test_split now lives in sklearn.model_selection
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# pickle files must be opened in binary mode
data_dict = pickle.load(open("../final_project/final_project_dataset.pkl", "rb"))

### first element is our labels, any added elements are predictor
### features. Keep this the same for the mini-project, but you'll
### have a different feature list when you do the final project.
features_list = ["poi", "salary"]

data = featureFormat(data_dict, features_list)
labels, features = targetFeatureSplit(data)

# hold out 30% of the data as a test set
features_train, features_test, labels_train, labels_test = train_test_split(
    features, labels, test_size=0.3, random_state=42)

clf = DecisionTreeClassifier()
clf.fit(features_train, labels_train)
pred = clf.predict(features_test)
print(accuracy_score(labels_test, pred))  ### it's all yours from here forward!
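The starter code above uses a single train/test split. The same evaluation can be done with explicit k-fold splits as described earlier; a hedged sketch follows (synthetic data stands in for the Enron dataset, which is not bundled here, and `n_splits=5` is an illustrative choice):

```python
# Hedged sketch: replacing the single train_test_split with a manual
# k-fold loop. The synthetic data below is a stand-in for the real
# ("poi", "salary") features from the mini-project.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(42)
X = rng.rand(100, 1)             # one feature, playing the role of "salary"
y = (X[:, 0] > 0.5).astype(int)  # binary label, playing the role of "poi"

kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in kf.split(X):
    # each trial: train on k-1 folds, test on the held-out fold
    clf = DecisionTreeClassifier(random_state=42)
    clf.fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    scores.append(accuracy_score(y[test_idx], pred))

print("per-fold accuracy:", scores)
print("mean accuracy:", sum(scores) / len(scores))
```

Each data point is used for testing exactly once and for training k-1 times, which is what makes the averaged score a less noisy estimate than a single split.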
Reposted from: https://www.cnblogs.com/fuhang/p/8512977.html