Predicting NBA Game Results with a Decision Tree Algorithm
Introduction to the Decision Tree Algorithm

A decision tree is a tree structure (either binary or non-binary). Each non-leaf node represents a test on a feature attribute, each branch represents the outcome of that test over some range of values, and each leaf node stores a class label. To make a decision with a decision tree, start at the root node, test the corresponding feature attribute of the item being classified, follow the branch that matches its value, and repeat until a leaf node is reached; the class stored at that leaf is the decision result.

To summarize, the core of the decision tree model is:

- It is composed of nodes and directed edges.
- Nodes come in two types: internal nodes and leaf nodes.
- An internal node represents a feature; a leaf node represents a class.
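The structure above can be sketched as a tiny hand-rolled tree. The two features (`home_rank_higher`, `home_won_last`) are hypothetical stand-ins for the kind of features engineered later in this article; a real tree would be learned from data rather than written by hand:

```python
# A minimal decision tree: internal nodes test a feature, branches
# correspond to the feature's values, and leaves store a class label.
tree = {
    "feature": "home_rank_higher",
    "branches": {
        True: {"label": "home win"},
        False: {
            "feature": "home_won_last",
            "branches": {
                True: {"label": "home win"},
                False: {"label": "visitor win"},
            },
        },
    },
}

def classify(node, sample):
    """Walk from the root: test the current node's feature, follow the
    branch matching the sample's value, repeat until a leaf is reached."""
    while "label" not in node:
        node = node["branches"][sample[node["feature"]]]
    return node["label"]

print(classify(tree, {"home_rank_higher": False, "home_won_last": True}))
# → home win
```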
加載數(shù)據(jù)集
```python
import numpy as np
import pandas as pd

file = "NBA2014.csv"
data = pd.read_csv(file)
data.iloc[:5]
```

Data Preprocessing
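As an aside, pandas can parse date columns at read time via `parse_dates`, which the preprocessing code below relies on. A minimal sketch with a hypothetical one-game CSV in the same shape as the results file:

```python
from io import StringIO
import pandas as pd

# Hypothetical one-game sample; only the columns needed for the sketch
csv = StringIO(
    "Date,Visitor Team,VisitorPts,Home Team,HomePts\n"
    "2013-10-29,Orlando Magic,87,Indiana Pacers,97\n"
)
frame = pd.read_csv(csv, parse_dates=["Date"])
print(frame["Date"].dtype)  # datetime64[ns], not a plain object/string column
```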
```python
# Skip the first row, as it is blank, and parse the date column as a date
data = pd.read_csv(file, parse_dates=[0])
data.columns = ["Date", "Start", "Visitor Team", "VisitorPts", "Home Team",
                "HomePts", "Score Type", "OT?", "Attend", "Notes"]
data.iloc[:5]

# The class label: did the home team win?
data["Home Win"] = data["VisitorPts"] < data["HomePts"]
y_true = data["Home Win"].values
print("Home Team Win Percentage: {0:.1f}%".format(np.mean(y_true) * 100))

# Two features: did each team win its previous game?
data["HomeLastWin"] = False
data["VisitorLastWin"] = False

# A dict storing each team's last result; defaultdict(int) yields 0
# (falsy) for a team's first game of the season
from collections import defaultdict
won_last = defaultdict(int)

for index, row in data.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    row["HomeLastWin"] = won_last[home_team]
    row["VisitorLastWin"] = won_last[visitor_team]
    data.iloc[index] = row
    # Record the current result for each team's next game
    won_last[home_team] = row["Home Win"]
    won_last[visitor_team] = not row["Home Win"]

data.iloc[20:25]
```

Building the Model
```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation in older versions

clf = DecisionTreeClassifier(random_state=14)

# Create the dataset from the two "last win" features
X_win = data[["HomeLastWin", "VisitorLastWin"]].values
scores = cross_val_score(clf, X_win, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))
```

A New Feature: Season Standings
```python
import chardet

file = "NBA2013_expanded-standings.csv"
with open(file, "rb") as f:
    result = chardet.detect(f.read())  # or f.readline() if the file is large
standings = pd.read_csv(file, skiprows=[0], encoding=result["encoding"])

# New feature: is the home team ranked higher in last season's standings?
data["HomeTeamRankHigher"] = 0
for index, row in data.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    # The team was renamed between the two seasons
    if home_team == "New Orleans Pelicans":
        home_team = "New Orleans Hornets"
    elif visitor_team == "New Orleans Pelicans":
        visitor_team = "New Orleans Hornets"
    home_rank = standings[standings["Team"] == home_team]["Rk"].values[0]
    visitor_rank = standings[standings["Team"] == visitor_team]["Rk"].values[0]
    row["HomeTeamRankHigher"] = int(home_rank > visitor_rank)
    data.iloc[index] = row
data.iloc[:5]

# Train on the expanded feature set
X_homehigher = data[["HomeLastWin", "VisitorLastWin", "HomeTeamRankHigher"]].values
clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_homehigher, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

# Another feature: who won the last meeting between these two teams?
last_match_winner = defaultdict(str)
data["HomeTeamWonLast"] = 0
for index, row in data.iterrows():
    home_team = row["Home Team"]
    visitor_team = row["Visitor Team"]
    # Sort the team names so the key is the same regardless of venue
    teams = tuple(sorted([home_team, visitor_team]))
    # Did the home team win the last game between this pair?
    row["HomeTeamWonLast"] = 1 if last_match_winner[teams] == row["Home Team"] else 0
    data.iloc[index] = row
    winner = row["Home Team"] if row["Home Win"] else row["Visitor Team"]
    last_match_winner[teams] = winner
data.iloc[:5]

# Create the dataset
X_lastwinner = data[["HomeTeamRankHigher", "HomeTeamWonLast"]].values
clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_lastwinner, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

# Convert the string team names into integers
from sklearn.preprocessing import LabelEncoder
encoding = LabelEncoder()
encoding.fit(data["Home Team"].values)
home_teams = encoding.transform(data["Home Team"].values)
visitor_teams = encoding.transform(data["Visitor Team"].values)
X_teams = np.vstack([home_teams, visitor_teams]).T

# Encode these integers as a set of binary (one-hot) features
from sklearn.preprocessing import OneHotEncoder
onehot = OneHotEncoder()
X_teams_expanded = onehot.fit_transform(X_teams).todense()

clf = DecisionTreeClassifier(random_state=14)
scores = cross_val_score(clf, X_teams_expanded, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))
```

A New Model: Random Forest
```python
# Use a random forest instead of a single tree
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(random_state=14)
scores = cross_val_score(clf, X_teams_expanded, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

# Combine all the features
X_all = np.hstack([X_lastwinner, X_teams_expanded])
print("X_all shape: {0}".format(X_all.shape))
clf = RandomForestClassifier(random_state=14)
scores = cross_val_score(clf, X_all, y_true, scoring="accuracy")
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))

# Tune the forest with a grid search over a few key parameters;
# the remaining RandomForestClassifier parameters keep their defaults
# (max_depth=None, min_samples_split=2, bootstrap=True, ...)
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older versions
parameter_space = {
    "max_features": [2, 10, "auto"],
    "n_estimators": [100],
    "criterion": ["gini", "entropy"],
    "min_samples_leaf": [2, 4, 6],
}
clf = RandomForestClassifier(random_state=14)
grid = GridSearchCV(clf, parameter_space)
grid.fit(X_all, y_true)
print("Accuracy: {0:.1f}%".format(grid.best_score_ * 100))
```

Summary
This walkthrough built a decision tree model for NBA game results, improved it step by step with engineered features (each team's last result, season standings, the winner of the teams' previous meeting, and one-hot encoded team identities), and finally switched to a random forest tuned with a grid search. Hopefully it helps you solve the problems you run into.