Machine Learning Algorithms: Random Forest Decision Trees, a Brute-Force R Implementation from Scratch (2)
The previous post (Machine Learning Algorithms: A First Look at Decision Trees in Random Forests (1)) introduced the basic concepts of decision trees and the criterion for evaluating splits, and hand-computed the Gini impurity for a single variable and a single split. It covers the foundations; if they are unfamiliar, we recommend reading that post first.
This post trains a decision tree by brute force with hand-written R functions (interested readers are welcome to contribute a Python implementation of this code). The values computed by hand earlier serve as a positive control for verifying that the functions below are correct.
Training a decision tree: determining the split threshold at the root node
Gini impurity tells us which split is best at each step, but how do we find the optimal split variable and split threshold?
The most brute-force approach is to try every possible threshold of every variable and keep the combination with the lowest Gini impurity. This is not the fastest way to solve the problem, but it is the easiest to understand.
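As a reminder of the criterion from post (1): for a branch whose classes occur with proportions $p_i$, the Gini impurity is

$$G = \sum_i p_i (1 - p_i) = 1 - \sum_i p_i^2,$$

and a split is scored by the impurity of its two branches weighted by the fraction of samples each receives:

$$G_{\text{split}} = \frac{n_{\text{left}}}{n} G_{\text{left}} + \frac{n_{\text{right}}}{n} G_{\text{right}}.$$

The brute-force search below simply minimizes $G_{\text{split}}$ over all variables and thresholds.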
Defining the functions that compute Gini impurity
Start by constructing the example data set:

data <- data.frame(x=c(0, 0.5, 1.1, 1.8, 1.9, 2, 2.5, 3, 3.6, 3.7),
                   y=c(1, 0.5, 1.5, 2.1, 2.8, 2, 2.2, 3, 3.3, 3.5),
                   color=c(rep('blue', 3), rep('red', 2), rep('green', 5)))
data
##      x   y color
## 1  0.0 1.0  blue
## 2  0.5 0.5  blue
## 3  1.1 1.5  blue
## 4  1.8 2.1   red
## 5  1.9 2.8   red
## 6  2.0 2.0 green
## 7  2.5 2.2 green
## 8  3.0 3.0 green
## 9  3.6 3.3 green
## 10 3.7 3.5 green

First, define a function that computes the Gini impurity of a single branch.
Gini_impurity <- function(branch){
  # 'branch' is a vector of class labels for the samples in one branch
  len_branch <- length(branch)
  # An empty branch contributes no impurity
  if(len_branch == 0){
    return(0)
  }
  # Count how many samples fall into each class
  table_branch <- table(branch)
  # For a class with count x, the probability of drawing a sample of that
  # class and then mislabeling it is (x/total) * (1 - x/total)
  wrong_probability <- function(x, total) (x/total * (1 - x/total))
  # Gini impurity is the sum of this quantity over all classes
  return(sum(sapply(table_branch, wrong_probability, total=len_branch)))
}

A quick test gives the expected value:
Gini_impurity(c(rep('a', 2), rep('b', 3)))
## [1] 0.48

This matches the hand calculation: (2/5)(3/5) + (3/5)(2/5) = 0.48.
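Two further sanity checks (not in the original post, but easy to verify by hand): a pure branch should score 0, and the unsplit data set, with class proportions 0.3, 0.2, and 0.5, should score 1 - (0.3^2 + 0.2^2 + 0.5^2) = 0.62.

Gini_impurity(rep('a', 4))   # a pure branch: expected 0
Gini_impurity(data$color)    # 3 blue, 2 red, 5 green out of 10: expected 0.62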
Next, define a function that computes the total Gini impurity of one split: the impurity of each branch, weighted by the fraction of samples falling into that branch. If the impurity before the split is supplied, the function reports the Gini gain (the decrease in impurity) instead.

Gini_impurity_for_split_branch <- function(threshold, data, variable_column, class_column,
                                           Init_gini_impurity=NULL){
  total = nrow(data)
  # Samples with a variable value below the threshold go to the left branch
  left <- data[data[variable_column] < threshold, ][[class_column]]
  left_len = length(left)
  left_table = table(left)
  left_gini <- Gini_impurity(left)
  # Samples at or above the threshold go to the right branch
  right <- data[data[variable_column] >= threshold, ][[class_column]]
  right_len = length(right)
  right_table = table(right)
  right_gini <- Gini_impurity(right)
  # Total impurity: each branch weighted by its share of the samples
  total_gini <- left_gini * left_len / total + right_gini * right_len / total
  result = c(variable_column, threshold,
             paste(names(left_table), left_table, collapse="; ", sep=" x "),
             paste(names(right_table), right_table, collapse="; ", sep=" x "),
             total_gini)
  names(result) <- c("Variable", "Threshold", "Left_branch", "Right_branch", "Gini_impurity")
  # If the impurity before the split is given, report the Gini gain instead
  if(!is.null(Init_gini_impurity)){
    Gini_gain <- Init_gini_impurity - total_gini
    result = c(variable_column, threshold,
               paste(names(left_table), left_table, collapse="; ", sep=" x "),
               paste(names(right_table), right_table, collapse="; ", sep=" x "),
               Gini_gain)
    names(result) <- c("Variable", "Threshold", "Left_branch", "Right_branch", "Gini_gain")
  }
  return(result)
}

Test it; the results match the hand calculations from the previous post:
as.data.frame(rbind(Gini_impurity_for_split_branch(2, data, 'x', 'color'),
                    Gini_impurity_for_split_branch(2, data, 'y', 'color')))
##   Variable Threshold       Left_branch       Right_branch     Gini_impurity
## 1        x         2 blue x 3; red x 2          green x 5              0.24
## 2        y         2          blue x 3 green x 5; red x 2 0.285714285714286
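The Init_gini_impurity argument has not been exercised yet. A minimal check of that code path, using the unsplit impurity of 0.62 computed above: passing it in should replace the Gini_impurity field with a Gini_gain of 0.62 - 0.24 = 0.38.

init_gini <- Gini_impurity(data$color)  # impurity before any split: 0.62
# The last element of the result is now Gini_gain instead of Gini_impurity
Gini_impurity_for_split_branch(2, data, 'x', 'color', Init_gini_impurity=init_gini)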
Brute-forcing the root node variable and threshold

Using the functions defined above, iterate over every possible variable and threshold.
First, look at the calculation based on variable x:
uniq_x <- sort(unique(data$x))
delimiter_x <- zoo::rollmean(uniq_x, 2)
impurity_x <- as.data.frame(do.call(rbind,
                 lapply(delimiter_x, Gini_impurity_for_split_branch,
                        data=data, variable_column='x', class_column='color')))
print(impurity_x)
##   Variable Threshold                  Left_branch                 Right_branch     Gini_impurity
## 1        x      0.25                     blue x 1 blue x 2; green x 5; red x 2 0.533333333333333
## 2        x       0.8                     blue x 2 blue x 1; green x 5; red x 2             0.425
## 3        x      1.45                     blue x 3           green x 5; red x 2 0.285714285714286
## 4        x      1.85            blue x 3; red x 1           green x 5; red x 1 0.316666666666667
## 5        x      1.95            blue x 3; red x 2                    green x 5              0.24
## 6        x      2.25 blue x 3; green x 1; red x 2                    green x 4 0.366666666666667
## 7        x      2.75 blue x 3; green x 2; red x 2                    green x 3 0.457142857142857
## 8        x       3.3 blue x 3; green x 3; red x 2                    green x 2             0.525
## 9        x      3.65 blue x 3; green x 4; red x 2                    green x 1 0.577777777777778
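The zoo::rollmean(uniq_x, 2) call computes the midpoint between each pair of consecutive unique values, so every distinct way of cutting the sorted data in two is tried exactly once. If you prefer to avoid the zoo dependency, an equivalent in base R (an alternative, not what the code above uses) is:

uniq_x <- sort(unique(data$x))
# Average each unique value with its successor to get the midpoints
delimiter_x_base <- (head(uniq_x, -1) + tail(uniq_x, -1)) / 2
all.equal(delimiter_x_base, zoo::rollmean(uniq_x, 2))  # TRUE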
Next, wrap two more functions: one computes the Gini impurity of every possible split on a single variable, and the other runs that calculation with each variable in turn serving as the decision node.

Gini_impurity_for_all_possible_branches_of_one_variable <- function(data, variable, class,
                                                                    Init_gini_impurity=NULL){
  # Candidate thresholds: midpoints between consecutive unique values
  uniq_value <- sort(unique(data[[variable]]))
  delimiter_value <- zoo::rollmean(uniq_value, 2)
  impurity <- as.data.frame(do.call(rbind,
                 lapply(delimiter_value, Gini_impurity_for_split_branch,
                        data=data, variable_column=variable, class_column=class,
                        Init_gini_impurity=Init_gini_impurity)))
  # Sort ascending for Gini impurity (lower is better),
  # descending for Gini gain (higher is better)
  if(is.null(Init_gini_impurity)){
    decreasing = F
  } else {
    decreasing = T
  }
  impurity <- impurity[order(impurity[[colnames(impurity)[5]]], decreasing = decreasing), ]
  return(impurity)
}

Gini_impurity_for_all_possible_branches_of_all_variables <- function(data, variables, class,
                                                                     Init_gini_impurity=NULL){
  # Evaluate every candidate split of every variable and pool the results
  one_split_gini <- do.call(rbind,
                       lapply(variables, Gini_impurity_for_all_possible_branches_of_one_variable,
                              data=data, class=class, Init_gini_impurity=Init_gini_impurity))
  if(is.null(Init_gini_impurity)){
    decreasing = F
  } else {
    decreasing = T
  }
  one_split_gini[order(one_split_gini[[colnames(one_split_gini)[5]]], decreasing = decreasing), ]
}

Test it:
Gini_impurity_for_all_possible_branches_of_one_variable(data, 'x', 'color')
##   Variable Threshold                  Left_branch                 Right_branch     Gini_impurity
## 5        x      1.95            blue x 3; red x 2                    green x 5              0.24
## 3        x      1.45                     blue x 3           green x 5; red x 2 0.285714285714286
## 4        x      1.85            blue x 3; red x 1           green x 5; red x 1 0.316666666666667
## 6        x      2.25 blue x 3; green x 1; red x 2                    green x 4 0.366666666666667
## 2        x       0.8                     blue x 2 blue x 1; green x 5; red x 2             0.425
## 7        x      2.75 blue x 3; green x 2; red x 2                    green x 3 0.457142857142857
## 8        x       3.3 blue x 3; green x 3; red x 2                    green x 2             0.525
## 1        x      0.25                     blue x 1 blue x 2; green x 5; red x 2 0.533333333333333
## 9        x      3.65 blue x 3; green x 4; red x 2                    green x 1 0.577777777777778

Now evaluate every threshold of both variables, compute the Gini impurity of each split, and output the results sorted from smallest to largest Gini impurity. The split on variable x at threshold 1.95 (which yields the same partition as the threshold of 2 used above) gives the best result for this step.
variables <- c('x', 'y')
Gini_impurity_for_all_possible_branches_of_all_variables(data, variables, class="color")
##    Variable Threshold                  Left_branch                 Right_branch     Gini_impurity
## 5         x      1.95            blue x 3; red x 2                    green x 5              0.24
## 3         x      1.45                     blue x 3           green x 5; red x 2 0.285714285714286
## 31        y      1.75                     blue x 3           green x 5; red x 2 0.285714285714286
## 4         x      1.85            blue x 3; red x 1           green x 5; red x 1 0.316666666666667
## 6         x      2.25 blue x 3; green x 1; red x 2                    green x 4 0.366666666666667
## 41        y      2.05          blue x 3; green x 1           green x 4; red x 2 0.416666666666667
## 2         x       0.8                     blue x 2 blue x 1; green x 5; red x 2             0.425
## 21        y      1.25                     blue x 2 blue x 1; green x 5; red x 2             0.425
## 51        y      2.15 blue x 3; green x 1; red x 1           green x 4; red x 1              0.44
## 7         x      2.75 blue x 3; green x 2; red x 2                    green x 3 0.457142857142857
## 71        y       2.9 blue x 3; green x 2; red x 2                    green x 3 0.457142857142857
## 61        y       2.5 blue x 3; green x 2; red x 1           green x 3; red x 1 0.516666666666667
## 8         x       3.3 blue x 3; green x 3; red x 2                    green x 2             0.525
## 81        y      3.15 blue x 3; green x 3; red x 2                    green x 2             0.525
## 1         x      0.25                     blue x 1 blue x 2; green x 5; red x 2 0.533333333333333
## 11        y      0.75                     blue x 1 blue x 2; green x 5; red x 2 0.533333333333333
## 9         x      3.65 blue x 3; green x 4; red x 2                    green x 1 0.577777777777778
## 91        y       3.4 blue x 3; green x 4; red x 2                    green x 1 0.577777777777778
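With the full table available, choosing the root split programmatically is just a matter of taking the first row. A minimal sketch of how one might extract it (not part of the original post; note that because c() coerces everything to character, Threshold must be converted back to a number, and as.character guards against factor columns in older R versions):

all_splits <- Gini_impurity_for_all_possible_branches_of_all_variables(data, variables, class="color")
best <- all_splits[1, ]
best_variable  <- as.character(best[["Variable"]])               # "x"
best_threshold <- as.numeric(as.character(best[["Threshold"]]))  # 1.95

# Partition the data on the chosen split; a full tree would now
# recurse on each child node until the branches are pure
left_node  <- data[data[[best_variable]] <  best_threshold, ]
right_node <- data[data[[best_variable]] >= best_threshold, ]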