[Machine Learning Fundamentals] Mathematical Derivation + Pure Python Implementation of ML Algorithms 14: Ridge Regression
Python Machine Learning Algorithm Implementations
Author: louwill
In the previous section we covered the Lasso regression model for preventing overfitting, that is, linear regression with L1 regularization. In this section we turn to the linear regression model based on L2 regularization.
L2 Regularization
Compared with L0 and L1, L2 is really the favored choice for regularization: in most overfitting-prevention and regularization settings, it is the first candidate. The L2 norm of a parameter matrix is the square root of the sum of the squares of its elements. Regularizing with the L2 norm works by shrinking every element of the parameter matrix toward zero, making it very small but, unlike L1, not exactly zero. You may ask: why does making every parameter small prevent overfitting? Take a deep neural network as an example. With L2 regularization, if the regularization coefficient is large, every element of the weight matrix W becomes small, so the linear combination Z also becomes small, and the activation function then operates in its roughly linear region. This greatly reduces the effective complexity of the network and therefore helps prevent overfitting.
The loss function of linear regression with L2 regularization is shown below: the first term is the MSE loss, and the second term is the L2 regularization penalty.
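The original formula image is not reproduced here; written out to match the `l2_loss` implementation below, the loss is

$$L(w, b) = \frac{1}{N}\sum_{i=1}^{N}\bigl(y_i - (w^{\top}x_i + b)\bigr)^2 + \alpha\,\lVert w\rVert_2^2,$$

where $N$ is the number of training samples and $\alpha$ is the regularization coefficient (the `alpha` argument in the code).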
Compared with L1 regularization, computing the gradient under L2 regularization is simpler: we just differentiate the loss directly with respect to w. This L2-regularized regression model is the well-known ridge regression (Ridge Regression).
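The gradient image is likewise missing; differentiating the loss above gives

$$\frac{\partial L}{\partial w} = \frac{2}{N}X^{\top}(\hat{y} - y) + 2\alpha w, \qquad \frac{\partial L}{\partial b} = \frac{2}{N}\sum_{i=1}^{N}(\hat{y}_i - y_i).$$

Note that the implementation below drops the factor of 2 on the data term of both `dw` and `db`; this only rescales the effective learning rate.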
Ridge
With the code framework from the previous section in place, we only need to modify the loss function and the gradient computation. The code is below.
Import the required modules:
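(The import cell was not captured in this copy; judging from the code that follows, it is likely just:)

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
```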
Read in the example data and split it into training and test sets:
```python
data = pd.read_csv('./abalone.csv')
data['Sex'] = data['Sex'].map({'M': 0, 'F': 1, 'I': 2})
X = data.drop(['Rings'], axis=1)
y = data[['Rings']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
X_train, X_test, y_train, y_test = X_train.values, X_test.values, y_train.values, y_test.values
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
```

Initialize the model parameters:
```python
# Parameter initialization function
def initialize(dims):
    w = np.zeros((dims, 1))
    b = 0
    return w, b
```

Define the L2 loss function and gradient computation:
```python
# Ridge (L2-regularized) loss function and gradients
def l2_loss(X, y, w, b, alpha):
    num_train = X.shape[0]
    num_feature = X.shape[1]
    y_hat = np.dot(X, w) + b
    loss = np.sum((y_hat - y) ** 2) / num_train + alpha * np.sum(np.square(w))
    dw = np.dot(X.T, (y_hat - y)) / num_train + 2 * alpha * w
    db = np.sum((y_hat - y)) / num_train
    return y_hat, loss, dw, db
```

Define the Ridge training procedure:
```python
# Training procedure
def ridge_train(X, y, learning_rate=0.001, epochs=5000):
    loss_list = []
    w, b = initialize(X.shape[1])
    for i in range(1, epochs):
        y_hat, loss, dw, db = l2_loss(X, y, w, b, 0.1)
        w += -learning_rate * dw
        b += -learning_rate * db
        loss_list.append(loss)
        if i % 100 == 0:
            print('epoch %d loss %f' % (i, loss))
    params = {
        'w': w,
        'b': b
    }
    grads = {
        'dw': dw,
        'db': db
    }
    return loss, loss_list, params, grads
```

Run a training example:
```python
# Run the training example
loss, loss_list, params, grads = ridge_train(X_train, y_train, 0.01, 1000)
```

The learned model parameters:
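(The parameter printout is not reproduced in this copy; the dictionary returned by `ridge_train` can be inspected directly:)

```python
# Inspect the learned weights and bias
print(params['w'], params['b'])
```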
Define the model prediction function:
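(The code cell for this step was not captured here; it mirrors the `predict` method of the consolidated class at the end of the post:)

```python
# Model prediction function (same as Ridge.predict in the class below)
def predict(X, params):
    w = params['w']
    b = params['b']
    y_pred = np.dot(X, w) + b
    return y_pred

y_pred = predict(X_test, params)
```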
Plot the test data against the model predictions:
```python
# Simple plot of test targets vs. model predictions
import matplotlib.pyplot as plt

f = X_test.dot(params['w']) + params['b']
plt.scatter(range(X_test.shape[0]), y_test)
plt.plot(f, color='darkorange')
plt.xlabel('X')
plt.ylabel('y')
plt.show()
```

As the plot shows, the model fits the extreme high and low values poorly but captures most of the data well. A model like this has relatively strong generalization ability and does not suffer from severe overfitting.
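As a rough check, which is not part of the original post, the test-set MSE can also be computed directly:

```python
# Mean squared error on the held-out test set
y_pred = X_test.dot(params['w']) + params['b']
print('test MSE: %f' % np.mean((y_pred - y_test) ** 2))
```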
Finally, wrap everything up in a simple class:
```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split


class Ridge():
    def __init__(self):
        pass

    def prepare_data(self):
        data = pd.read_csv('./abalone.csv')
        data['Sex'] = data['Sex'].map({'M': 0, 'F': 1, 'I': 2})
        X = data.drop(['Rings'], axis=1)
        y = data[['Rings']]
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
        X_train, X_test, y_train, y_test = X_train.values, X_test.values, y_train.values, y_test.values
        return X_train, y_train, X_test, y_test

    def initialize(self, dims):
        w = np.zeros((dims, 1))
        b = 0
        return w, b

    def l2_loss(self, X, y, w, b, alpha):
        num_train = X.shape[0]
        num_feature = X.shape[1]
        y_hat = np.dot(X, w) + b
        loss = np.sum((y_hat - y) ** 2) / num_train + alpha * np.sum(np.square(w))
        dw = np.dot(X.T, (y_hat - y)) / num_train + 2 * alpha * w
        db = np.sum((y_hat - y)) / num_train
        return y_hat, loss, dw, db

    def ridge_train(self, X, y, learning_rate=0.01, epochs=1000):
        loss_list = []
        w, b = self.initialize(X.shape[1])
        for i in range(1, epochs):
            y_hat, loss, dw, db = self.l2_loss(X, y, w, b, 0.1)
            w += -learning_rate * dw
            b += -learning_rate * db
            loss_list.append(loss)
            if i % 100 == 0:
                print('epoch %d loss %f' % (i, loss))
        params = {
            'w': w,
            'b': b
        }
        grads = {
            'dw': dw,
            'db': db
        }
        return loss, loss_list, params, grads

    def predict(self, X, params):
        w = params['w']
        b = params['b']
        y_pred = np.dot(X, w) + b
        return y_pred


if __name__ == '__main__':
    ridge = Ridge()
    X_train, y_train, X_test, y_test = ridge.prepare_data()
    loss, loss_list, params, grads = ridge.ridge_train(X_train, y_train, 0.01, 1000)
    print(params)
```

sklearn also provides an implementation of Ridge:
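(The sklearn snippet itself was not captured in this copy; a minimal sketch using `sklearn.linear_model.Ridge` would look roughly like this. Note that sklearn's `alpha` is not directly comparable to the `alpha` used above, because sklearn does not divide the squared-error term by the number of samples.)

```python
from sklearn.linear_model import Ridge as SkRidge
from sklearn.metrics import mean_squared_error

clf = SkRidge(alpha=0.1)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print('sklearn Ridge test MSE: %f' % mean_squared_error(y_test, y_pred))
```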
That wraps up this section. In the next section we move on to tree models, focusing on ensemble learning and the GBDT family.
More material is available at the author's GitHub:
https://github.com/luwill/machine-learning-code-writing
The code is still fairly rough; comments and corrections are very welcome.
References:
Hoerl, A. E., & Kennard, R. W. (1970). Ridge Regression: Biased Estimation for Nonorthogonal Problems. Technometrics.
Previous posts in this series:
Mathematical Derivation + Pure Python Implementation of ML Algorithms 13: Lasso Regression
Mathematical Derivation + Pure Python Implementation of ML Algorithms 12: Bayesian Networks
Mathematical Derivation + Pure Python Implementation of ML Algorithms 11: Naive Bayes
Mathematical Derivation + Pure Python Implementation of ML Algorithms 10: Linearly Non-Separable Support Vector Machines
Mathematical Derivation + Pure Python Implementation of ML Algorithms 8-9: Linearly Separable and Linear Support Vector Machines
Mathematical Derivation + Pure Python Implementation of ML Algorithms 7: Neural Networks
Mathematical Derivation + Pure Python Implementation of ML Algorithms 6: Perceptron
Mathematical Derivation + Pure Python Implementation of ML Algorithms 5: Decision Trees (CART)
Mathematical Derivation + Pure Python Implementation of ML Algorithms 4: Decision Trees (ID3)
Mathematical Derivation + Pure Python Implementation of ML Algorithms 3: k-Nearest Neighbors
Mathematical Derivation + Pure Python Implementation of ML Algorithms 2: Logistic Regression
Mathematical Derivation + Pure Python Implementation of ML Algorithms 1: Linear Regression