Linear Regression with Gradient Descent: Two-Dimensional and Multidimensional Cases
Two-dimensional case: y = theta0 + theta1*x
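For reference (this restates in standard notation what the code below builds symbolically), the squared-error loss and the gradient-descent update rule are:

J(\theta_0, \theta_1) = \frac{1}{2} \sum_{i=1}^{m} \left(\theta_0 + \theta_1 x_i - y_i\right)^2,
\qquad
\theta_j \leftarrow \theta_j - \alpha \, \frac{\partial J}{\partial \theta_j}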
Result: theta0 = 2.57549789814787, theta1 = 0.613000580602551
The code is as follows:
from sympy import *
import math

X = [1.5, 2, 1.5, 2, 3, 3, 3.5, 3.5, 4, 4, 5, 5]
Y = [3, 3.2, 4, 4.5, 4, 5, 4.2, 4.5, 5, 5.5, 4.8, 6.5]

J = 0  # loss function
theta0, theta1 = symbols('theta0, theta1')  # define the theta parameters

for i in range(len(X)):  # build the loss function
    J += (theta0 + X[i]*theta1 - Y[i])**2
J *= 0.5
# print(J)

alpha = 0.01  # learning rate
epsilon = 0.0000000000001  # stop when the loss changes by less than this between iterations

dtheta0 = diff(J, theta0)  # partial derivative with respect to theta0
dtheta1 = diff(J, theta1)  # partial derivative with respect to theta1
print('dtheta0 =', dtheta0)
print('dtheta1 =', dtheta1)

theta0 = 0  # initialize the theta parameters
theta1 = 0
while True:
    last0 = theta0
    last1 = theta1
    # Note: the updates are sequential, so theta1's step already sees
    # the freshly updated theta0 (a Gauss-Seidel-style variant).
    theta0 -= alpha * dtheta0.subs({'theta0': theta0, 'theta1': theta1})
    theta1 -= alpha * dtheta1.subs({'theta0': theta0, 'theta1': theta1})
    a = J.subs({'theta0': theta0, 'theta1': theta1})  # current loss
    b = J.subs({'theta0': last0, 'theta1': last1})    # previous loss
    print("{}, {}, {}, {}".format(theta0, theta1, a, b))
    if math.fabs(a - b) < epsilon:
        break

print("{}, {}".format(theta0, theta1))
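As a sanity check (not part of the original post), the same fit can be obtained in closed form. A minimal sketch, assuming NumPy is available:

import numpy as np

X = [1.5, 2, 1.5, 2, 3, 3, 3.5, 3.5, 4, 4, 5, 5]
Y = [3, 3.2, 4, 4.5, 4, 5, 4.2, 4.5, 5, 5.5, 4.8, 6.5]

# A degree-1 polynomial fit returns coefficients highest degree first,
# i.e. [theta1, theta0]; the values should closely match the
# gradient-descent result of about 0.613 and 2.575.
theta1, theta0 = np.polyfit(X, Y, 1)
print(theta0, theta1)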
Multidimensional case: y[i] = theta0 + theta1*x[i][0] + theta2*x[i][1] + theta3*x[i][2]
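In the multivariate case the partial derivatives take the standard form (the diff calls in the code below compute exactly these expressions):

\frac{\partial J}{\partial \theta_0} = \sum_{i=1}^{m} \big(h_\theta(x_i) - y_i\big),
\qquad
\frac{\partial J}{\partial \theta_j} = \sum_{i=1}^{m} \big(h_\theta(x_i) - y_i\big)\, x_{i,\,j-1} \quad (j = 1, 2, 3),

where h_\theta(x_i) = \theta_0 + \theta_1 x_{i,0} + \theta_2 x_{i,1} + \theta_3 x_{i,2}.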
Result: theta0 = 50.3097802023958, theta1 = 47.7942911922764, theta2 = -13.0287743334236, theta3 = 1.13282147172682
The code is as follows:
from sympy import *
import math

X = [(1, 0., 3), (1, 1., 3), (1, 2., 3), (1, 3., 2), (1, 4., 4)]
Y = [95.364, 97.217205, 75.195834, 60.105519, 49.342380]

J = 0  # loss function
theta0, theta1, theta2, theta3 = symbols('theta0, theta1, theta2, theta3')  # theta parameters

for i in range(len(X)):  # build the loss function
    J += (theta0 + X[i][0]*theta1 + X[i][1]*theta2 + X[i][2]*theta3 - Y[i])**2
J *= 0.5
# print(J)

alpha = 0.01  # learning rate
epsilon = 0.0000000000001  # stop when the loss changes by less than this between iterations

dtheta0 = diff(J, theta0)  # partial derivative with respect to theta0
dtheta1 = diff(J, theta1)  # partial derivative with respect to theta1
dtheta2 = diff(J, theta2)  # partial derivative with respect to theta2
dtheta3 = diff(J, theta3)  # partial derivative with respect to theta3
print('dtheta0 =', dtheta0)
print('dtheta1 =', dtheta1)
print('dtheta2 =', dtheta2)
print('dtheta3 =', dtheta3)

theta0 = 0  # initialize the theta parameters
theta1 = 0
theta2 = 0
theta3 = 0
while True:
    last0 = theta0
    last1 = theta1
    last2 = theta2
    last3 = theta3
    # As in the 2D version, each update uses the freshly updated values
    # of the parameters before it (a Gauss-Seidel-style variant).
    theta0 -= alpha * dtheta0.subs({'theta0': theta0, 'theta1': theta1, 'theta2': theta2, 'theta3': theta3})
    theta1 -= alpha * dtheta1.subs({'theta0': theta0, 'theta1': theta1, 'theta2': theta2, 'theta3': theta3})
    theta2 -= alpha * dtheta2.subs({'theta0': theta0, 'theta1': theta1, 'theta2': theta2, 'theta3': theta3})
    theta3 -= alpha * dtheta3.subs({'theta0': theta0, 'theta1': theta1, 'theta2': theta2, 'theta3': theta3})
    a = J.subs({'theta0': theta0, 'theta1': theta1, 'theta2': theta2, 'theta3': theta3})  # current loss
    b = J.subs({'theta0': last0, 'theta1': last1, 'theta2': last2, 'theta3': last3})      # previous loss
    print("{}, {}, {}, {}, {}, {}".format(theta0, theta1, theta2, theta3, a, b))
    if math.fabs(a - b) < epsilon:
        break

print("{}, {}, {}, {}".format(theta0, theta1, theta2, theta3))
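One caveat worth noting: in this dataset X[i][0] is always 1, so the intercept theta0 and theta1 multiply identical columns and only their sum theta0 + theta1 is identifiable; different solvers (or update orders) can split the two differently while producing the same predictions. A closed-form check, assuming NumPy is available (a sketch, not part of the original post):

import numpy as np

X = [(1, 0., 3), (1, 1., 3), (1, 2., 3), (1, 3., 2), (1, 4., 4)]
Y = [95.364, 97.217205, 75.195834, 60.105519, 49.342380]

# Design matrix with an explicit intercept column: [1, x0, x1, x2].
A = np.array([[1.0, x0, x1, x2] for (x0, x1, x2) in X])
y = np.array(Y)

# Minimum-norm least-squares solution (A is rank-deficient here,
# since the intercept column and the x0 column are both all ones).
theta, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(theta)                 # theta0 and theta1 may split differently than the post,
print(theta[0] + theta[1])   # but their sum should match about 50.31 + 47.79
print(A @ theta)             # and the fitted values should agree either way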
Summary

Both examples follow the same recipe: build the squared-error loss J symbolically with SymPy, obtain the gradients with diff, and iterate the descent updates with learning rate alpha until the loss changes by less than epsilon between iterations.