Neural Network Comparison
1. Two classes only
```python
'''
11-line neural network ①
Fixed three layers, two classes
'''
# Only works for the two classes 0 and 1; convert labels first otherwise.
import numpy as np

X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([0,1,1,0]).reshape(-1,1)  # reshape keeps the implementation concise

wi = 2*np.random.randn(3,5) - 1  # input -> hidden weights
wh = 2*np.random.randn(5,1) - 1  # hidden -> output weights

for j in range(10000):
    li = X
    lh = 1/(1+np.exp(-(np.dot(li,wi))))    # hidden layer (sigmoid)
    lo = 1/(1+np.exp(-(np.dot(lh,wh))))    # output layer (sigmoid)
    lo_delta = (y - lo)*(lo*(1-lo))        # output error * sigmoid derivative
    lh_delta = np.dot(lo_delta, wh.T) * (lh * (1-lh))
    wh += np.dot(lh.T, lo_delta)
    wi += np.dot(li.T, lh_delta)

print('Training result:', lo)
```

```python
'''
11-line neural network ①
Variable number of layers, two classes
'''
# Only works for the two classes 0 and 1; convert labels first otherwise.
import numpy as np

X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
y = np.array([0,1,1,0]).reshape(-1,1)  # reshape keeps the implementation concise

neurals = [3,15,1]  # layer sizes
w = [np.random.randn(i,j) for i,j in zip(neurals[:-1], neurals[1:])] + [None]
l = [None] * len(neurals)
l_delta = [None] * len(neurals)

for j in range(1000):
    l[0] = X
    for i in range(1, len(neurals)):       # forward pass (sigmoid)
        l[i] = 1 / (1 + np.exp(-(np.dot(l[i-1], w[i-1]))))
    l_delta[-1] = (y - l[-1]) * (l[-1] * (1 - l[-1]))
    for i in range(len(neurals)-2, 0, -1): # back-propagate errors
        l_delta[i] = np.dot(l_delta[i+1], w[i].T) * (l[i] * (1 - l[i]))
    for i in range(len(neurals)-2, -1, -1):  # update weights
        w[i] += np.dot(l[i].T, l_delta[i+1])

print('Training result:', l[-1])
```
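Both two-class versions assume the labels are already 0/1, as the comment notes. A minimal sketch of that conversion step, for arbitrary two-valued labels; the helper name `to_binary_labels` is my own, not from the original post:

```python
import numpy as np

# Hypothetical helper: map two arbitrary class labels to 0/1
# so they fit the binary networks above.
def to_binary_labels(labels):
    classes = np.unique(labels)  # the two distinct labels, sorted
    if len(classes) != 2:
        raise ValueError('expected exactly two classes')
    y = (np.asarray(labels) == classes[1]).astype(int)
    return y.reshape(-1, 1), classes

y, classes = to_binary_labels(['cat', 'dog', 'dog', 'cat'])
print(y.ravel())  # [0 1 1 0]
print(classes)    # ['cat' 'dog']
```

The returned `classes` array can be used afterwards to map the network's 0/1 predictions back to the original label names.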
2. Multiple classes
```python
'''
11-line neural network ①
Fixed three layers, multiple classes
'''
import numpy as np

X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
#y = np.array([0,1,1,0]) # two classes also works
y = np.array([0,1,2,3])  # multiple classes

wi = np.random.randn(3,5)
wh = np.random.randn(5,4)  # changed: one output column per class
bh = np.random.randn(1,5)
bo = np.random.randn(1,4)  # changed: one bias per class
epsilon = 0.01  # learning rate
lamda = 0.01    # regularization strength

for j in range(1000):
    li = X
    lh = np.tanh(np.dot(li, wi) + bh)  # tanh activation
    lo = np.exp(np.dot(lh, wh) + bo)
    probs = lo / np.sum(lo, axis=1, keepdims=True)  # softmax
    # Back-propagation
    lo_delta = np.copy(probs)
    lo_delta[range(X.shape[0]), y] -= 1
    lh_delta = np.dot(lo_delta, wh.T) * (1 - np.power(lh, 2))
    # Update the weights and biases
    wh -= epsilon * (np.dot(lh.T, lo_delta) + lamda * wh)
    wi -= epsilon * (np.dot(li.T, lh_delta) + lamda * wi)
    bo -= epsilon * np.sum(lo_delta, axis=0, keepdims=True)
    bh -= epsilon * np.sum(lh_delta, axis=0)

print('Training result:', np.argmax(probs, axis=1))
```

```python
'''
11-line neural network ①
Variable number of layers, multiple classes
'''
import numpy as np

X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
#y = np.array([0,1,1,0]) # two classes also works
y = np.array([0,1,2,3])  # multiple classes

neurals = [3, 10, 8, 4]  # layer sizes
w = [np.random.randn(i,j) for i,j in zip(neurals[:-1], neurals[1:])] + [None]
b = [None] + [np.random.randn(1,j) for j in neurals[1:]]
l = [None] * len(neurals)
l_delta = [None] * len(neurals)
epsilon = 0.01  # learning rate
lamda = 0.01    # regularization strength

for j in range(1000):
    # Forward pass
    l[0] = X
    for i in range(1, len(neurals)-1):
        l[i] = np.tanh(np.dot(l[i-1], w[i-1]) + b[i])  # tanh activation
    l[-1] = np.exp(np.dot(l[-2], w[-2]) + b[-1])
    probs = l[-1] / np.sum(l[-1], axis=1, keepdims=True)  # softmax
    # Back-propagation
    l_delta[-1] = np.copy(probs)
    l_delta[-1][range(X.shape[0]), y] -= 1
    for i in range(len(neurals)-2, 0, -1):
        l_delta[i] = np.dot(l_delta[i+1], w[i].T) * (1 - np.power(l[i], 2))  # tanh derivative
    # Update the weights and biases
    b[-1] -= epsilon * np.sum(l_delta[-1], axis=0, keepdims=True)
    for i in range(len(neurals)-2, -1, -1):
        w[i] -= epsilon * (np.dot(l[i].T, l_delta[i+1]) + lamda * w[i])
        if i == 0: break
        b[i] -= epsilon * np.sum(l_delta[i], axis=0)
    # Print the loss
    if j % 100 == 0:
        loss = np.sum(-np.log(probs[range(X.shape[0]), y]))
        loss += lamda/2 * np.sum([np.sum(np.square(wi)) for wi in w[:-1]])  # optional
        loss *= 1/X.shape[0]  # optional
        print('loss:', loss)

print('Training result:', np.argmax(probs, axis=1))
```
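The key trick shared by both multi-class versions is the line `lo_delta[range(X.shape[0]), y] -= 1`: for a softmax output trained with cross-entropy loss, the gradient with respect to the pre-softmax scores is just the predicted probabilities minus a one-hot encoding of the true class. A small numerical check of that identity (the function names here are mine, not from the original post):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(z, y):
    probs = softmax(z)
    return -np.log(probs[range(len(y)), y]).sum()

rng = np.random.default_rng(0)
z = rng.standard_normal((4, 4))   # scores for 4 samples, 4 classes
y = np.array([0, 1, 2, 3])        # true class per sample

# Analytic gradient: probabilities, with 1 subtracted at the true class.
grad = softmax(z)
grad[range(len(y)), y] -= 1

# Numerical gradient by central differences.
num = np.zeros_like(z)
eps = 1e-5
for i in range(z.shape[0]):
    for k in range(z.shape[1]):
        zp, zm = z.copy(), z.copy()
        zp[i, k] += eps
        zm[i, k] -= eps
        num[i, k] = (cross_entropy(zp, y) - cross_entropy(zm, y)) / (2 * eps)

print(np.allclose(grad, num, atol=1e-6))  # True
```

This is why the code never forms a one-hot matrix explicitly: copying `probs` and subtracting 1 at the true-class positions produces the same gradient in place.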
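Once trained, the variable-layer multi-class network needs only its forward pass to classify new inputs. A sketch under the same layer layout as above; the `predict` helper is my own addition, and the weights here are random placeholders just to show the shapes involved:

```python
import numpy as np

# Hypothetical helper: forward pass of the variable-layer multi-class
# network, returning the predicted class index for each row of X.
def predict(X, w, b, neurals):
    a = X
    for i in range(1, len(neurals) - 1):
        a = np.tanh(np.dot(a, w[i-1]) + b[i])            # hidden layers: tanh
    scores = np.exp(np.dot(a, w[-2]) + b[-1])            # output layer
    probs = scores / np.sum(scores, axis=1, keepdims=True)  # softmax
    return np.argmax(probs, axis=1)

# Placeholder (untrained) parameters with the same structure as the code above.
neurals = [3, 10, 8, 4]
w = [np.random.randn(i, j) for i, j in zip(neurals[:-1], neurals[1:])] + [None]
b = [None] + [np.random.randn(1, j) for j in neurals[1:]]
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]])
print(predict(X, w, b, neurals))  # four class indices, each in 0..3
```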
This article is reproduced from Luo Bing's cnblogs blog. Original link: http://www.cnblogs.com/hhh5460/p/5324748.html. To reprint, please contact the original author.