【Python-ML】Neural Network Activation Functions: the Hyperbolic Tangent (tanh)
# -*- coding: utf-8 -*-
'''
Created on 2018-01-27
@author: Jason.F
@summary: Feedforward neural network activation function: the hyperbolic tangent
(tanh). It is a rescaled logistic function with a wider output range, the open
interval (-1, 1), which helps the backpropagation algorithm converge faster.
'''
import numpy as np
import time
import matplotlib.pyplot as plt

if __name__ == "__main__":
    start = time.perf_counter()  # time.clock() was removed in Python 3.8

    def tanh(z):
        e_p = np.exp(z)
        e_m = np.exp(-z)
        return (e_p - e_m) / (e_p + e_m)

    def net_input(X, w):
        return X.dot(w)

    def logistic(z):
        return 1.0 / (1.0 + np.exp(-z))

    # W: array, shape=[n_output_units, n_hidden_units+1],
    # weight matrix for the hidden layer -> output layer.
    # Note that the first column corresponds to the bias units.
    W = np.array([[1.1, 1.2, 1.3, 0.5],
                  [0.1, 0.2, 0.4, 0.1],
                  [0.2, 0.5, 2.1, 1.9]])
    # A: array, shape=[n_hidden+1, n_samples], activations of the hidden layer.
    # Note that the first element (A[0][0]=1) is the bias unit.
    A = np.array([[1.0], [0.1], [0.3], [0.7]])
    # Z: array, shape=[n_output_units, n_samples], net input of the output layer.
    Z = W.dot(A)
    y_probas = tanh(Z)
    print('Probabilities:\n', y_probas)
    print(y_probas.sum())
    y_class = np.argmax(Z, axis=0)
    print('predicted class label: %d' % y_class[0])

    # Compare tanh with the logistic function
    z = np.arange(-5, 5, 0.005)
    log_act = logistic(z)
    tanh_act = tanh(z)
    plt.ylim([-1.5, 1.5])
    plt.xlabel('net input $z$')
    plt.ylabel(r'activation $\phi(z)$')
    plt.axhline(1, color='black', linestyle='--')
    plt.axhline(0.5, color='black', linestyle='--')
    plt.axhline(0, color='black', linestyle='--')
    plt.axhline(-1, color='black', linestyle='--')
    plt.plot(z, tanh_act, linewidth=2, color='black', label='tanh')
    plt.plot(z, log_act, linewidth=2, color='lightgreen', label='logistic')
    plt.legend(loc='lower right')
    plt.tight_layout()
    plt.show()

    end = time.perf_counter()
    print('finish all in %s' % str(end - start))
Result:

Probabilities:
 [[0.96108983]
 [0.3004371 ]
 [0.97621774]]
2.23774466472
predicted class label: 2
finish all in 14.5718582269
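The claim that tanh helps backpropagation converge relates to its gradient: tanh'(z) = 1 − tanh²(z), which peaks at 1 at z = 0, whereas the logistic function's gradient peaks at only 0.25. A small sketch (an addition, not from the original post) checking the analytic derivative against a central finite difference:

```python
import numpy as np

z = np.linspace(-4, 4, 81)
analytic = 1.0 - np.tanh(z) ** 2                       # tanh'(z) = 1 - tanh(z)^2
h = 1e-6
numeric = (np.tanh(z + h) - np.tanh(z - h)) / (2 * h)  # central difference
assert np.allclose(analytic, numeric, atol=1e-8)
# the gradient peaks at z = 0 with value 1 (z[40] == 0 on this grid)
assert abs(analytic[40] - 1.0) < 1e-12
print('derivative identity verified')
```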