chatbot1_2: A Simple RNN Implementation
How do we handle embeddings for polysemous words?
- Learn one vector per sense and superimpose them; in the appropriate subspace, each sense vector stays close to vectors with the same meaning.
How do we identify and learn vectors for phrases?
- Words that co-occur together often enough are treated as a phrase.
How do we handle words never seen in training?
- Average over the context, i.e. guess the new word's vector from its surrounding words (see the sketch below).
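A minimal sketch of the context-average idea (the function name and data layout here are illustrative, not from the notes):

```python
import numpy as np

def embed_unknown(context_words, embeddings):
    """Guess a vector for an out-of-vocabulary word by averaging the
    embeddings of the known words that appear around it."""
    vecs = [embeddings[w] for w in context_words if w in embeddings]
    if not vecs:  # nothing known about the context
        dim = len(next(iter(embeddings.values())))
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```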
How much data does unsupervised training of word vectors need?
- GloVe
- Word2vec
How do we refine the vectors using existing structured data?
Retrofitting
- The gray nodes are vectors learned by unsupervised training; the white nodes are words and their relations from a knowledge base. The retrofitted (white) vectors are trained to stay as close as possible to the gray ones while strengthening or weakening the relations recorded in the knowledge base, yielding new vectors that combine both kinds of features (a minimal sketch of this update follows the list).
- ConceptNet embeddings
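A small numpy sketch of this retrofitting update (following Faruqui et al.'s retrofitting formulation; uniform weights are a simplifying assumption here):

```python
import numpy as np

def retrofit(word_vecs, lexicon, n_iters=10, alpha=1.0, beta=1.0):
    """word_vecs: dict word -> pre-trained vector (the 'gray' vectors).
    lexicon: dict word -> list of related words from the knowledge base
    (the 'white' edges). Each retrofitted vector is pulled toward its
    knowledge-base neighbours while staying close to the original."""
    new_vecs = {w: v.copy() for w, v in word_vecs.items()}
    for _ in range(n_iters):
        for word, neighbours in lexicon.items():
            neighbours = [n for n in neighbours if n in new_vecs]
            if word not in word_vecs or not neighbours:
                continue
            # weighted average of the original vector and the current
            # vectors of the knowledge-base neighbours
            num = alpha * word_vecs[word] + beta * sum(new_vecs[n] for n in neighbours)
            new_vecs[word] = num / (alpha + beta * len(neighbours))
    return new_vecs
```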
RNN
https://r2rt.com/recurrent-neural-networks-in-tensorflow-i.html
demo1
- For this toy task we can compute, in theory, the loss that a reasonably accurate model and a perfectly accurate model would achieve, and compare the trained model against these theoretical baselines.
- The input is a simple sequence of 0s and 1s; although arranged as a sequence, the elements are independent of one another.
- The output sequence is not completely independent: it carries information related to the input. Each output node depends on the input sequence; the probability of a node being 0 or 1 is determined by a base probability plus the input values at positions t-3 and t-8, as shown in the figure.
```python
import numpy as np
import tensorflow as tf
%matplotlib inline
import matplotlib.pyplot as plt

# Global config variables
num_steps = 5       # number of truncated backprop steps ('n' in the discussion above)
batch_size = 200
num_classes = 2
state_size = 4
learning_rate = 0.1

def gen_data(size=1000000):
    """Generate synthetic sequence data.
    :param size: total length of the input and output sequences
    :return: X, Y: input and output sequences as rank-1 numpy arrays (i.e. vectors)
    """
    X = np.array(np.random.choice(2, size=(size,)))
    Y = []
    for i in range(size):
        threshold = 0.5
        if X[i-3] == 1:
            threshold += 0.5
        if X[i-8] == 1:
            threshold -= 0.25
        if np.random.rand() > threshold:
            Y.append(0)
        else:
            Y.append(1)
    return X, np.array(Y)
```
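The training loop at the end of these notes calls `gen_epochs`, which is not reproduced above. A sketch roughly following the linked r2rt tutorial: the raw data is stacked into `batch_size` rows, then sliced into `num_steps`-wide windows for truncated backprop.

```python
def gen_batch(raw_data, batch_size, num_steps):
    raw_x, raw_y = raw_data
    data_length = len(raw_x)

    # partition the raw data into batch_size rows
    batch_partition_length = data_length // batch_size
    data_x = np.zeros([batch_size, batch_partition_length], dtype=np.int32)
    data_y = np.zeros([batch_size, batch_partition_length], dtype=np.int32)
    for i in range(batch_size):
        data_x[i] = raw_x[batch_partition_length * i:batch_partition_length * (i + 1)]
        data_y[i] = raw_y[batch_partition_length * i:batch_partition_length * (i + 1)]

    # further divide each row into num_steps-wide windows for truncated backprop
    epoch_size = batch_partition_length // num_steps
    for i in range(epoch_size):
        x = data_x[:, i * num_steps:(i + 1) * num_steps]
        y = data_y[:, i * num_steps:(i + 1) * num_steps]
        yield (x, y)

def gen_epochs(n, num_steps):
    for i in range(n):
        yield gen_batch(gen_data(), batch_size, num_steps)
```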
Calculate the perplexity
- In (3), the perfect model takes both the t-3 and the t-8 dependencies into account.
- In (4), the imperfect model ignores the time dependencies.
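As a cross-check (a sketch added here, not part of the original notes), the expected per-step cross-entropy of these two model families can be computed directly from the thresholds in `gen_data`:

```python
import numpy as np

def cross_entropy(p, q):
    """Expected negative log-likelihood (in nats) when the true P(Y=1) is p
    but the model predicts q."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(p * np.log(q) + (1 - p) * np.log(1 - q))

# P(Y=1) for the four equally likely (X[t-3], X[t-8]) combinations,
# read off the thresholds used in gen_data above:
# (0,0) -> 0.5, (1,0) -> 1.0, (0,1) -> 0.25, (1,1) -> 0.75
p_true = np.array([0.5, 1.0, 0.25, 0.75])

# Perfect model: predicts the true conditional probability in every case
loss_perfect = np.mean(cross_entropy(p_true, p_true))        # ~0.45 nats

# Model ignoring both dependencies: can only predict the marginal rate
q_marginal = p_true.mean()                                    # 0.625
loss_ignorant = np.mean(cross_entropy(p_true, q_marginal))    # ~0.66 nats

print(loss_perfect, loss_ignorant)
```

A model that captures both dependencies should approach the first number; one that captures neither is stuck near the second.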
A simple RNN model implementation
- Each cell passes its own state as one input into the next cell, and that cell also takes its own observation (the current input) as the other part of its input. The two inputs are concatenated and mapped into a four-dimensional state vector, which is then further mapped to a one-dimensional output. This is how an RNN, given an input, predicts an output; a sketch of such a cell follows.
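A sketch of the cell, roughly as in the linked r2rt tutorial (its variables live in the `rnn_cell` variable scope that the unrolling code below reuses):

```python
with tf.variable_scope('rnn_cell'):
    W = tf.get_variable('W', [num_classes + state_size, state_size])
    b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))

def rnn_cell(rnn_input, state):
    # Concatenate the one-hot input and the previous state, then map the
    # result to the new state_size-dimensional state with a tanh layer.
    with tf.variable_scope('rnn_cell', reuse=True):
        W = tf.get_variable('W', [num_classes + state_size, state_size])
        b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))
    return tf.tanh(tf.matmul(tf.concat([rnn_input, state], 1), W) + b)
```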
#### TensorFlow tip: tf.placeholder ####
- External input data: tf.placeholder
- Intermediate data: tf.Tensor, the output of a TensorFlow operation
- Parameters: tf.Variable
e.g. init_state, state, final_state (see the sketch below)
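The placeholders and inputs that the unrolling code below assumes, roughly as in the linked tutorial:

```python
x = tf.placeholder(tf.int32, [batch_size, num_steps], name='input_placeholder')
y = tf.placeholder(tf.int32, [batch_size, num_steps], name='labels_placeholder')
init_state = tf.zeros([batch_size, state_size])

# Turn the integer inputs into a list of num_steps one-hot tensors of shape
# [batch_size, num_classes], one per time step
x_one_hot = tf.one_hot(x, num_classes)
rnn_inputs = tf.unstack(x_one_hot, axis=1)
```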
""" Adding rnn_cells to graphThis is a simplified version of the "static_rnn" function from Tensorflow's api. See: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/rnn/python/ops/core_rnn.py#L41 Note: In practice, using "dynamic_rnn" is a better choice that the "static_rnn": https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/rnn.py#L390 """ state = init_state#初始化的狀態,對第一個rnn單元,我們也要給他一個初始狀態,一般是0 rnn_outputs = [] for rnn_input in rnn_inputs:state = rnn_cell(rnn_input, state)rnn_outputs.append(state) final_state = rnn_outputs[-1] """ Predictions, loss, training stepLosses is similar to the "sequence_loss" function from Tensorflow's API, except that here we are using a list of 2D tensors, instead of a 3D tensor. See: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/seq2seq/python/ops/loss.py#L30 """#logits and predictions with tf.variable_scope('softmax'):W = tf.get_variable('W', [state_size, num_classes])b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0)) logits = [tf.matmul(rnn_output, W) + b for rnn_output in rnn_outputs] predictions = [tf.nn.softmax(logit) for logit in logits]# Turn our y placeholder into a list of labels y_as_list = tf.unstack(y, num=num_steps, axis=1)#losses and train_step # """ # 計算損失函數,定義優化器 # 從每一個time frame 的hidden state # 映射到每個time frame的最終output(prediction) # 和cbow或者skip_gram的最上層相同 # Predictions, loss, training step # Losses is similar to the "sequence_loss" # function from Tensorflow's API, except that here we are using a list of 2D tensors, instead of a 3D tensor. See: # https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/seq2seq/python/ops/loss.py#L30 # """ # logits and predictions losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label, logits=logit) for \logit, label in zip(logits, y_as_list)] total_loss = tf.reduce_mean(losses) train_step = tf.train.AdagradOptimizer(learning_rate).minimize(total_loss)tf.variable_scope(‘softmax’)和tf.variable_scope(‘rnn_cell’, reuse=True)中,各有兩個W,b的tf.Variable,因為在不同的variable_scope,即便用相同的名字,也是不同的對象。
```python
# Print all the variables
all_vars = [node.name for node in tf.global_variables()]
for var in all_vars:
    print(var)
# rnn_cell/W:0
# rnn_cell/b:0
# softmax/W:0
# softmax/b:0
# rnn_cell/W/Adagrad:0   <- accumulators created by the Adagrad optimizer
# rnn_cell/b/Adagrad:0
# softmax/W/Adagrad:0
# softmax/b/Adagrad:0

# Print all the nodes (operations and their output tensors) in the graph
all_node_names = [node for node in tf.get_default_graph().as_graph_def().node]
# or: tf.get_default_graph().get_operations()
all_node_values = [node.values() for node in tf.get_default_graph().get_operations()]
for i in range(0, len(all_node_values), 50):
    print("output and operation %d:" % i)
    print(all_node_values[i])
    print('---------------------------')
    print(all_node_names[i])
    print('\n')
    print('\n')
for i in range(len(all_node_values)):
    print('%d: %s' % (i, all_node_values[i]))
```

Tensor naming convention
- `add_7:0`: output tensor number 0 of the operation `add_7`, i.e. the name assigned when the name `add` has already been taken seven times.
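A small example of this naming rule in the same TF1-style graph API used above (variable names here are illustrative):

```python
g = tf.Graph()
with g.as_default():
    a = tf.constant(1.0)
    b = tf.constant(2.0)
    c = tf.add(a, b, name='add')  # first op named "add"  -> output tensor "add:0"
    d = tf.add(a, c, name='add')  # name already taken    -> op "add_1", tensor "add_1:0"
    print(c.name, d.name)         # -> add:0 add_1:0
```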
""" Train the network """def train_network(num_epochs, num_steps, state_size=4, verbose=True):with tf.Session() as sess:sess.run(tf.global_variables_initializer())training_losses = []for idx, epoch in enumerate(gen_epochs(num_epochs, num_steps)):training_loss = 0training_state = np.zeros((batch_size, state_size))if verbose:print("\nEPOCH", idx)for step, (X, Y) in enumerate(epoch):tr_losses, training_loss_, training_state, _ = \sess.run([losses,total_loss,final_state,train_step],feed_dict={x:X, y:Y, init_state:training_state})training_loss += training_loss_if step % 100 == 0 and step > 0:if verbose:print("Average loss at step", step,"for last 250 steps:", training_loss/100)training_losses.append(training_loss/100)training_loss = 0return training_losses training_losses = train_network(1,num_steps) plt.plot(training_losses)總結