Mofan NLP - GPT: a unidirectional language model
Video link: https://mofanpy.com/tutorials/machine-learning/nlp/gpt/
Reason for studying:
Well then, let's begin.
The Generative Pre-Training (GPT) model
The benefit of ever-larger models is obvious: more nonlinear capacity lets a model handle more complex problems. But this brings another difficulty: such models are hard to train. Every time we train a huge model, we consume more compute and more time.
GPT's main goal is still to be what a good pre-trained model should be: train a model on unsupervised human language data, then finetune it, and it will generally perform well on other tasks as well. Because the downstream finetuning tasks vary wildly, this tutorial focuses on the GPT model itself: what it actually looks like and what properties it has. The finetuning that follows is considerably easier than the model itself.
Some people call it the Transformer's Decoder, but I don't think that is quite accurate. It is more like a combination of the Transformer's Decoder and Encoder: it uses the Decoder's Future Mask (look-ahead mask), but structurally it looks more like the Encoder.
This design is there to make GPT easy to train: it predicts later text from earlier text, which is why the Future Mask is used.
Without the Future Mask, unsupervised training on a large corpus would very likely let the model see token A's information while predicting A, a kind of leakage from the future. More concretely, in the Transformer's multi-head attention every head sees the entire sequence; if we predict later tokens from earlier ones without a Future Mask, the model can simply look at the tokens it is supposed to predict, and the training becomes meaningless. The Future Mask keeps the model from seeing that leaked information: an invisible hand covering its x-ray eyes.
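To make that "invisible hand" concrete, here is a minimal sketch (not the tutorial's code, just an illustration) of the usual way a Future Mask is applied: future positions receive a very large negative score before the softmax, so their attention weights collapse to essentially zero.

```python
import tensorflow as tf

step = 4
scores = tf.random.normal((1, 1, step, step))                   # toy attention logits: [batch, head, q_step, step]
future = 1 - tf.linalg.band_part(tf.ones((step, step)), -1, 0)  # 1s strictly above the diagonal = future positions
attn = tf.nn.softmax(scores + future * -1e9, axis=-1)           # future positions get ~zero attention weight
print(attn.numpy().round(2))                                    # each row only attends to itself and earlier tokens
```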
Another difference from the Transformer Decoder is that it does not consume any attention information provided by an Encoder. So GPT's decoder has fewer sublayers than the Transformer's: it is all self-attention, with no vanilla (encoder-decoder) attention. At first glance the final model does look a lot like one part of the Transformer, just with these two differences.
Paper notes on Attention Is All You Need: that write-up is concise and its content is fairly accurate.
Tasks: how to train the model
Of course there can be many more tasks; it depends on what your data supports. Training one model on multiple tasks together makes it generalize better across more tasks.
Here the model is trained on two tasks: 1. unsupervised prediction of the following text; 2. whether the second sentence actually follows the first.
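For reference, the unsupervised objective from the GPT paper (Radford et al., 2018) is the usual left-to-right language-modeling likelihood; in this tutorial's code the second task is simply added to it with a small weight (0.2, as the step function below shows):

$$
L_{\text{LM}}(\mathcal{U}) = \sum_i \log P\left(u_i \mid u_{i-k}, \dots, u_{i-1}; \Theta\right),
\qquad \text{loss} = \text{pred\_loss} + 0.2\,\text{nsp\_loss}
$$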
Correction: in this dataset, string1 and string2 are not actually consecutive sentences. The pairs come "along with human annotations indicating whether each pair captures a paraphrase/semantic equivalence relationship. Last published: March 3, 2005."
Result analysis
ELMo's forward LSTM has the same problem, and so do recommender systems and knowledge tracing; it is essentially a cold-start problem, and it is worth thinking about how to solve it.
Ordinary NLP practitioners can just watch this from the sidelines.
Data processing
utils.MRPCData():
- seqs[:, :-1] is the sentence content of the X input: [[string1, string2]]
- segs[:, :-1] is the segment information of the X input, marking whether each token belongs to the first or the second sentence. Since both sentences are fed to the model together in seqs, the model needs to know which sentence a token comes from.
- seqs[:, 1:] is the Y (label) for the unsupervised task: the earlier tokens predict the later ones.
- nsp_labels says whether the two input sentences are in a preceding/following (context) relationship.
Same as for BERT; a toy layout is sketched below.
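To make the four fields easier to picture, here is a toy layout (hypothetical token ids; the real utils.MRPCData vocabulary and special tokens may differ):

```python
import numpy as np

# Hypothetical ids: 1 = start token, 2 = <SEP>, 0 = <PAD>, 10+ = ordinary words
seqs = np.array([[1, 10, 11, 2, 20, 21, 22, 2, 0, 0]])   # sentence1 <SEP> sentence2 <SEP> <PAD> <PAD>
segs = np.array([[0,  0,  0, 0,  1,  1,  1, 1, 2, 2]])   # 0 = first sentence, 1 = second sentence, 2 = padding
nsp_labels = np.array([1])                                # 1 = the pair is labeled as related, 0 = not
```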
The GPT framework
For the model architecture we reuse the Encoder code from the Transformer tutorial, since it is generic.
Code
```python
class GPT(keras.Model):
    def __init__(self, model_dim, max_len, n_layer, n_head, n_vocab, lr,
                 max_seg=3, drop_rate=0.1, padding_idx=0):
        super().__init__()
        self.padding_idx = padding_idx  # pad_id = 0
        self.n_vocab = n_vocab          # len(self.v2i)
        self.max_len = max_len          # 72 - 1

        # I think task emb is not necessary for pretraining,
        # because the aim of all tasks is to train a universal sentence embedding.
        # The body encoder is the same across all tasks,
        # and a different output layer defines a different task, just like transfer learning.
        # Finetuning replaces the output layer and leaves the body encoder unchanged.
        # self.task_emb = keras.layers.Embedding(
        #     input_dim=n_task, output_dim=model_dim,  # [n_task, dim]
        #     embeddings_initializer=tf.initializers.RandomNormal(0., 0.01),
        # )

        self.word_emb = keras.layers.Embedding(
            input_dim=n_vocab, output_dim=model_dim,  # [n_vocab, dim]
            embeddings_initializer=tf.initializers.RandomNormal(0., 0.01),
        )  # word embedding
        self.segment_emb = keras.layers.Embedding(
            input_dim=max_seg, output_dim=model_dim,  # [max_seg, dim]
            embeddings_initializer=tf.initializers.RandomNormal(0., 0.01),
        )  # segment embedding; seg values: 0 = sentence 1, 1 = sentence 2, 2 = padding
        self.position_emb = self.add_weight(
            name="pos", shape=[1, max_len, model_dim], dtype=tf.float32,  # [1, step, dim]; the first dim broadcasts when added
            initializer=keras.initializers.RandomNormal(0., 0.01))  # position embedding, learned here (the Transformer paper uses a fixed formula)
        self.encoder = Encoder(n_head, model_dim, drop_rate, n_layer)  # from the Transformer tutorial, reusable as-is
        self.task_mlm = keras.layers.Dense(n_vocab)  # task 1: predict the next word
        self.task_nsp = keras.layers.Dense(2)        # task 2: are the two sentences a context pair?

        self.cross_entropy = keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction="none")
        # reduction="auto" would average at the end; with losses_utils.ReductionV2.NONE no averaging is done
        self.opt = keras.optimizers.Adam(lr)

    def call(self, seqs, segs, training=False):  # `training` controls dropout; the mask matrix controls what attention can see
        embed = self.input_emb(seqs, segs)  # [n, step, dim]
        z = self.encoder(embed, training=training, mask=self.mask(seqs))  # [n, step, dim]
        mlm_logits = self.task_mlm(z)  # [n, step, n_vocab]
        nsp_logits = self.task_nsp(tf.reshape(z, [z.shape[0], -1]))  # [n, n_cls]
        return mlm_logits, nsp_logits

    def step(self, seqs, segs, seqs_, nsp_labels):
        ...

    def input_emb(self, seqs, segs):
        return self.word_emb(seqs) + self.segment_emb(segs) + self.position_emb  # [n, step, dim]

    def mask(self, seqs):
        ...

    @property
    def attentions(self):
        attentions = {
            "encoder": [l.mh.attention.numpy() for l in self.encoder.ls],
        }
        return attentions


m = GPT(
    model_dim=MODEL_DIM, max_len=d.max_len - 1, n_layer=N_LAYER, n_head=4, n_vocab=d.num_word,
    lr=LEARNING_RATE, max_seg=d.num_seg, drop_rate=0.2, padding_idx=d.pad_id)
```

See the comments; most of this was already covered in the BERT post. BERT overrides GPT's step and mask functions, so let's see below how GPT's versions work:
- step function
`tf.math.not_equal` performs a broadcast with the arguments and then an element-wise inequality comparison, returning a Tensor of boolean values.
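A quick standalone check of what it returns (toy ids, with 0 as the padding id):

```python
import tensorflow as tf

seqs_ = tf.constant([[5, 7, 3, 0, 0]])
print(tf.math.not_equal(seqs_, 0))
# tf.Tensor([[ True  True  True False False]], shape=(1, 5), dtype=bool)
```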
```python
def step(self, seqs, segs, seqs_, nsp_labels):
    with tf.GradientTape() as tape:
        mlm_logits, nsp_logits = self.call(seqs, segs, training=True)
        pad_mask = tf.math.not_equal(seqs_, self.padding_idx)  # True at non-padding positions
        # mlm_logits: [n, step, n_vocab]
        pred_loss = tf.reduce_mean(tf.boolean_mask(self.cross_entropy(seqs_, mlm_logits), pad_mask))  # cross entropy over all non-padding positions
        # nsp_logits: [n, n_cls]
        nsp_loss = tf.reduce_mean(self.cross_entropy(nsp_labels, nsp_logits))
        loss = pred_loss + 0.2 * nsp_loss
    grads = tape.gradient(loss, self.trainable_variables)
    self.opt.apply_gradients(zip(grads, self.trainable_variables))
    return loss, mlm_logits
```

- mask function
tf.linalg.band_part
```
tf.linalg.band_part(tf.ones((5, 5)), -1, 0)
Out[14]:
<tf.Tensor: shape=(5, 5), dtype=float32, numpy=
array([[1., 0., 0., 0., 0.],
       [1., 1., 0., 0., 0.],
       [1., 1., 1., 0., 0.],
       [1., 1., 1., 1., 0.],
       [1., 1., 1., 1., 1.]], dtype=float32)>
```

The Transformer (Part 1) post has an example of how this masking works.
```python
def mask(self, seqs):
    """
     abcd--
    a011111
    b001111
    c000111
    d000011
    -000011
    -000011

    force head not to see afterward. eg. (masked positions are later multiplied by -inf)
    a is a embedding for a---
    b is a embedding for ab--
    c is a embedding for abc-
    later, b embedding will + b another embedding from previous residual input to predict c
    """
    mask = 1 - tf.linalg.band_part(tf.ones((self.max_len, self.max_len)), -1, 0)
    pad = tf.math.equal(seqs, self.padding_idx)
    # e.g. 3 sentences, step = 5:
    # pad [3,5] -> [3,1,1,5] | x: 1 (broadcast) | y: [1,1,5,5] | result: [3,1,5,5]
    mask = tf.where(pad[:, tf.newaxis, tf.newaxis, :], 1, mask[tf.newaxis, tf.newaxis, :, :])
    return mask  # (step, step)
```

The comment on the return value is not quite right: the mask's shape should be (batch, 1, step, step), while the attention weights have shape [batch, num_heads, q_step, step].
Here q_step = step, because the self-attention matrix is square (the query and key lengths are the same).
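Here is a small sketch of that broadcasting, assuming (as in the Transformer tutorial's attention) that masked positions are pushed to a large negative value before the softmax; the ids and shapes below are only for illustration:

```python
import tensorflow as tf

seqs = tf.constant([[4, 5, 6, 0, 0]])                         # one sequence, last two tokens are padding
look_ahead = 1 - tf.linalg.band_part(tf.ones((5, 5)), -1, 0)  # future mask, [step, step]
pad = tf.math.equal(seqs, 0)
mask = tf.where(pad[:, tf.newaxis, tf.newaxis, :],            # padded columns are always masked
                tf.constant(1.0),
                look_ahead[tf.newaxis, tf.newaxis, :, :])
print(mask.shape)                                             # (1, 1, 5, 5): broadcasts over the head axis

logits = tf.random.normal((1, 4, 5, 5))                       # [batch, num_heads, q_step, step]
attn = tf.nn.softmax(logits + mask * -1e9, axis=-1)           # masked positions end up with ~zero weight
print(attn.shape)                                             # (1, 4, 5, 5)
```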
- train function
GPT's labels are easy to build, just like in knowledge tracing (use the previous item to predict the next one).
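Concretely, the language-model labels are just the input shifted by one position, so each token is trained to predict the token after it (ids below are hypothetical):

```python
import numpy as np

seqs = np.array([[1, 10, 11, 12, 2]])   # e.g. start token, w1, w2, w3, <SEP> (hypothetical ids)
x, y = seqs[:, :-1], seqs[:, 1:]
print(x)  # [[ 1 10 11 12]] -> model input
print(y)  # [[10 11 12  2]] -> target at each position is the next token
```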
```python
def train(model, data, step=10000, name="gpt"):
    t0 = time.time()
    for t in range(step):
        seqs, segs, xlen, nsp_labels = data.sample(16)
        loss, pred = model.step(seqs[:, :-1], segs[:, :-1], seqs[:, 1:], nsp_labels)
        if t % 100 == 0:
            pred = pred[0].numpy().argmax(axis=1)
            t1 = time.time()
            print(
                "\n\nstep: ", t,
                "| time: %.2f" % (t1 - t0),
                "| loss: %.3f" % loss.numpy(),
                "\n| tgt: ", " ".join([data.i2v[i] for i in seqs[0, 1:][:xlen[0].sum() + 2]]),  # trim to the real length again; should be +2 to reach the closing <SEP>
                "\n| prd: ", " ".join([data.i2v[i] for i in pred[:xlen[0].sum() + 2]]),
            )
            t0 = t1
    os.makedirs("./visual/models/%s" % name, exist_ok=True)
    model.save_weights("./visual/models/%s/model.ckpt" % name)
```

Run results
```
num word: 12880  max_len: 72

step:  100 | time: 13.26 | loss: 7.495
| tgt:  the unions also staged a five-day strike in march that forced all but one of yale 's dining halls to close . <SEP> the unions also staged a five-day strike in march ; strikes have preceded eight of the last <NUM> contracts . <SEP>
| prd:  the . the the the the the . . the . . the the the the . . the the the . . the the the . . . the . the the . . . . . the . the . . the

step:  4900 | time: 13.55 | loss: 1.047
| tgt:  they were held under section <NUM> of the terrorism act <NUM> on suspicion of involvement in the commission , preparation or instigation of acts of terrorism . <SEP> badat was arrested under section <NUM> of the terrorism act a€? on suspicion of involvement in the commission , preparation or instigation of acts of terrorism , a€? scotland yard confirmed . <SEP>
| prd:  the were not today section <NUM> of the terrorism act <NUM> for suspicion of terrorism in the commission , preparation or instigation of terrorism of terrorism 's <SEP> badat was arrested under section <NUM> of the terrorism act a€? on suspicion of acts in the commission , preparation or instigation of acts of terrorism , a€? scotland yard confirmed . <SEP>

step:  5000 | time: 13.63 | loss: 0.937
| tgt:  michael mitchell , the chief public defender in baton rouge who is representing lee , did not answer his phone wednesday afternoon . <SEP> michael mitchell , the chief public defender in baton rouge who is representing lee , was not available for comment . <SEP>
| prd:  the mitchell , the chief justice defender in baton rouge who is representing lee , did not attempt his lawyer the afternoon . <SEP> michael mitchell , the chief justice defender in baton rouge who is representing lee , was not available for comment . <SEP>

step:  9800 | time: 13.45 | loss: 0.211
| tgt:  in <NUM> , president bush named kathy gregg to the student loan marketing association board of directors . <SEP> in <NUM> , president bush named her to the student loan marketing association , the largest u.s. lender for students . <SEP>
| prd:  the the , for bush named kathy gregg to the student loan marketing association board of directors . <SEP> in <NUM> , president bush named her to the student loan marketing association , the largest president lender for students . <SEP>

step:  9900 | time: 13.28 | loss: 0.210
| tgt:  the product also features an updated release of the apache web server , as well as apache tomcat and apache axis . <SEP> panther server also includes an updated release of apache , along with apache tomcat and apache axis for creating powerful web services . <SEP>
| prd:  <quote> first also features an updated release of the apache web server , as well as apache tomcat and apache axis . <SEP> panther server also includes an updated release of apache , along with apache tomcat and apache axis for creating powerful web services . <SEP>

total time: 22 min 28 second
```

Summary