Attention Is All You Need (Transformer)
Why the Transformer matters:
1. It introduced self-attention, opening the era of non-sequential models.
2. It laid a solid foundation for the pre-trained models that followed.

Sequential models dominate (LSTM) <----- 2017 -----> a new form of attention (self-attention) removes the sequential dependency, allowing the model to be parallelized and improving efficiency.

The paper was proposed mainly to address machine translation, improving the machine translation metric (BLEU) by about two points.
Machine translation metric (BLEU):

Candidate: the the the the the the the
Reference 1: the cat is on the mat
Reference 2: there is a cat on the mat

In the candidate, a 1-gram is a single word ("the"), a 2-gram is two consecutive words, and so on; an n-gram is effectively a sliding window of size n over the sentence.

count(the) = 7
count_ref1(the) = min(7, 2) = 2
count_ref2(the) = min(7, 1) = 1
count_clip(the) = max(2, 1) = 2
p1 = count_clip(the) / count(the) = 2/7

The problem with using only 1-grams is that a word-by-word translation can already score highly, with no regard for the fluency of the sentence.
Solution: combine several n-gram precisions; BLEU uses n-grams up to n = 4.
Computed the same way, p2 = 0, p3 = 0, p4 = 0 for this candidate.
The four precisions are combined with uniform weights w1 = w2 = w3 = w4 = 0.25, as a weighted geometric mean (see the BLEU formula below).

The remaining issue is sentence length: the precision-based score favors short candidates.
Solution: add a brevity penalty for length. With c the candidate length and r the reference length:

BP = 1 if c > r, else e^(1 - r/c)

Then

BLEU = BP * exp( Σ_n w_n * log(p_n) )
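A minimal sketch of this computation on the example above, assuming simple whitespace tokenization and using the shortest reference length for the brevity penalty (the official BLEU implementation differs in details such as the effective reference length; for real evaluation use an established tool such as sacrebleu):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate, references, n):
    # count candidate n-grams, clip each count by its maximum count in any reference
    cand_counts = Counter(ngrams(candidate, n))
    if not cand_counts:
        return 0.0
    clipped = 0
    for gram, count in cand_counts.items():
        max_ref = max(Counter(ngrams(ref, n))[gram] for ref in references)
        clipped += min(count, max_ref)
    return clipped / sum(cand_counts.values())

candidate = "the the the the the the the".split()
references = ["the cat is on the mat".split(),
              "there is a cat on the mat".split()]

p = [clipped_precision(candidate, references, n) for n in range(1, 5)]
print(p)  # [2/7, 0.0, 0.0, 0.0], as computed above

# Brevity penalty: c = candidate length, r = reference length
# (simplification: use the shortest reference here)
c = len(candidate)
r = min(len(ref) for ref in references)
bp = 1.0 if c > r else math.exp(1 - r / c)

# Weighted geometric mean; any zero precision drives this toy score to 0.
bleu = bp * math.exp(sum(0.25 * math.log(pn) for pn in p)) if min(p) > 0 else 0.0
print(bleu)  # 0.0 for this degenerate candidate
```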
Structure of the paper:

一、Abstract
   1. Mainstream sequence models are based on convolutional or recurrent neural networks; the best-performing models also add an attention mechanism on top of an encoder-decoder framework.
   2. The paper proposes the Transformer, a new model based solely on attention, discarding the traditional recurrent/convolutional structure.
   3. On the WMT 2014 translation datasets, the model's BLEU score is about two points higher than that of existing models.
二、Introduction
   Discusses the drawbacks of sequential models, in particular the long-range dependency problem, and introduces the proposed model.
三、Background
   Discusses how traditional convolutional models struggle to learn long-distance dependencies, motivating the attention mechanism.
四、Architecture
   Describes the Transformer network structure and its internals (Scaled Dot-Product Attention, Multi-Head Attention, Positional Encoding).
Overall model structure:
The model follows an encoder-decoder structure: the encoder maps an input sequence of symbol representations x = (x1, ..., xn) to a sequence of continuous representations z = (z1, ..., zn), and the decoder then generates an output sequence y = (y1, ..., ym) from z. The overall structure corresponds to the encoder-decoder diagram (Figure 1) in the paper.
Code snippet for the overall model structure:

```python
import math
import copy

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable


class EncoderDecoder(nn.Module):
    """A standard Encoder-Decoder architecture. Base for this and many other models."""
    # super(EncoderDecoder, self).__init__() is equivalent to nn.Module.__init__()
    def __init__(self, encoder, decoder, src_embed, tgt_embed, generator):
        super(EncoderDecoder, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.src_embed = src_embed
        self.tgt_embed = tgt_embed
        self.generator = generator

    # forward runs the encoder via encode(), then the decoder via decode()
    def forward(self, src, tgt, src_mask, tgt_mask):
        "Take in and process masked src and target sequences."
        # src and tgt both have shape [batch_size, max_length]
        return self.decode(self.encode(src, src_mask), src_mask, tgt, tgt_mask)

    def encode(self, src, src_mask):
        return self.encoder(self.src_embed(src), src_mask)

    def decode(self, memory, src_mask, tgt, tgt_mask):
        return self.decoder(self.tgt_embed(tgt), memory, src_mask, tgt_mask)


class Generator(nn.Module):
    "Define standard linear + softmax generation step."
    def __init__(self, d_model, vocab):
        super(Generator, self).__init__()
        self.proj = nn.Linear(d_model, vocab)

    def forward(self, x):
        # nn.Linear applies a learned linear projection to vocabulary size
        return F.log_softmax(self.proj(x), dim=-1)
```

Encoder and Decoder
Encoder
1. The encoder is composed of a stack of N = 6 identical layers.
2. Each layer has two sub-layers:
   - a multi-head self-attention mechanism
   - a simple, position-wise fully connected feed-forward network
3. A residual connection is employed around each sub-layer, followed by layer normalization.
```python
""" Encoder code snippet """
def clones(module, N):
    "Produce N identical layers."
    # copy.deepcopy creates a deep copy of the module
    return nn.ModuleList([copy.deepcopy(module) for _ in range(N)])


# Instantiated as:
# Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N)
class Encoder(nn.Module):
    "Core encoder is a stack of N layers"
    def __init__(self, layer, N):
        super(Encoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, mask):
        "Pass the input (and mask) through each layer in turn."
        for layer in self.layers:
            x = layer(x, mask)
        return self.norm(x)


class LayerNorm(nn.Module):
    "Construct a layernorm module (See citation for details)."
    def __init__(self, features, eps=1e-6):
        super(LayerNorm, self).__init__()
        # features = layer.size = 512
        self.a_2 = nn.Parameter(torch.ones(features))
        self.b_2 = nn.Parameter(torch.zeros(features))
        self.eps = eps

    def forward(self, x):
        mean = x.mean(-1, keepdim=True)
        std = x.std(-1, keepdim=True)
        return self.a_2 * (x - mean) / (std + self.eps) + self.b_2


class SublayerConnection(nn.Module):
    """
    A residual connection followed by a layer norm.
    Note for code simplicity the norm is first as opposed to last.
    """
    def __init__(self, size, dropout):
        super(SublayerConnection, self).__init__()
        self.norm = LayerNorm(size)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, sublayer):
        "Apply residual connection to any sublayer with the same size."
        return x + self.dropout(sublayer(self.norm(x)))


# Instantiated as:
# EncoderLayer(d_model, c(attn), c(ff), dropout)
class EncoderLayer(nn.Module):
    "Encoder is made up of self-attn and feed forward (defined below)"
    def __init__(self, size, self_attn, feed_forward, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 2)
        self.size = size

    def forward(self, x, mask):
        "Follow Figure 1 (left) for connections."
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, mask))
        return self.sublayer[1](x, self.feed_forward)
```
Decoder
1. The decoder is also composed of a stack of N = 6 identical layers.
2. Each layer has three sub-layers:
   - a (masked) multi-head self-attention mechanism over the decoder input
   - multi-head attention over the output of the encoder stack (encoder-decoder attention)
   - a simple, position-wise fully connected feed-forward network
3. A residual connection is employed around each sub-layer, followed by layer normalization.
4. The self-attention sub-layer in the decoder stack is masked to prevent positions from attending to subsequent positions; combined with shifting the output embeddings by one position, this ensures that the prediction for position i can depend only on the known outputs at positions less than i.
| Stage | Computation |
| --- | --- |
| Input | input embedding -> positional encoding |
| Encoder | for i in range(6): self-attention -> add & layer normalization -> feed forward -> add & layer normalization |
| Decoder | for i in range(6): masked self-attention -> add & layer normalization -> encoder-decoder attention -> add & layer normalization -> feed forward -> add & layer normalization |
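The decoder code itself is not shown in this write-up, although `make_model` in the Full Model snippet below instantiates `Decoder` and `DecoderLayer`. The following is a sketch in the same style (following The Annotated Transformer, the source of the other snippets), assuming the `clones`, `LayerNorm` and `SublayerConnection` helpers from the encoder snippet above:

```python
""" Decoder code sketch (assumes clones, LayerNorm, SublayerConnection from above) """
import numpy as np
import torch
import torch.nn as nn


class Decoder(nn.Module):
    "Generic N layer decoder with masking."
    def __init__(self, layer, N):
        super(Decoder, self).__init__()
        self.layers = clones(layer, N)
        self.norm = LayerNorm(layer.size)

    def forward(self, x, memory, src_mask, tgt_mask):
        # memory is the encoder output
        for layer in self.layers:
            x = layer(x, memory, src_mask, tgt_mask)
        return self.norm(x)


class DecoderLayer(nn.Module):
    "Decoder layer: masked self-attn, encoder-decoder attn, and feed forward."
    def __init__(self, size, self_attn, src_attn, feed_forward, dropout):
        super(DecoderLayer, self).__init__()
        self.size = size
        self.self_attn = self_attn
        self.src_attn = src_attn
        self.feed_forward = feed_forward
        self.sublayer = clones(SublayerConnection(size, dropout), 3)

    def forward(self, x, memory, src_mask, tgt_mask):
        m = memory
        # 1. masked self-attention over the (shifted) target sequence
        x = self.sublayer[0](x, lambda x: self.self_attn(x, x, x, tgt_mask))
        # 2. encoder-decoder attention: queries from the decoder, keys/values from memory
        x = self.sublayer[1](x, lambda x: self.src_attn(x, m, m, src_mask))
        # 3. position-wise feed-forward network
        return self.sublayer[2](x, self.feed_forward)


def subsequent_mask(size):
    "Mask out subsequent positions so position i only attends to positions <= i."
    attn_shape = (1, size, size)
    mask = np.triu(np.ones(attn_shape), k=1).astype('uint8')
    return torch.from_numpy(mask) == 0
```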
Difference between Batch Normalization and Layer Normalization: the statistics are computed along different axes. Batch Normalization normalizes each feature across the batch (column-wise), while Layer Normalization normalizes each sample across its features (row-wise). Using the same 3 x 6 example matrix (values rounded):

|  | f1 | f2 | f3 | f4 | f5 | f6 | LN mean | LN std |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| sample 1 | 1 | 2 | 0 | 4 | 5 | 1 | 2 | 2 |
| sample 2 | 2 | 1 | 1 | 6 | 2 | 0 | 2 | 2 |
| sample 3 | 6 | 2 | 5 | 1 | 3 | 1 | 3 | 2 |
| BN mean | 3 | 2 | 3 | 4 | 3 | 1 |  |  |
| BN std | 3 | 0 | 3 | 3 | 2 | 1 |  |  |
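A small sketch of the same idea, using the example matrix from the table above (the exact statistics differ slightly from the rounded values shown there): BatchNorm statistics run over the batch dimension (one value per column), LayerNorm statistics run over the feature dimension (one value per row).

```python
import torch

x = torch.tensor([[1., 2., 0., 4., 5., 1.],
                  [2., 1., 1., 6., 2., 0.],
                  [6., 2., 5., 1., 3., 1.]])   # shape [batch=3, features=6]

bn_mean, bn_std = x.mean(dim=0), x.std(dim=0)  # per-column stats (BatchNorm style)
ln_mean, ln_std = x.mean(dim=1), x.std(dim=1)  # per-row stats (LayerNorm style)

print(bn_mean)  # per-feature means, e.g. tensor([3.0000, 1.6667, 2.0000, ...])
print(ln_mean)  # per-sample means, tensor([2.1667, 2.0000, 3.0000])
```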
Attention

Scaled Dot-Product Attention:

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
""" attention 代碼段 """def attention(query, key, value, mask=None, dropout=None):# shape:query=key=value---->[batch_size,8,max_length,64]d_k = query.size(-1)# k的緯度交換后為:[batch_size,8,64,max_length]# scores的緯度為:[batch_size,8,max_length,max_length]scores = torch.matmul(query, key.transpose(-2, -1)) \/ math.sqrt(d_k)#padding maskif mask is not None:scores = scores.masked_fill(mask == 0, -1e9)p_attn = F.softmax(scores, dim = -1)if dropout is not None:p_attn = dropout(p_attn)return torch.matmul(p_attn, value), p_attn?
Multi-Head Attention: Q, K, V are split into h heads so that the attention computations can be carried out in parallel.

Instead of a single attention function over the full d_model-dimensional Q, K, V, the model projects them h times down to d_k = d_model / h dimensions, applies attention to each head independently, and concatenates the results:

MultiHead(Q, K, V) = Concat(head_1, ..., head_h) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V)
""" multi-head Attention """class MultiHeadedAttention(nn.Module):def __init__(self, h, d_model, dropout=0.1):"Take in model size and number of heads."super(MultiHeadedAttention, self).__init__()assert d_model % h == 0# We assume d_v always equals d_kself.d_k = d_model // hself.h = hself.linears = clones(nn.Linear(d_model, d_model), 4)self.attn = Noneself.dropout = nn.Dropout(p=dropout)def forward(self, query, key, value, mask=None):# 緯度# shape:query=key=value--->:[batch_size,max_legnth,embedding_dim=512]if mask is not None:# Same mask applied to all h heads.mask = mask.unsqueeze(1)nbatches = query.size(0)#第一步:將q,k,v分別與Wq,Wk,Wv矩陣進行相乘#shape:Wq=Wk=Wv----->[512,512]#第二步:將獲得的Q、K、V在第三個緯度上進行切分#shape:[batch_size,max_length,8,64]#第三部:填充到第一個緯度#shape:[batch_size,8,max_length,64]query, key, value = \[l(x).view(nbatches, -1, self.h, self.d_k).transpose(1, 2)for l, x in zip(self.linears, (query, key, value))]#進入到attention之后緯度不變,shape:[batch_size,8,max_length,64]x, self.attn = attention(query, key, value, mask=mask, dropout=self.dropout)# 將緯度進行還原# 交換緯度:[batch_size,max_length,8,64]# 緯度還原:[batch_size,max_length,512]x = x.transpose(1, 2).contiguous() \.view(nbatches, -1, self.h * self.d_k)# 最后與WO大矩陣相乘 shape:[512,512]return self.linears[-1](x)?
Feed-Forward Network

FFN(x) = max(0, x·W1 + b1)·W2 + b2

The feed-forward sub-layer consists of two linear transformations with a ReLU activation in between (equivalently, two convolutions with kernel size 1). Here x is the output of the previous sub-layer (typically the self-attention output), and W1, W2, b1, b2 are learned parameters.
```python
""" feed-forward """
class PositionwiseFeedForward(nn.Module):
    "Implements FFN equation."
    def __init__(self, d_model, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        # weight shape [512, 2048]
        self.w_1 = nn.Linear(d_model, d_ff)
        # weight shape [2048, 512]
        self.w_2 = nn.Linear(d_ff, d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        return self.w_2(self.dropout(F.relu(self.w_1(x))))
```
""" Embedding and softmax """"class Embeddings(nn.Module):def __init__(self, d_model, vocab):super(Embeddings, self).__init__()self.lut = nn.Embedding(vocab, d_model)self.d_model = d_modeldef forward(self, x):return self.lut(x) * math.sqrt(self.d_model)?
Positional Encoding: injects information about the position of each token in the sequence.

PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
""" position encoding """ class PositionalEncoding(nn.Module):"Implement the PE function."def __init__(self, d_model, dropout, max_len=5000):super(PositionalEncoding, self).__init__()self.dropout = nn.Dropout(p=dropout)# Compute the positional encodings once in log space.pe = torch.zeros(max_len, d_model)position = torch.arange(0., max_len).unsqueeze(1)div_term = torch.exp(torch.arange(0., d_model, 2) *-(math.log(10000.0) / d_model))pe[:, 0::2] = torch.sin(position * div_term)pe[:, 1::2] = torch.cos(position * div_term)pe = pe.unsqueeze(0)self.register_buffer('pe', pe)def forward(self, x):x = x + Variable(self.pe[:, :x.size(1)], requires_grad=False)return self.dropout(x)?
""" Full Model """import copy def make_model(src_vocab, tgt_vocab, N=6, d_model=512, d_ff=2048, h=8, dropout=0.1):"Helper: Construct a model from hyperparameters."c = copy.deepcopyattn = MultiHeadedAttention(h, d_model)ff = PositionwiseFeedForward(d_model, d_ff, dropout)position = PositionalEncoding(d_model, dropout)model = EncoderDecoder(Encoder(EncoderLayer(d_model, c(attn), c(ff), dropout), N),Decoder(DecoderLayer(d_model, c(attn), c(attn), c(ff), dropout), N),nn.Sequential(Embeddings(d_model, src_vocab), c(position)),nn.Sequential(Embeddings(d_model, tgt_vocab), c(position)),Generator(d_model, tgt_vocab))# This was important from their code. # Initialize parameters with Glorot / fan_avg.for p in model.parameters():if p.dim() > 1:nn.init.xavier_uniform(p)return model# Small example model. tmp_model = make_model(10, 10, 2)五、Why self-attention
   Analyzes the time complexity of self-attention compared with CNNs and RNNs/LSTMs (see the table below).
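For reference, the per-layer comparison from Table 1 of the paper, where n is the sequence length, d the representation dimension, and k the convolution kernel size:

| Layer type | Complexity per layer | Sequential operations | Maximum path length |
| --- | --- | --- | --- |
| Self-Attention | O(n² · d) | O(1) | O(1) |
| Recurrent | O(n · d²) | O(n) | O(n) |
| Convolutional | O(k · n · d²) | O(1) | O(log_k(n)) |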
六、Training
   Describes the training corpora, the hardware, and the hyperparameter settings.
七、Results
   Compares the Transformer's experimental results with previous models.
八、Discussion
   Discusses applying the Transformer architecture to domains other than translation.
Takeaways:
1. When applying attention, positions that correspond to padding (pad value 0) can be masked out.
2. Residual connections and layer normalization can be added to a model to improve its performance.
3. Self-attention is a structure worth considering when designing new models.
Key points:
1. How Scaled Dot-Product Attention works.
2. How Multi-Head Attention is implemented.
3. The masking mechanism, Layer Normalization, and the difference between additive attention and dot-product attention (see the sketch below).
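A toy sketch contrasting the two scoring functions mentioned in point 3 (the projections W_q, W_k and vector v are illustrative names, not part of the snippets above): the Transformer uses the scaled dot product, which needs no extra parameters and reduces to a single matrix multiplication, while additive (Bahdanau-style) attention scores each query-key pair with a small feed-forward network.

```python
import math
import torch
import torch.nn as nn

d_k = 64
q = torch.randn(d_k)   # a single query vector
k = torch.randn(d_k)   # a single key vector

# Scaled dot-product attention score (what the Transformer uses):
dot_score = torch.dot(q, k) / math.sqrt(d_k)

# Additive attention score: learned projections plus a tanh nonlinearity;
# more parameters, and harder to batch as one large matmul.
W_q = nn.Linear(d_k, d_k, bias=False)
W_k = nn.Linear(d_k, d_k, bias=False)
v = nn.Linear(d_k, 1, bias=False)
additive_score = v(torch.tanh(W_q(q) + W_k(k)))
```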
九、Related Code

The code for this paper has been open-sourced: https://github.com/tensorflow/tensor2tensor