Loss roundup: ASL (Asymmetric Loss), a detailed code walkthrough
1. The BCE formula
For background, you can skim this blog post first:
https://blog.csdn.net/qq_14845119/article/details/114121003
This is the classic BCELoss formula for multi-label classification:
$$L = -y L_{+} - (1-y) L_{-}$$
where $L_{+/-}$ are the logs of the predicted probabilities for the positive and negative cases:
$$\begin{aligned} L_{+} &= \log(\hat{y}) \\ L_{-} &= \log(1-\hat{y}) \\ \hat{y} &= \mathrm{sigmoid}(logit) \end{aligned}$$
In fact, because the label $y$ is a 0/1 matrix, it acts as a mask: it picks out the positive-example entries of $L_{+}$ and the negative-example entries of $L_{-}$.
Suppose:
$$y = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$
$$\hat{y} = \begin{bmatrix} 0.5 & 0.1 \\ 0.3 & 0.2 \end{bmatrix} \qquad L_{+} = \begin{bmatrix} -0.6931 & -2.3026 \\ -1.2040 & -1.6094 \end{bmatrix} \qquad L_{-} = \begin{bmatrix} -0.6931 & -0.1054 \\ -0.3567 & -0.2231 \end{bmatrix}$$
So the bottom-left entry of $L$ is the negative of the corresponding entry of $L_{+}$, while the top-left, top-right, and bottom-right entries are the negatives of the corresponding entries of $L_{-}$:
$$L = \begin{bmatrix} 0.6931 & 0.1054 \\ 1.2040 & 0.2231 \end{bmatrix}$$
Code check:
```python
import torch

x = torch.tensor([0.5, 0.1, 0.3, 0.2]).reshape(2, 2).float()
y = torch.tensor([0, 0, 1, 0]).reshape(2, 2).float()
torch.nn.functional.binary_cross_entropy(x, y, reduction='none')
# tensor([[0.6931, 0.1054],
#         [1.2040, 0.2231]])
```
(Do not underestimate this masking trick; it will come up again when we write the ASL code.)
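To make the mask role explicit, the same matrix can be reproduced by hand with the `x` and `y` defined above (just a quick sketch):
```python
L_pos = torch.log(x)       # L_+ = log(y_hat)
L_neg = torch.log(1 - x)   # L_- = log(1 - y_hat)
# y selects L_+ where the label is 1 and L_- where it is 0
L = -(y * L_pos + (1 - y) * L_neg)
print(L)
# tensor([[0.6931, 0.1054],
#         [1.2040, 0.2231]])
```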
2. The focal loss formula
The base formula is still the same:
$$L = -y L_{+} - (1-y) L_{-}$$
but with $L_{+}$ and $L_{-}$ defined as:
$$\begin{aligned} L_{+} &= (1-p)^{\gamma} \cdot \log(p) \\ L_{-} &= p^{\gamma} \cdot \log(1-p) \\ p &= \mathrm{sigmoid}(logit) \end{aligned}$$
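As a minimal sketch written straight from these formulas (not the implementation of any particular library; the α balancing factor from the original focal loss paper is omitted, matching the formulas above):
```python
import torch

def binary_focal_loss(logits, targets, gamma=2.0, eps=1e-8):
    # element-wise focal loss for 0/1 targets, directly from the formulas above
    p = torch.sigmoid(logits)
    loss_pos = (1 - p) ** gamma * torch.log(p.clamp(min=eps))    # L_+
    loss_neg = p ** gamma * torch.log((1 - p).clamp(min=eps))    # L_-
    return -(targets * loss_pos + (1 - targets) * loss_neg)
```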
3. The ASL formula
ASL is an improved version of focal loss:
$$\begin{aligned} L_{+} &= (1-p)^{\gamma_{+}} \cdot \log(p) \\ L_{-} &= p_m^{\gamma_{-}} \cdot \log(1-p_m) \\ p &= \mathrm{sigmoid}(logit) \\ p_m &= \max(p-m,\, 0) \end{aligned}$$
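Before looking at the official optimized code, here is a naive sketch written directly from these formulas (the function name is mine; the default values mirror the Paddle port at the end of this post):
```python
import torch

def asl_naive(logits, targets, gamma_pos=1.0, gamma_neg=4.0, m=0.05, eps=1e-5):
    # element-wise ASL, directly from the formulas above
    p = torch.sigmoid(logits)
    p_m = (p - m).clamp(min=0)                                           # p_m = max(p - m, 0)
    loss_pos = (1 - p) ** gamma_pos * torch.log(p.clamp(min=eps))        # L_+
    loss_neg = p_m ** gamma_neg * torch.log((1 - p_m).clamp(min=eps))    # L_-
    return -(targets * loss_pos + (1 - targets) * loss_neg).sum()
```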
Since $p_m$ appears only in $L_{-}$, while $p$ normally shows up in $L_{+}$ and $(1-p)$ normally shows up in $L_{-}$, we rework $p_m$ into the $(1-p_m)$ form.
First, a lemma that obviously holds ($x$ and $y$ can be functions or variables): negating the larger of the two gives the smaller of their negatives:
$$-\max(x, y) = \min(-x, -y)$$
Therefore:
$$\begin{aligned} p_m &= \max(p-m,\, 0) = -\min(m-p,\, 0) \\ -p_m &= \min(m-p,\, 0) \\ 1-p_m &= \min(m-p,\, 0) + 1 \\ 1-p_m &= \min(m-p+1,\, 1) \\ 1-p_m &= \min(m+1-p,\, 1) \\ 1-p_m &= \mathrm{clip}(m+1-p,\ \mathrm{max}{=}1) \end{aligned}$$
We will use this last line in a moment.
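A quick numeric check of that last line (a throwaway sketch; m = 0.05 is just the default clip value used later):
```python
import torch

p = torch.rand(1000)
m = 0.05
lhs = 1 - (p - m).clamp(min=0)      # 1 - p_m, from the definition
rhs = (1 - p + m).clamp(max=1)      # the clip form derived above
print(torch.allclose(lhs, rhs))     # True
```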
4. The ASL code (PyTorch)
Now let's look at the ASL loss code; the PyTorch implementation comes from:
https://github.com/Alibaba-MIIL/ASL/blob/main/src/loss_functions/losses.py
- `self.gamma_neg` is $\gamma_{-}$
- `self.gamma_pos` is $\gamma_{+}$
- `self.eps` is used inside the log calls to avoid numerical trouble (i.e. taking the log of 0)
Let's walk through the code piece by piece:
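For context, before the snippets below the forward pass first turns the logits into positive and negative probabilities (these two lines are copied from the Paddle port at the end of this post; the PyTorch original is equivalent):
```python
# Calculating Probabilities
self.xs_pos = torch.sigmoid(x)    # p
self.xs_neg = 1.0 - self.xs_pos   # 1 - p
```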
```python
# Asymmetric Clipping
if self.clip is not None and self.clip > 0:
    self.xs_neg.add_(self.clip).clamp_(max=1)  # add clip to self.xs_neg in place, then clamp at 1
```
These two lines compute:
$$1-p_m = \mathrm{clip}(m+1-p,\ \mathrm{max}{=}1)$$
The next two lines compute the basic CE part (highlighted with a red box in the original post):
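The original post shows these lines as a screenshot; mirroring the Paddle port at the end of this post, the basic CE step in the PyTorch version looks roughly like this:
```python
# Basic CE calculation
self.loss = self.targets * torch.log(self.xs_pos.clamp(min=self.eps))
self.loss.add_(self.anti_targets * torch.log(self.xs_neg.clamp(min=self.eps)))
```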
Note that `self.targets` and `self.anti_targets` both act as masks here, and the resulting `self.loss` matrix has the same shape as `self.targets`. If this is unclear, go back to the calculation in the BCE section.
The power factor in front of each term acts as a weight; in the code it is `self.asymmetric_w`, corresponding to the $(1-p)^{\gamma_{+}}$ and $p_m^{\gamma_{-}}$ factors in the formula.
`self.asymmetric_w` is computed like this, and this part is really neat:
```python
self.xs_pos = self.xs_pos * self.targets
self.xs_neg = self.xs_neg * self.anti_targets
self.asymmetric_w = torch.pow(1 - self.xs_pos - self.xs_neg,
                              self.gamma_pos * self.targets + self.gamma_neg * self.anti_targets)
```
A quick aside on `torch.pow`: given two tensors of the same shape, it raises each element of the first to the power of the corresponding element of the second. For example:
```python
>>> x = torch.tensor([1, 2, 3, 4])
>>> y = torch.tensor([2, 2, 3, 1])
>>> torch.pow(x, y)
tensor([ 1,  4, 27,  4])
```
To compute `self.asymmetric_w`, the base argument of `pow` just needs to hold $(1-p)$ or $p_m$ at each position, and the exponent argument needs to hold $\gamma_{+}$ or $\gamma_{-}$ at each position. Start with the easy part, the exponent:
```python
self.gamma_pos * self.targets + self.gamma_neg * self.anti_targets
```
which again relies on the masking by `self.targets` / `self.anti_targets`. The base argument is computed as:
```python
1 - self.xs_pos - self.xs_neg
```
At positions that compute $L_{+}$, `self.xs_neg == 0`, so the base there is `1 - self.xs_pos`, i.e. $(1-p)$.
At positions that compute $L_{-}$, `self.xs_pos == 0`, so the base there is `1 - self.xs_neg`, i.e. $(1-(1-p_m)) = p_m$.
So a single `torch.pow` call neatly computes all of `self.asymmetric_w`. NICE!
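A small sketch (made-up tensors) to convince yourself that the fused `torch.pow` matches computing the two branches separately:
```python
import torch

torch.manual_seed(0)
gamma_pos, gamma_neg, clip = 1.0, 4.0, 0.05
p = torch.rand(2, 3)                           # xs_pos
xs_neg = (1 - p + clip).clamp(max=1)           # 1 - p_m, as in the clipping step
targets = torch.randint(0, 2, (2, 3)).float()
anti_targets = 1 - targets

# fused version, as in the official code
w_fused = torch.pow(1 - p * targets - xs_neg * anti_targets,
                    gamma_pos * targets + gamma_neg * anti_targets)

# two-branch version, straight from the formulas
w_manual = targets * (1 - p) ** gamma_pos + anti_targets * (1 - xs_neg) ** gamma_neg

print(torch.allclose(w_fused, w_manual))       # True
```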
Finally, the two tensors are multiplied element-wise:
```python
self.loss *= self.asymmetric_w
```
5. ASL: a Paddle implementation
```python
import numpy as np
import paddle
import paddle.nn as nn
import paddle.nn.functional as F


class AsymmetricLossOptimizedWithLogit(nn.Layer):
    '''
    Notice - optimized version, minimizes memory allocation and gpu uploading,
    favors inplace operations
    '''

    def __init__(self, gamma_neg=4, gamma_pos=1, clip=0.05, eps=1e-5,
                 disable_paddle_grad_focal_loss=False):
        super(AsymmetricLossOptimizedWithLogit, self).__init__()

        self.gamma_neg = gamma_neg
        self.gamma_pos = gamma_pos
        self.clip = clip
        self.disable_paddle_grad_focal_loss = disable_paddle_grad_focal_loss
        self.eps = eps

        self.targets = self.anti_targets = self.xs_pos = self.xs_neg = \
            self.asymmetric_w = self.loss = None

    def forward(self, x, y, weights=None):
        """
        Parameters
        ----------
        x: input logits
        y: targets (multi-label binarized vector)
        """
        self.targets = y
        self.anti_targets = 1 - y

        # Calculating Probabilities
        self.xs_pos = F.sigmoid(x)
        self.xs_neg = 1.0 - self.xs_pos

        # Asymmetric Clipping
        if self.clip is not None and self.clip > 0:
            # self.xs_neg.add_(self.clip).clip_(max=1)
            self.xs_neg = (self.xs_neg + self.clip).clip_(max=1)

        # Basic CE calculation
        self.loss = self.targets * paddle.log(self.xs_pos.clip(min=self.eps))
        self.loss.add_(self.anti_targets * paddle.log(self.xs_neg.clip(min=self.eps)))

        # Asymmetric Focusing
        if self.gamma_neg > 0 or self.gamma_pos > 0:
            if self.disable_paddle_grad_focal_loss:
                paddle.set_grad_enabled(False)
            self.xs_pos = self.xs_pos * self.targets
            self.xs_neg = self.xs_neg * self.anti_targets
            self.asymmetric_w = paddle.pow(
                1 - self.xs_pos - self.xs_neg,
                (self.gamma_pos * self.targets +
                 self.gamma_neg * self.anti_targets).astype("float32"))
            if self.disable_paddle_grad_focal_loss:
                paddle.set_grad_enabled(True)
            self.loss *= self.asymmetric_w

        if weights is not None:
            self.loss *= weights

        _loss = -self.loss.sum()
        return _loss


if __name__ == "__main__":
    np.random.seed(11070109)
    x = np.random.randn(3, 3)
    x = paddle.to_tensor(x).cast("float32")
    y = (x > 0.5).cast("float32")
    loss = AsymmetricLossOptimizedWithLogit()
    out = loss(x, y)
    print(out)
```