Medical Image Segmentation: Liver Segmentation (2D)
Table of Contents
- Liver segmentation datasets
- I. Preprocessing
- II. Data augmentation
- III. Training
- IV. Testing
- Summary
Liver segmentation datasets

There are three main open-source datasets for liver segmentation: LiTS17, Sliver07, and 3DIRCADb.

- LiTS17 contains 131 scans and can be used for both liver and tumor segmentation.
- Sliver07 contains 20 scans, but provides only liver labels; it has no tumor ground truth.
- 3DIRCADb contains 20 scans and can be used for both liver and tumor segmentation.
Note: the sections below form the main body of this post; the examples can be used for reference.
I. Preprocessing

All three liver segmentation datasets are 3D volumes, so the data must be sliced before a 2D network can be trained:

1. Apply intensity windowing to each 3D NIfTI volume, find the liver region and expand it outward, and resample the z-axis spacing to 1 mm.
2. Slice the 3D NIfTI volumes into 2D PNG images.
```python
# Clip gray values outside the window to the thresholds
# upper = 200, lower = -200
ct_array[ct_array > para.upper] = para.upper
ct_array[ct_array < para.lower] = para.lower

# For liver segmentation, merge all labels into a single foreground class
seg_array[seg_array > 0] = 1

# Find the first and last slices containing liver, then expand outward
z = np.any(seg_array, axis=(1, 2))
start_slice, end_slice = np.where(z)[0][[0, -1]]

# Downsample the CT data in the axial plane and resample so that
# every volume has a z-axis spacing of 1 mm
ct_array = ndimage.zoom(ct_array,
                        (ct.GetSpacing()[-1] / para.slice_thickness,
                         para.down_scale, para.down_scale),
                        order=3)
seg_array = ndimage.zoom(seg_array,
                         (ct.GetSpacing()[-1] / para.slice_thickness,
                          para.down_scale, para.down_scale),
                         order=0)

# Slice the volume into 2D images
for i in range(volume.shape[0]):
    slice = volume[i, :, :]
    slice_post = WL(slice, WC, WW)
    slice_post = slice_post.astype(np.uint8)
    # slice_post = cv.equalizeHist(slice_post)
    slices_in_order.append(slice_post)
```

(Figure: original CT on the left, CT after windowing on the right.)
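The slicing loop above calls `WL(slice, WC, WW)`, a window-level function that the post never defines. A minimal sketch of what it presumably does, where `WC` is the window center and `WW` the window width (values such as WC=40, WW=400 are typical for liver CT, but are my assumption here):

```python
import numpy as np

def WL(slice_2d, WC, WW):
    """Map a CT slice to [0, 255] using window center WC and window width WW."""
    low = WC - WW / 2.0
    high = WC + WW / 2.0
    # Clip Hounsfield units to the window, then rescale linearly to 8-bit range
    windowed = np.clip(slice_2d.astype(np.float32), low, high)
    return (windowed - low) / (high - low) * 255.0
```

Values below the window map to 0 and values above it to 255, which matches the subsequent cast to `np.uint8` in the slicing loop.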
II. Data augmentation

Apply identical augmentations to the original CT slices and their ground-truth masks.
```python
# Random rotation: applied with probability 0.5,
# up to 20 degrees to the left or right
p.rotate(probability=0.5, max_left_rotation=20, max_right_rotation=20)
# Random left-right flip: applied with probability 0.8
p.flip_left_right(probability=0.8)
# Random zoom: always applied; the sampled crop covers 0.8 of the original area
p.zoom_random(probability=1, percentage_area=0.8)
```
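The essential point in the pipeline above is that a CT slice and its mask must receive exactly the same random transform, otherwise the labels no longer align with the image. A minimal NumPy sketch of that idea, using flips and 90° rotations as hypothetical stand-ins for the library's transforms:

```python
import numpy as np

def augment_pair(image, mask, rng):
    """Apply the same random flip/rotation to a CT slice and its mask."""
    # Left-right flip with probability 0.5
    if rng.random() < 0.5:
        image = np.fliplr(image)
        mask = np.fliplr(mask)
    # Random multiple-of-90-degree rotation, shared by image and mask
    k = int(rng.integers(0, 4))
    image = np.rot90(image, k)
    mask = np.rot90(mask, k)
    return image, mask
```

Because both arrays go through the same operations, the foreground area of the mask is preserved and stays registered with the image.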
III. Training

1. Model definition
```python
import torch
import torch.nn as nn


class DoubleConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(DoubleConv, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, input):
        return self.conv(input)


class UNet(nn.Module):
    def __init__(self, in_ch, out_ch):
        super(UNet, self).__init__()
        self.conv1 = DoubleConv(in_ch, 64)
        self.pool1 = nn.MaxPool2d(2)
        self.conv2 = DoubleConv(64, 128)
        self.pool2 = nn.MaxPool2d(2)
        self.conv3 = DoubleConv(128, 256)
        self.pool3 = nn.MaxPool2d(2)
        self.conv4 = DoubleConv(256, 512)
        self.pool4 = nn.MaxPool2d(2)
        self.conv5 = DoubleConv(512, 1024)
        self.up6 = nn.ConvTranspose2d(1024, 512, 2, stride=2)
        self.conv6 = DoubleConv(1024, 512)
        self.up7 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.conv7 = DoubleConv(512, 256)
        self.up8 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.conv8 = DoubleConv(256, 128)
        self.up9 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.conv9 = DoubleConv(128, 64)
        self.conv10 = nn.Conv2d(64, out_ch, 1)

    def forward(self, x):
        c1 = self.conv1(x)
        p1 = self.pool1(c1)
        c2 = self.conv2(p1)
        p2 = self.pool2(c2)
        c3 = self.conv3(p2)
        p3 = self.pool3(c3)
        c4 = self.conv4(p3)
        p4 = self.pool4(c4)
        c5 = self.conv5(p4)
        up_6 = self.up6(c5)
        merge6 = torch.cat([up_6, c4], dim=1)
        c6 = self.conv6(merge6)
        up_7 = self.up7(c6)
        merge7 = torch.cat([up_7, c3], dim=1)
        c7 = self.conv7(merge7)
        up_8 = self.up8(c7)
        merge8 = torch.cat([up_8, c2], dim=1)
        c8 = self.conv8(merge8)
        up_9 = self.up9(c8)
        merge9 = torch.cat([up_9, c1], dim=1)
        c9 = self.conv9(merge9)
        c10 = self.conv10(c9)
        out = nn.Sigmoid()(c10)
        return out
```

2. Model training
```python
if __name__ == '__main__':
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    LEARNING_RATE = 1e-3
    LR_DECAY_STEP = 2
    LR_DECAY_FACTOR = 0.5
    WEIGHT_DECAY = 5e-4
    BATCH_SIZE = 1
    MAX_EPOCHS = 2

    # Single-channel CT input, two output classes (background / liver)
    model = UNet(in_ch=1, out_ch=2).to(device)
    criterion = DiceLoss().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE,
                                 weight_decay=WEIGHT_DECAY)
    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=LR_DECAY_STEP,
                                                   gamma=LR_DECAY_FACTOR)
```

IV. Testing
```python
# Load the test volumes and their ground-truth segmentations
for i in range(1, 10):
    idx_list.append(i)
    ct = sitk.ReadImage(data_path + 'volume-' + str(i) + '.nii',
                        sitk.sitkInt16)
    seg = sitk.ReadImage(data_path + 'segmentation-' + str(i) + '.nii',
                         sitk.sitkInt16)

# Run inference slice by slice
predictions_in_order = []
for slice in slices_in_order:
    slice = torch.from_numpy(slice).float() / 255.
    output = model(slice.unsqueeze(0).unsqueeze(0))
    prediction = sm(output)
    # torch.max over dim=1 returns the per-pixel argmax over the
    # class channels, i.e. the predicted class index for each pixel
    _, prediction = torch.max(prediction, dim=1)
    prediction = prediction.squeeze(0).cpu().detach().numpy().astype(np.uint8)
    predictions_in_order.append(prediction)
```

Visualization of the test results:
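Beyond visualization, the per-slice predictions collected above are usually stacked back into a volume and scored against the ground truth. A sketch of a volume-level Dice coefficient (the function name and signature are mine, not from the post):

```python
import numpy as np

def volume_dice(pred_slices, gt_volume):
    """Stack per-slice 2D predictions and compute volume-level Dice vs. ground truth."""
    pred_volume = np.stack(pred_slices, axis=0).astype(bool)
    gt = (gt_volume > 0)
    intersection = np.logical_and(pred_volume, gt).sum()
    denom = pred_volume.sum() + gt.sum()
    if denom == 0:
        # Both volumes empty: count as a perfect match
        return 1.0
    return 2.0 * intersection / denom
```

Evaluating in 3D rather than per slice matters because slices with little or no liver would otherwise dominate the average.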
Summary

This is my first post, written rather casually. Follow-ups will cover 3D liver segmentation, 2.5D liver segmentation, and liver tumor segmentation.