Class Activation Mapping (CNN Visualization) Python Example
Class Activation Mapping
Paper: CVPR 2016, "Learning Deep Features for Discriminative Localization"
Code: https://github.com/acheketa/pytorch-CAM/blob/master/update.py
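The idea from the paper: for class c, the class activation map is M_c(x, y) = Σ_k w_k^c · f_k(x, y), where f_k(x, y) is the activation of channel k of the last convolutional layer at position (x, y) and w_k^c is the weight connecting that channel (after global average pooling) to the output unit of class c. The code below computes this weighted map for the top-1 predicted class and rescales it to an 8-bit heatmap.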
1. First, define and train the CNN. The important point is that the last convolutional layer of the network must have only one channel and must be directly followed by the fully connected layer (the final layer); you can refer to the Inception-v3 network structure on GitHub. Below are the last two layers of my own network.
This is the final part of my network: conv3 is the layer whose heatmap we want to visualize, and fcl1 is the final dense layer mapping to the number of classes.
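A minimal sketch of what such a final stage could look like (only the names conv3 and fcl1 come from my network; the earlier layers and all sizes here are placeholder assumptions):

import torch.nn as nn

class MyNet(nn.Module):
    def __init__(self, num_classes=2, feat_hw=16):
        super().__init__()
        # ... earlier feature-extraction layers go here (placeholders) ...
        self.conv3 = nn.Conv2d(32, 1, kernel_size=3, padding=1)  # last conv layer, single channel
        self.fcl1 = nn.Linear(feat_hw * feat_hw, num_classes)    # dense layer to the number of classes

    def forward(self, x):
        # ... earlier layers ...
        x = self.conv3(x)             # (bz, 1, h, w) -- the map visualized by CAM
        x = x.view(x.size(0), -1)     # flatten the single-channel spatial map
        return self.fcl1(x)           # (bz, num_classes)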
Assume the network has been trained, so that we have a trained model best_net.
2. CAM code
import cv2
import numpy as np
import torch
import torch.nn.functional as F

# generate the class activation map for the top-1 prediction
def returnCAM(feature_conv, weight_softmax, class_idx):
    # generate the class activation maps, upsampled to 256x256
    size_upsample = (256, 256)
    bz, nc, h, w = feature_conv.shape
    output_cam = []
    for idx in class_idx:
        # standard CAM: cam = weight_softmax[idx].dot(feature_conv.reshape((nc, h*w)))
        # here conv3 has a single channel, so the FC weights already give one weight per spatial position
        cam = weight_softmax[idx] * (feature_conv.reshape((nc, h * w)))
        cam = cam.reshape(h, w)
        cam = cam - np.min(cam)
        cam_img = cam / np.max(cam)
        cam_img = np.uint8(255 * cam_img)
        output_cam.append(cv2.resize(cam_img, size_upsample))
    return output_cam

# hook the feature extractor to grab the output of the last conv layer
features_blobs = []
def hook_feature(module, input, output):
    features_blobs.append(output.data.cpu().numpy())

# last conv layer (single channel), directly followed by the last fully connected layer
final_conv = 'conv3'
best_net._modules.get(final_conv).register_forward_hook(hook_feature)

# get the weight parameters
params = list(best_net.parameters())
# second-to-last parameter: the FC weight matrix, shape [classes, hidden nodes]
weight_softmax = np.squeeze(params[-2].data.cpu().numpy())

# define the class names
classes = {0: 'Pos', 1: 'Neg'}

# read the image
root = '/test.png'
img = []
img.append(cv2.resize(cv2.imread(root).astype(np.float32), (256, 256)))  # (256, 256) is the model input size
data = torch.from_numpy(np.array(img)).type(torch.FloatTensor).cuda()

logit = best_net(data.permute(0, 3, 1, 2))      # forward pass (NHWC -> NCHW)
h_x = F.softmax(logit, dim=1).data.squeeze()    # softmax over classes
probs, idx = h_x.sort(0, True)                  # class probabilities, sorted descending

# output: the predictions
for i in range(0, 2):
    line = '{:.3f} -> {}'.format(probs[i], classes[idx[i].item()])
    print(line)

# get the class activation map of the top-1 prediction
CAMs = returnCAM(features_blobs[0], weight_softmax, [idx[0].item()])

# render the CAM and save it
print('output CAM.jpg for the top1 prediction: %s' % classes[idx[0].item()])
img = cv2.imread(root)
height, width, _ = img.shape
CAM = cv2.resize(CAMs[0], (width, height))
heatmap = cv2.applyColorMap(CAM, cv2.COLORMAP_JET)
result = heatmap * 0.3 + img * 0.5
cv2.imwrite('cam.jpg', result)

This code is the core part.
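For reference, with the standard CAM layout used in the linked repository (a multi-channel final conv layer followed by global average pooling and a fully connected layer), weight_softmax[idx] holds one weight per channel, and the two lines inside the loop become a weighted sum over channels instead of an elementwise product; this is the commented-out .dot variant shown above:

        # standard CAM: weighted sum over the nc channels -> one (h, w) map
        cam = weight_softmax[idx].dot(feature_conv.reshape((nc, h * w)))
        cam = cam.reshape(h, w)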