Point Cloud Deep Learning: Reproducing the DCP Point Cloud Registration Network
Reproducing the DCP Point Cloud Registration Network

- Preface
- 1. Results
- 1.1 Visualization in open3d
- 2. Reproducing the Source Code
- 2.1 Reference Link
- 2.2 Reproduction Steps
- 2.3 Problems Encountered
- 3. Testing the Model on a Single Sample and Visualizing with open3d
- 3.1 Single-Sample Test Code
- 3.2 Remaining Issues
Preface
I have recently been studying deep learning on point clouds, focusing on registration networks such as PointNetLK, DCP, and DeepICP. My plan is to get the code running first and then study the theory in detail afterwards. This post documents how I reproduced the DCP network.
1. Results
1.1 Visualization in open3d
The figure below shows the trained model tested in open3d; the test samples come from ModelNet40.
2. Reproducing the Source Code
2.1 Reference Link
GitHub source code reference link
2.2 Reproduction Steps
1) Set up PyTorch, CUDA, and the rest of the environment
Any of the standard online tutorials will do; a quick sanity check is sketched below.
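A minimal snippet of my own (not part of the DCP repo) to confirm that PyTorch was built against CUDA and can see the GPU:

```python
import torch

# Print the installed PyTorch/CUDA versions and check GPU visibility.
print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # CUDA version PyTorch was built against
print(torch.cuda.is_available())   # should be True for GPU training
```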
2) Install the dependencies (scipy, h5py, tqdm, and tensorboardX):

```bash
conda install scipy
conda install h5py
conda install tqdm
conda install tensorboard
conda install tensorboardx
```
3) Train the model
Follow the training section of the repository README; a hedged example command is given below.
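A sketch of a DCP-v2 training run, assembled from the command-line flags that also appear in the test script in Section 3 (the exact invocation is my assumption; the README is the authoritative reference):

```bash
python main.py --exp_name=dcp_v2 --model=dcp --emb_nn=dgcnn --pointer=transformer --head=svd
```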
4) Test the model (see the example command below)
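A sketch of an evaluation run, assuming main.py accepts the same `--eval` and `--model_path` flags as the test script in Section 3 (flag values are my assumption; check the README):

```bash
python main.py --exp_name=dcp_v2 --model=dcp --emb_nn=dgcnn --pointer=transformer \
    --head=svd --eval --model_path=pretrained/dcp_v2.t7
```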
2.3 Problems Encountered
1) Missing nose module

```bash
conda install nose
```

2) from_dcm error
Cause: newer SciPy releases renamed `Rotation.from_dcm` to `Rotation.from_matrix` (the old name was deprecated in SciPy 1.4 and removed in 1.6), so code calling `from_dcm` fails.
Fix: replace `from_dcm` with `from_matrix`.
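A minimal before/after sketch of the fix:

```python
import numpy as np
from scipy.spatial.transform import Rotation

R_ab = np.eye(3)  # any 3x3 rotation matrix

# Old API (removed in SciPy 1.6):
# rotation_ab = Rotation.from_dcm(R_ab)

# New API (available since SciPy 1.4):
rotation_ab = Rotation.from_matrix(R_ab)
```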
3. Testing the Model on a Single Sample and Visualizing with open3d
3.1 Single-Sample Test Code
The script below loads a pretrained DCP model, applies a random rigid transform to one ModelNet40 sample, runs the network on the pair, and renders the source (red), target (green), and predicted-aligned (blue) clouds in open3d.
```python
import os
import time
import glob
import argparse

import numpy as np
import torch
import h5py
import open3d as o3d
from scipy.spatial.transform import Rotation

from model import DCP
from util import transform_point_cloud, npmat2euler
from data import ModelNet40


def transform_input(pointcloud):
    """random rotation and translation of the input
    pointcloud: N*3
    """
    anglex = np.random.uniform() * np.pi / 4
    angley = np.random.uniform() * np.pi / 4
    anglez = np.random.uniform() * np.pi / 4
    # anglex = 0.04
    # angley = 0.04
    # anglez = 0.04
    print('angle: ', anglex, angley, anglez)
    cosx = np.cos(anglex)
    cosy = np.cos(angley)
    cosz = np.cos(anglez)
    sinx = np.sin(anglex)
    siny = np.sin(angley)
    sinz = np.sin(anglez)
    Rx = np.array([[1, 0, 0],
                   [0, cosx, -sinx],
                   [0, sinx, cosx]])
    Ry = np.array([[cosy, 0, siny],
                   [0, 1, 0],
                   [-siny, 0, cosy]])
    Rz = np.array([[cosz, -sinz, 0],
                   [sinz, cosz, 0],
                   [0, 0, 1]])
    R_ab = Rx.dot(Ry).dot(Rz)
    R_ba = R_ab.T
    translation_ab = np.array([np.random.uniform(-0.5, 0.5),
                               np.random.uniform(-0.5, 0.5),
                               np.random.uniform(-0.5, 0.5)])
    # translation_ab = np.array([0.01, 0.05, 0.05])
    print('trans: ', translation_ab)
    translation_ba = -R_ba.dot(translation_ab)

    pointcloud1 = pointcloud[:, :3].T
    rotation_ab = Rotation.from_euler('zyx', [anglez, angley, anglex])
    pointcloud2 = rotation_ab.apply(pointcloud1.T).T + np.expand_dims(translation_ab, axis=1)

    euler_ab = np.asarray([anglez, angley, anglex])
    euler_ba = -euler_ab[::-1]
    rotation_ba = Rotation.from_euler('zyx', euler_ba)

    pointcloud1 = np.random.permutation(pointcloud1.T)
    pointcloud2 = np.random.permutation(pointcloud2.T)
    return pointcloud1.astype('float32'), pointcloud2.astype('float32'), \
        rotation_ab, translation_ab, rotation_ba, translation_ba


def run_one_pointcloud(src, target, net):
    if len(src.shape) == 2 and len(target.shape) == 2:  # (N, 3)
        print("src/target shape:", src.shape, target.shape)
        src = np.expand_dims(src[:, :3], axis=0)
        src = np.transpose(src, [0, 2, 1])  # (1, 3, 1024)
        target = np.expand_dims(target[:, :3], axis=0)
        target = np.transpose(target, [0, 2, 1])  # (1, 3, 1024)

    net.eval()
    src = torch.from_numpy(src).cuda()
    target = torch.from_numpy(target).cuda()
    rotation_ab_pred, translation_ab_pred, \
        rotation_ba_pred, translation_ba_pred = net(src, target)
    target_pred = transform_point_cloud(src, rotation_ab_pred, translation_ab_pred)
    src_pred = transform_point_cloud(target, rotation_ba_pred, translation_ba_pred)

    # put on cpu and turn into numpy
    src_pred = src_pred.detach().cpu().numpy()
    src_pred = np.transpose(src_pred[0], [1, 0])
    target_pred = target_pred.detach().cpu().numpy()
    target_pred = np.transpose(target_pred[0], [1, 0])
    rotation_ab_pred = rotation_ab_pred.detach().cpu().numpy()
    translation_ab_pred = translation_ab_pred.detach().cpu().numpy()
    rotation_ba_pred = rotation_ba_pred.detach().cpu().numpy()
    translation_ba_pred = translation_ba_pred.detach().cpu().numpy()

    return src_pred, target_pred, \
        rotation_ab_pred, translation_ab_pred, \
        rotation_ba_pred, translation_ba_pred


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Point Cloud Registration')
    parser.add_argument('--exp_name', type=str, default='', metavar='N',
                        help='Name of the experiment')
    parser.add_argument('--model', type=str, default='dcp', metavar='N',
                        choices=['dcp'],
                        help='Model to use, [dcp]')
    parser.add_argument('--emb_nn', type=str, default='dgcnn', metavar='N',
                        choices=['pointnet', 'dgcnn'],
                        help='Embedding nn to use, [pointnet, dgcnn]')
    parser.add_argument('--pointer', type=str, default='transformer', metavar='N',
                        choices=['identity', 'transformer'],
                        help='Attention-based pointer generator to use, [identity, transformer]')
    parser.add_argument('--head', type=str, default='svd', metavar='N',
                        choices=['mlp', 'svd'],
                        help='Head to use, [mlp, svd]')
    parser.add_argument('--emb_dims', type=int, default=512, metavar='N',
                        help='Dimension of embeddings')
    parser.add_argument('--n_blocks', type=int, default=1, metavar='N',
                        help='Num of blocks of encoder&decoder')
    parser.add_argument('--n_heads', type=int, default=4, metavar='N',
                        help='Num of heads in multihead attention')
    parser.add_argument('--ff_dims', type=int, default=1024, metavar='N',
                        help='Num of dimensions of fc in transformer')
    parser.add_argument('--dropout', type=float, default=0.0, metavar='N',
                        help='Dropout ratio in transformer')
    parser.add_argument('--batch_size', type=int, default=32, metavar='batch_size',
                        help='Size of batch')
    parser.add_argument('--test_batch_size', type=int, default=1, metavar='batch_size',
                        help='Size of batch')
    parser.add_argument('--epochs', type=int, default=250, metavar='N',
                        help='number of episode to train')
    parser.add_argument('--use_sgd', action='store_true', default=False,
                        help='Use SGD')
    parser.add_argument('--lr', type=float, default=0.001, metavar='LR',
                        help='learning rate (default: 0.001, 0.1 if using sgd)')
    parser.add_argument('--momentum', type=float, default=0.9, metavar='M',
                        help='SGD momentum (default: 0.9)')
    parser.add_argument('--no_cuda', action='store_true', default=False,
                        help='enables CUDA training')
    parser.add_argument('--seed', type=int, default=1234, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--eval', action='store_true', default=False,
                        help='evaluate the model')
    parser.add_argument('--cycle', type=bool, default=False, metavar='N',
                        help='Whether to use cycle consistency')
    parser.add_argument('--model_path', type=str, default='pretrained/dcp_v2.t7', metavar='N',
                        help='Pretrained model path')
    args = parser.parse_args()

    torch.backends.cudnn.deterministic = True
    torch.manual_seed(args.seed)
    torch.cuda.manual_seed_all(args.seed)

    # net prepared
    net = DCP(args).cuda()
    net.load_state_dict(torch.load(args.model_path), strict=False)

    f = h5py.File('data/modelnet40_ply_hdf5_2048/ply_data_train2.h5', 'r')
    data = f['data'][:].astype('float32')  # (2048, 2048, 3) <class 'numpy.ndarray'>
    f.close()

    # index = np.random.randint(data.shape[0])
    index = 0
    point1 = data[index, :, :]
    _, point2, _, _, _, _ = transform_input(point1)
    # src1 = o3d.io.read_point_cloud("/home/pride/3d_registration/dcp-master/0_modelnet_src.ply")
    # point1 = np.asarray(src1.points)
    # print(point1)
    # _, point2, _, _, _, _ = transform_input(point1)
    src, target = point1, point2

    # run
    src_pred, target_pred, r_ab, t_ab, r_ba, t_ba = run_one_pointcloud(src, target, net)
    print("############# src -> target :\n", r_ab, t_ab)
    print("############# src <- target :\n", r_ba, t_ba)

    # np -> open3d
    src_cloud = o3d.geometry.PointCloud()
    src_cloud.points = o3d.utility.Vector3dVector(point1)
    tgt_cloud = o3d.geometry.PointCloud()
    tgt_cloud.points = o3d.utility.Vector3dVector(point2)
    trans_cloud = o3d.geometry.PointCloud()
    trans_cloud.points = o3d.utility.Vector3dVector(src_pred)

    # view
    src_cloud.paint_uniform_color([1, 0, 0])
    tgt_cloud.paint_uniform_color([0, 1, 0])
    trans_cloud.paint_uniform_color([0, 0, 1])
    o3d.visualization.draw_geometries([src_cloud, tgt_cloud, trans_cloud], width=800)
```

3.2 Remaining Issues
1) I have only used the ModelNet40 dataset and have not tested on my own data.
2) The sample is currently read from an HDF5 file; later I plan to switch to loading point clouds with open3d, as sketched below.
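A minimal sketch of that change, assuming a local PLY file (the filename and the 1024-point subsampling are placeholder choices of mine):

```python
import numpy as np
import open3d as o3d

# Read a point cloud with open3d instead of h5py (placeholder path).
pcd = o3d.io.read_point_cloud("my_model.ply")
point1 = np.asarray(pcd.points).astype('float32')

# DCP expects a fixed-size input, so randomly subsample to 1024 points.
idx = np.random.choice(point1.shape[0], 1024, replace=False)
point1 = point1[idx, :]

# point1 can then replace the h5py-loaded sample in the script above:
# _, point2, _, _, _, _ = transform_input(point1)
```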
over!!!