Learning PyTorch3D: pytorch3d.ops
pytorch3d.ops is the package in PyTorch3D that provides operations on 3D data, i.e. common computer-graphics computations.
1.pytorch3d.ops.ball_query()
pytorch3d.ops.ball_query(p1: torch.Tensor, p2: torch.Tensor, lengths1: Optional[torch.Tensor] = None, lengths2: Optional[torch.Tensor] = None, K: int = 500, radius: float = 0.2, return_nn: bool = True)
Ball Query is an alternative to KNN. It can be used to find all points in p2 that are within a specified radius of the query point in p1 (with an upper limit of K neighbors).
The neighbors returned are not necessarily the nearest to the point in p1, just the first K points in p2 which are within the specified radius.
For each point in p1, it finds the points in p2 that lie within the given radius; these are not necessarily the K nearest neighbors.
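A minimal sketch of calling ball_query on random point clouds; the shapes, K, and radius below are arbitrary choices for illustration.

```python
import torch
from pytorch3d.ops import ball_query

N, P1, P2, D = 2, 128, 1024, 3            # batch, query points, source points, dim
p1 = torch.rand(N, P1, D)                 # query point clouds
p2 = torch.rand(N, P2, D)                 # point clouds to search

# Up to K=16 points of p2 within radius 0.2 of each point in p1.
dists, idx, nn = ball_query(p1, p2, K=16, radius=0.2, return_nn=True)
# dists: (N, P1, 16) squared distances (zero-padded),
# idx:   (N, P1, 16) indices into p2, padded with -1 where fewer than K
#        points fall inside the radius,
# nn:    (N, P1, 16, 3) the gathered neighbor coordinates.
```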
2.pytorch3d.ops.cubify()
pytorch3d.ops.cubify(voxels, thresh, device=None, align: str = 'topleft') → pytorch3d.structures.meshes.Meshes
Converts a voxel grid to a mesh by replacing each occupied voxel with a cube consisting of 12 faces and 8 vertices. Shared vertices are merged, and internal faces are removed.
voxels: A FloatTensor of shape (N, D, H, W) containing occupancy probabilities.
thresh: A scalar threshold. If a voxel occupancy is larger than thresh, the voxel is considered occupied.
Converts a voxel grid into a mesh.
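A small sketch of cubify on a random occupancy grid; the grid size and threshold are arbitrary.

```python
import torch
from pytorch3d.ops import cubify

N, D, H, W = 1, 16, 16, 16
voxels = torch.rand(N, D, H, W)           # occupancy probabilities in [0, 1]

meshes = cubify(voxels, thresh=0.5)       # voxels with occupancy > 0.5 become cubes
verts = meshes.verts_list()[0]            # (V, 3) vertices of the first mesh
faces = meshes.faces_list()[0]            # (F, 3) triangle faces
print(verts.shape, faces.shape)
```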
3.pytorch3d.ops.knn_gather
pytorch3d.ops.knn_gather(x: torch.Tensor, idx: torch.Tensor, lengths: Optional[torch.Tensor] = None)
A helper function for knn that allows indexing a tensor x with the indices idx returned by knn_points.
For example, if dists, idx = knn_points(p, x, lengths_p, lengths, K) where p is a tensor of shape (N, L, D) and x a tensor of shape (N, M, D), then one can compute the K nearest neighbors of p with p_nn = knn_gather(x, idx, lengths). It can also be applied to any tensor x of shape (N, M, U) where U != D.
Gathers the coordinates (or features) of the K nearest neighbors of p, given the indices returned by knn_points; see the combined example after the knn_points section below.
4.pytorch3d.ops.knn_points
pytorch3d.ops.knn_points(p1: torch.Tensor, p2: torch.Tensor, lengths1: Optional[torch.Tensor] = None, lengths2: Optional[torch.Tensor] = None, K: int = 1, version: int = -1, return_nn: bool = False, return_sorted: bool = True) → pytorch3d.ops.knn.KNN
Returns:
dists: Tensor of shape (N, P1, K) giving the squared distances to the nearest neighbors. This is padded with zeros both where a cloud in p2 has fewer than K points and where a cloud in p1 has fewer than P1 points.
idx: LongTensor of shape (N, P1, K) giving the indices of the K nearest neighbors from points in p1 to points in p2. Concretely, if p1_idx[n, i, k] = j then p2[n, j] is the k-th nearest neighbor to p1[n, i] in p2[n]. This is padded with zeros both where a cloud in p2 has fewer than K points and where a cloud in p1 has fewer than P1 points.
nn: Tensor of shape (N, P1, K, D) giving the K nearest neighbors in p2 for each point in p1. Concretely, p2_nn[n, i, k] gives the k-th nearest neighbor for p1[n, i]. Returned if return_nn is True. The nearest neighbors are collected using knn_gather, which is a helper function that allows indexing any tensor of shape (N, P2, U) with the indices p1_idx returned by knn_points; the output is a tensor of shape (N, P1, K, U).
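A combined sketch of knn_points and knn_gather on random clouds; the shapes and K are illustrative only.

```python
import torch
from pytorch3d.ops import knn_points, knn_gather

N, P1, P2, D = 2, 100, 500, 3
p1 = torch.rand(N, P1, D)                 # query points
p2 = torch.rand(N, P2, D)                 # points to search

# K nearest neighbors in p2 for every point in p1.
dists, idx, _ = knn_points(p1, p2, K=8)   # return_nn defaults to False
# dists: (N, P1, 8) squared distances, idx: (N, P1, 8) indices into p2.

# Gather per-neighbor values; here the gathered tensor is just the coordinates,
# but any tensor of shape (N, P2, U) can be indexed with the same idx.
p2_nn = knn_gather(p2, idx)               # (N, P1, 8, 3)
```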
5.pytorch3d.ops.corresponding_points_alignment
pytorch3d.ops.corresponding_points_alignment(X: Union[torch.Tensor, Pointclouds], Y: Union[torch.Tensor, Pointclouds], weights: Union[torch.Tensor, List[torch.Tensor], None] = None, estimate_scale: bool = False, allow_reflection: bool = False, eps: float = 1e-09) → pytorch3d.ops.points_alignment.SimilarityTransform
Finds a similarity transformation (rotation R, translation T and optionally scale s) between two given sets of corresponding d-dimensional points X and Y such that:
s[i] X[i] R[i] + T[i] = Y[i],
for all batch indexes i in the least squares sense.
The algorithm is also known as Umeyama [1].
Computes the similarity transform between two corresponding point sets X and Y: the rotation R, the translation T, and optionally a scale s (a per-batch scalar, not a matrix).
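A sketch that builds Y from X with a known rotation and translation and then recovers them; the ground-truth transform is randomly generated purely for illustration (random_rotations comes from pytorch3d.transforms).

```python
import torch
from pytorch3d.ops import corresponding_points_alignment
from pytorch3d.transforms import random_rotations

N, P, D = 4, 100, 3
X = torch.rand(N, P, D)
R_gt = random_rotations(N)                # (N, 3, 3) random ground-truth rotations
T_gt = torch.rand(N, D)                   # (N, 3) ground-truth translations
Y = X @ R_gt + T_gt[:, None, :]           # corresponding points: X[i] R[i] + T[i]

R, T, s = corresponding_points_alignment(X, Y, estimate_scale=False)
# R ≈ R_gt, T ≈ T_gt, and s ≈ 1 (s is a per-batch scalar, not a matrix).
```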
6. Laplacian matrix computation
pytorch3d.ops.cot_laplacian(verts: torch.Tensor, faces: torch.Tensor, eps: float = 1e-12) → Tuple[torch.Tensor, torch.Tensor]
Returns the Laplacian matrix with cotangent weights and the inverse of the face areas.
Returns a 2-element tuple containing:
L: Sparse FloatTensor of shape (V, V) for the Laplacian matrix. Here, L[i, j] = cot a_ij + cot b_ij iff (i, j) is an edge in the mesh.
inv_areas: the inverse of the face areas.
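A sketch of cot_laplacian using an icosphere (pytorch3d.utils.ico_sphere) as a stand-in mesh.

```python
from pytorch3d.ops import cot_laplacian
from pytorch3d.utils import ico_sphere

mesh = ico_sphere(level=2)                # a unit icosphere Meshes object
verts = mesh.verts_packed()               # (V, 3)
faces = mesh.faces_packed()               # (F, 3)

L, inv_areas = cot_laplacian(verts, faces)
# L: sparse (V, V) Laplacian with cotangent weights,
# inv_areas: the inverse face areas returned alongside it.
```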
pytorch3d.ops.laplacian(verts: torch.Tensor, edges: torch.Tensor) → torch.Tensor
Computes the laplacian matrix. The definition of the laplacian is:
L[i, j] = -1, if i == j
L[i, j] = 1 / deg(i), if (i, j) is an edge
L[i, j] = 0, otherwise
where deg(i) is the degree of the i-th vertex in the graph.
Returns: L – Sparse FloatTensor of shape (V, V).
pytorch3d.ops.norm_laplacian(verts: torch.Tensor, edges: torch.Tensor, eps: float = 1e-12) → torch.Tensor
norm_laplacian computes a variant of the laplacian matrix which weights each affinity with the normalized distance of the neighboring nodes. More concretely:
L[i, j] = 1 / w_ij, where w_ij = ||v_i - v_j|| if (v_i, v_j) are neighboring nodes.
Returns: L – Sparse FloatTensor of shape (V, V).
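A sketch of the uniform and distance-weighted Laplacians, again using an icosphere as a stand-in mesh.

```python
from pytorch3d.ops import laplacian, norm_laplacian
from pytorch3d.utils import ico_sphere

mesh = ico_sphere(level=2)
verts = mesh.verts_packed()               # (V, 3)
edges = mesh.edges_packed()               # (E, 2) unique edges of the mesh

L_uniform = laplacian(verts, edges)       # sparse (V, V), weights 1 / deg(i)
L_norm = norm_laplacian(verts, edges)     # sparse (V, V), weights 1 / ||v_i - v_j||
```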
7. The ICP algorithm
pytorch3d.ops.iterative_closest_point(X: Union[torch.Tensor, Pointclouds], Y: Union[torch.Tensor, Pointclouds], init_transform: Optional[pytorch3d.ops.points_alignment.SimilarityTransform] = None, max_iterations: int = 100, relative_rmse_thr: float = 1e-06, estimate_scale: bool = False, allow_reflection: bool = False, verbose: bool = False) → pytorch3d.ops.points_alignment.ICPSolution
Executes the iterative closest point (ICP) algorithm [1, 2] in order to find a similarity transformation (rotation R, translation T, and optionally scale s) between two given, differently-sized sets of d-dimensional points X and Y, such that the transformed X aligns with its nearest neighbors in Y in the least squares sense (the same relation as in corresponding_points_alignment above, but with correspondences re-estimated at each iteration).
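A sketch aligning two random, differently-sized clouds; with purely random data ICP will not recover a meaningful transform, so this only illustrates the call and the fields of the returned ICPSolution.

```python
import torch
from pytorch3d.ops import iterative_closest_point

N, P1, P2, D = 1, 200, 300, 3
X = torch.rand(N, P1, D)                  # source cloud
Y = torch.rand(N, P2, D)                  # target cloud of a different size

solution = iterative_closest_point(X, Y, max_iterations=50, estimate_scale=False)
R, T, s = solution.RTs                    # final similarity transform estimate
Xt = solution.Xt                          # X transformed by that estimate
print(solution.converged, solution.rmse)  # convergence flag and final RMSE
```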
Summary
This post covered the most commonly used operators in pytorch3d.ops: ball_query, cubify, knn_points/knn_gather, corresponding_points_alignment, the Laplacian variants (cot_laplacian, laplacian, norm_laplacian), and iterative_closest_point.