MinkowskiPooling (Part 2)
MinkowskiPoolingTranspose
 class MinkowskiEngine.MinkowskiPoolingTranspose(kernel_size, stride, dilation=1, kernel_generator=None, dimension=None)
 A pool transpose layer for a sparse tensor.
 Unpools the features and divides them by the number of nonzero elements that contributed.
 __init__(kernel_size, stride, dilation=1, kernel_generator=None, dimension=None)
 A high-dimensional unpooling layer for sparse tensors.
 Args:
 kernel_size (int, optional): the size of the kernel in the output tensor. If not provided, region_type should be RegionType.CUSTOM and region_offset should be a 2D matrix with size N×D such that it lists all N offsets in D dimensions.
 stride (int, or list, optional): stride size of the convolution layer. If non-identity is used, the output coordinates will be at least stride × tensor_stride away. When a list is given, the length must be D; each element will be used for stride size for the specific axis.
 dilation (int, or list, optional): dilation size of the convolution kernel. When a list is given, the length must be D and each element is an axis-specific dilation. All elements must be > 0.
 kernel_generator (MinkowskiEngine.KernelGenerator, optional): defines a custom kernel shape.
 dimension (int): the spatial dimension of the space where all the inputs and the network are defined. For example, images are in a 2D space, and meshes and 3D shapes are in a 3D space.
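 Example (a minimal sketch, assuming the v0.4-style API documented on this page, with the batch index in the last coordinate column; the toy coordinates, feature sizes, and the companion MinkowskiSumPooling layer are illustrative assumptions, not from the original text):
>>> import torch
>>> import MinkowskiEngine as ME
>>> # Toy 2D input: 4 points of a single batch (batch index in the last column)
>>> coords = torch.IntTensor([[0, 0, 0], [0, 2, 0], [2, 0, 0], [2, 2, 0]])
>>> feats = torch.rand(4, 3)  # 4 points, 3 feature channels
>>> x = ME.SparseTensor(feats, coords=coords)
>>> # Downsample with stride-2 pooling, then unpool with the transpose layer
>>> pool = ME.MinkowskiSumPooling(kernel_size=2, stride=2, dimension=2)
>>> unpool = ME.MinkowskiPoolingTranspose(kernel_size=2, stride=2, dimension=2)
>>> y = pool(x)    # coarser sparse tensor with tensor_stride = 2
>>> z = unpool(y)  # unpooled back to tensor_stride = 1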
 cpu() → T
 Moves all model parameters and buffers to the CPU.
 Returns:
 Module: self
 cuda(device: Optional[Union[int, torch.device]] = None) → T
 Moves all model parameters and buffers to the GPU.
 This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized.
 Args:
 device (int, optional): if specified, all parameters will be
 copied to that device
 Returns:
 Module: self
 double() → T
 Casts all floating point parameters and buffers to double datatype.
 Returns:
 Module: self
 float() → T
 Casts all floating point parameters and buffers to float datatype.
 Returns:
 Module: self
 forward(input: SparseTensor.SparseTensor, coords: Union[torch.IntTensor, MinkowskiCoords.CoordsKey, SparseTensor.SparseTensor] = None)
 input (MinkowskiEngine.SparseTensor): Input sparse tensor to apply a convolution on.
 coords ((torch.IntTensor, MinkowskiEngine.CoordsKey, MinkowskiEngine.SparseTensor), optional): If provided, generate results on the provided coordinates. None by default.
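 Continuing the sketch above (a hedged illustration based only on the signature listed here, not code from the original page): the coords argument can be a sparse tensor whose coordinates the output should be generated on.
>>> z = unpool(y, x)  # output z is generated on x's coordinates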
 to(*args, **kwargs)
 Moves and/or casts the parameters and buffers.
 This can be called as
 to(device=None, dtype=None, non_blocking=False)
 to(dtype, non_blocking=False)
 to(tensor, non_blocking=False)
 to(memory_format=torch.channels_last)
 Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
 See below for examples.
 Args:
 device (torch.device): the desired device of the parameters
 and buffers in this module
 dtype (torch.dtype): the desired floating point type of
 the floating point parameters and buffers in this module
 tensor (torch.Tensor): Tensor whose dtype and device are the desired
 dtype and device for all parameters and buffers in this module
 memory_format (torch.memory_format): the desired memory
 format for 4D parameters and buffers in this module (keyword only argument)
 Returns:
 Module: self
 Example:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
type(dst_type: Union[torch.dtype, str]) → T
Casts all parameters and buffers to dst_type.
Arguments:
dst_type (type or string): the desired type
Returns:
Module: self
MinkowskiGlobalPooling
class MinkowskiEngine.MinkowskiGlobalPooling(average=True, mode=<GlobalPoolingMode.AUTO: 0>)
Pools all input features to one output.
Reduces the sparse coordinates to the origin, i.e., reduces each point cloud to the origin, returning batch_size number of points [[0, 0, ..., 0], [0, 0, ..., 1], [0, 0, ..., 2]], where the last element of the coordinates is the batch index.
 Args:
 average (bool): when True, returns the averaged output. Otherwise, returns the sum of all input features.
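 Example (a minimal sketch under the same v0.4-style assumptions as above; the two toy point clouds and feature sizes are illustrative, not from the original text):
>>> import torch
>>> import MinkowskiEngine as ME
>>> # Two toy 2D point clouds, batch indices 0 and 1 in the last column
>>> coords = torch.IntTensor([[0, 0, 0], [1, 1, 0], [0, 0, 1], [2, 2, 1]])
>>> feats = torch.rand(4, 5)  # 4 points total, 5 feature channels
>>> x = ME.SparseTensor(feats, coords=coords)
>>> # Average all features of each point cloud into one output row per batch
>>> glob_pool = ME.MinkowskiGlobalPooling(average=True)
>>> y = glob_pool(x)  # y.F has shape (2, 5): one feature vector per batch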
 cpu() → T
 Moves all model parameters and buffers to the CPU.
 Returns:
 Module: self
 cuda(device: Optional[Union[int, torch.device]] = None) → T
 Moves all model parameters and buffers to the GPU.
 This also makes associated parameters and buffers different objects. So it should be called before constructing the optimizer if the module will live on GPU while being optimized.
 Args:
 device (int, optional): if specified, all parameters will be
 copied to that device
 Returns:
 Module: self
 double() → T
 Casts all floating point parameters and buffers to double datatype.
 Returns:
 Module: self
 float() → T
 Casts all floating point parameters and buffers to float datatype.
 Returns:
 Module: self
 forward(input)
 to(*args, **kwargs)
 Moves and/or casts the parameters and buffers.
 This can be called as
 to(device=None, dtype=None, non_blocking=False)
 to(dtype, non_blocking=False)
 to(tensor, non_blocking=False)
 to(memory_format=torch.channels_last)
 Its signature is similar to torch.Tensor.to(), but only accepts floating point desired dtypes. In addition, this method will only cast the floating point parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
 See below for examples.
 Args:
 device (torch.device): the desired device of the parameters
 and buffers in this module
 dtype (torch.dtype): the desired floating point type of
 the floating point parameters and buffers in this module
 tensor (torch.Tensor): Tensor whose dtype and device are the desired
 dtype and device for all parameters and buffers in this module
 memory_format (torch.memory_format): the desired memory
 format for 4D parameters and buffers in this module (keyword only argument)
 Returns:
 Module: self
 Example:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
type(dst_type: Union[torch.dtype, str]) → T
Casts all parameters and buffers to dst_type.
Arguments:
dst_type (type or string): the desired type
Returns:
Module: self