numba, a NumPy computation accelerator: official tutorial and GPU CUDA configuration
Official site: http://numba.pydata.org/
Official tutorial: http://numba.pydata.org/numba-doc/latest/user/5minguide.html
Under Python 3.7 (though some other factor may have been involved) I could no longer find numba.autojit, so I went to the official site to see what had happened.
Examples
The following code speeds up well:
from numba import jit
import numpy as np

x = np.arange(100).reshape(10, 10)

@jit(nopython=True)  # Set "nopython" mode for best performance, equivalent to @njit
def go_fast(a):  # Function is compiled to machine code when called the first time
    trace = 0.0
    for i in range(a.shape[0]):    # Numba likes loops
        trace += np.tanh(a[i, i])  # Numba likes NumPy functions
    return a + trace               # Numba likes NumPy broadcasting

print(go_fast(x))

The following code does not speed up (the function cannot benefit from Numba's acceleration):
from numba import jit
import pandas as pd

x = {'a': [1, 2, 3], 'b': [20, 30, 40]}

@jit
def use_pandas(a):  # Function will not benefit from Numba jit
    df = pd.DataFrame.from_dict(a)  # Numba doesn't know about pd.DataFrame
    df += 1                         # Numba doesn't understand what this is
    return df.cov()                 # or this!

print(use_pandas(x))

Note that Numba accelerates functions through decorators: the first time a decorated function is called it is compiled to machine code, which takes some time; every later call runs the machine code directly, and that is where the speedup comes from.
So in practice the usual choice is @njit or @jit(nopython=True) (they are the same thing).
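As a quick check, here is a minimal sketch that reuses the go_fast example above and times the first, compiling call against a later call (the exact numbers will vary by machine):

import time
import numpy as np
from numba import njit

@njit  # identical to @jit(nopython=True)
def go_fast(a):
    trace = 0.0
    for i in range(a.shape[0]):
        trace += np.tanh(a[i, i])
    return a + trace

x = np.arange(100).reshape(10, 10)

t0 = time.perf_counter()
go_fast(x)  # first call: Numba compiles the function, so this is slow
print("first call :", time.perf_counter() - t0)

t0 = time.perf_counter()
go_fast(x)  # later calls reuse the compiled machine code
print("second call:", time.perf_counter() - t0)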
Other features
Numba has quite a few decorators, we’ve seen @jit, but there’s also:
- @njit - this is an alias for @jit(nopython=True) as it is so commonly used!
- @vectorize - produces NumPy ufuncs (with all the ufunc methods supported). Docs are here.
- @guvectorize - produces NumPy generalized ufuncs. Docs are here.
- @stencil - declare a function as a kernel for a stencil-like operation. Docs are here.
- @jitclass - for jit aware classes. Docs are here.
- @cfunc - declare a function for use as a native call back (to be called from C/C++ etc). Docs are here.
- @overload - register your own implementation of a function for use in nopython mode, e.g. @overload(scipy.special.j0). Docs are here.
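For instance, a small sketch of @vectorize (the float64 signature and the rel_diff function below are assumptions chosen for illustration): it compiles a scalar function into a NumPy ufunc that broadcasts over arrays.

import numpy as np
from numba import vectorize

@vectorize(["float64(float64, float64)"])  # compiled into a NumPy ufunc
def rel_diff(x, y):
    return 2.0 * (x - y) / (x + y)

a = np.arange(1.0, 6.0)
print(rel_diff(a, 1.0))  # element-wise, with normal NumPy broadcasting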
Extra options available in some decorators:

- parallel = True - enable the automatic parallelization of the function.
- fastmath = True - enable fast-math behaviour for the function.
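A hedged sketch of what these options look like in practice (the row_norms function and the array shapes are made up for illustration):

import numpy as np
from numba import njit, prange

@njit(parallel=True, fastmath=True)  # parallel loop execution + relaxed IEEE float semantics
def row_norms(a):
    out = np.empty(a.shape[0])
    for i in prange(a.shape[0]):  # prange marks this loop as parallelizable
        s = 0.0
        for j in range(a.shape[1]):
            s += a[i, j] * a[i, j]
        out[i] = np.sqrt(s)
    return out

print(row_norms(np.random.rand(1000, 100))[:5])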
ctypes/cffi/cython interoperability:

- cffi - The calling of CFFI functions is supported in nopython mode.
- ctypes - The calling of ctypes wrapped functions is supported in nopython mode.
- Cython exported functions are callable.
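As an example of the ctypes route, here is a minimal sketch that calls cos from the C math library inside a nopython function; the library lookup is an assumption that holds on Linux/macOS.

import ctypes
import ctypes.util
from numba import njit

# Load the C math library; find_library("m") is an assumption (Linux/macOS)
libm = ctypes.CDLL(ctypes.util.find_library("m"))
c_cos = libm.cos
c_cos.restype = ctypes.c_double
c_cos.argtypes = (ctypes.c_double,)

@njit
def cos_sum(n):
    s = 0.0
    for i in range(n):
        s += c_cos(float(i))  # ctypes-wrapped C function called from nopython code
    return s

print(cos_sum(10))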
GPU targets:

Numba can target Nvidia CUDA and (experimentally) AMD ROC GPUs. You can write a kernel in pure Python and have Numba handle the computation and data movement (or do this explicitly). Numba documentation on CUDA: http://numba.pydata.org/numba-doc/latest/cuda/index.html#numba-for-cuda-gpus
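A minimal sketch of a CUDA kernel written with numba.cuda; it assumes a working Nvidia GPU plus the CUDA toolkit, and the kernel and array names are made up for illustration.

import numpy as np
from numba import cuda

@cuda.jit
def add_one(a):
    i = cuda.grid(1)  # absolute index of this thread in the 1-D grid
    if i < a.size:    # guard threads that fall outside the array
        a[i] += 1.0

arr = np.zeros(1024, dtype=np.float64)
threads_per_block = 128
blocks = (arr.size + threads_per_block - 1) // threads_per_block
add_one[blocks, threads_per_block](arr)  # Numba copies arr to the device and back
print(arr[:5])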
That is quite a lot.
@autojit, however, is nowhere in the list. Has it been removed? As far as the current docs show, it has: autojit was deprecated, and a plain @jit now does the same lazy, type-inferring compilation.