Which one is faster: Java heap or native memory?
11-29-2012 by Sergio Oliveira Jr.
One of the advantages of the Java language is that you do not need to deal with memory allocation and deallocation. Whenever you instantiate an object with the new keyword, the necessary memory is allocated on the JVM heap. The heap is then managed by the garbage collector, which reclaims the memory after the object goes out of scope. However, there is a backdoor to reach off-heap native memory from the JVM. In this article I am going to show how an object can be stored in memory as a sequence of bytes and how you can choose between storing these bytes in heap memory or in direct (i.e. native) memory. Then I will try to conclude which one is faster to access from the JVM: heap memory or direct memory.
Allocating and Deallocating with Unsafe
The sun.misc.Unsafe class allows you to allocate and deallocate native memory from Java as if you were calling malloc and free from C. The memory you allocate this way lives off the heap and is not managed by the garbage collector, so it becomes your responsibility to deallocate it after you are done with it. Here is my Direct utility class, which gains access to the Unsafe class:
```java
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class Direct implements Memory {

    private static Unsafe unsafe;
    private static boolean AVAILABLE = false;

    static {
        try {
            // Grab the singleton through reflection, since Unsafe.getUnsafe()
            // refuses callers outside the bootstrap class path.
            Field field = Unsafe.class.getDeclaredField("theUnsafe");
            field.setAccessible(true);
            unsafe = (Unsafe) field.get(null);
            AVAILABLE = true;
        } catch (Exception e) {
            // NOOP: throw exception later when allocating memory
        }
    }

    public static boolean isAvailable() {
        return AVAILABLE;
    }

    private static Direct INSTANCE = null;

    public static Memory getInstance() {
        if (INSTANCE == null) {
            INSTANCE = new Direct();
        }
        return INSTANCE;
    }

    private Direct() {
    }

    @Override
    public long alloc(long size) {
        if (!AVAILABLE) {
            throw new IllegalStateException("sun.misc.Unsafe is not accessible!");
        }
        return unsafe.allocateMemory(size);
    }

    @Override
    public void free(long address) {
        unsafe.freeMemory(address);
    }

    @Override
    public final long getLong(long address) {
        return unsafe.getLong(address);
    }

    @Override
    public final void putLong(long address, long value) {
        unsafe.putLong(address, value);
    }

    @Override
    public final int getInt(long address) {
        return unsafe.getInt(address);
    }

    @Override
    public final void putInt(long address, int value) {
        unsafe.putInt(address, value);
    }
}
```
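The Memory interface that Direct implements is not shown in the post. Reconstructing it from the methods overridden above, it presumably looks something like this (an inference, not the author's original source):

```java
// Reconstructed from the methods Direct overrides; the author's actual
// interface may differ (e.g. it may also declare byte/short accessors).
public interface Memory {

    long alloc(long size);      // allocate 'size' bytes, return the base address
    void free(long address);    // release a block previously returned by alloc

    long getLong(long address); // read 8 bytes at 'address'
    void putLong(long address, long value);

    int getInt(long address);   // read 4 bytes at 'address'
    void putInt(long address, int value);
}
```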
Placing an object in native memory
Let’s move the following Java object to native memory:
```java
public class SomeObject {

    private long someLong;
    private int someInt;

    public long getSomeLong() {
        return someLong;
    }

    public void setSomeLong(long someLong) {
        this.someLong = someLong;
    }

    public int getSomeInt() {
        return someInt;
    }

    public void setSomeInt(int someInt) {
        this.someInt = someInt;
    }
}
```
Note that all we are doing below is saving its properties in the Memory:
```java
public class SomeMemoryObject {

    private final static int someLong_OFFSET = 0;
    private final static int someInt_OFFSET = 8;
    private final static int SIZE = 8 + 4; // one long + one int

    private long address;
    private final Memory memory;

    public SomeMemoryObject(Memory memory) {
        this.memory = memory;
        this.address = memory.alloc(SIZE);
    }

    @Override
    public void finalize() {
        memory.free(address);
    }

    public final void setSomeLong(long someLong) {
        memory.putLong(address + someLong_OFFSET, someLong);
    }

    public final long getSomeLong() {
        return memory.getLong(address + someLong_OFFSET);
    }

    public final void setSomeInt(int someInt) {
        memory.putInt(address + someInt_OFFSET, someInt);
    }

    public final int getSomeInt() {
        return memory.getInt(address + someInt_OFFSET);
    }
}
```
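For clarity, here is a small usage sketch of SomeMemoryObject (the values and the harness class name are illustrative, not from the original post):

```java
// Usage sketch (names and values are illustrative): the fields live in
// native memory, but the Java-side API looks like a regular POJO.
public class SomeMemoryObjectUsage {

    public static void main(String[] args) {
        Memory memory = Direct.getInstance();

        SomeMemoryObject obj = new SomeMemoryObject(memory);
        obj.setSomeLong(123456789L);
        obj.setSomeInt(42);

        System.out.println(obj.getSomeLong()); // prints 123456789
        System.out.println(obj.getSomeInt());  // prints 42
        // The 12 bytes backing 'obj' are released by finalize(), via memory.free(address).
    }
}
```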
Now let’s benchmark read/write access for two arrays: one holding millions of SomeObjects and another holding millions of SomeMemoryObjects. The benchmark code can be seen here; a sketch of the kind of loop being timed is shown below, followed by the results:
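The linked benchmark is the authoritative version; the following is only a minimal sketch of the kind of measurement loop it might run, with assumed array sizes and timing details:

```java
// Minimal sketch of the kind of loop being timed (assumption: the linked
// benchmark also warms up the JIT and averages many passes).
public class ArrayAccessBenchmarkSketch {

    public static void main(String[] args) {
        final int n = 1_000_000;

        SomeObject[] heapObjects = new SomeObject[n];
        SomeMemoryObject[] nativeObjects = new SomeMemoryObject[n];
        for (int i = 0; i < n; i++) {
            heapObjects[i] = new SomeObject();
            nativeObjects[i] = new SomeMemoryObject(Direct.getInstance());
        }

        // Heap write pass:
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) heapObjects[i].setSomeLong(i);
        System.out.println("heap write ns/op:   " + (System.nanoTime() - start) / (double) n);

        // Native write pass:
        start = System.nanoTime();
        for (int i = 0; i < n; i++) nativeObjects[i].setSomeLong(i);
        System.out.println("native write ns/op: " + (System.nanoTime() - start) / (double) n);

        // Read passes are analogous: call getSomeLong() in a loop and accumulate
        // the result so the JIT cannot eliminate the work.
    }
}
```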
```
// with JIT:
Number of Objects:  1,000  1,000,000  10,000,000  60,000,000
Heap Avg Write:       107       2.30        2.51        2.58
Native Avg Write:     305       6.65        5.94        5.26
Heap Avg Read:         61       0.31        0.28        0.28
Native Avg Read:      309       3.50        2.96        2.16

// without JIT: (-Xint)
Number of Objects:  1,000  1,000,000  10,000,000  60,000,000
Heap Avg Write:       104        107         105         102
Native Avg Write:     292        293         300         297
Heap Avg Read:         59         63          60          58
Native Avg Read:      297        298         302         299
```

Conclusion: Crossing the JVM barrier to reach native memory is approximately 10 times slower for reads and 2 times slower for writes. But notice that each SomeMemoryObject allocates its own native memory block, so the reads and writes are not contiguous: each direct-memory object reads from and writes to its own allocated block, which can be located anywhere. Let’s benchmark read/write access to contiguous direct and heap memory to try to determine which one is faster.
Accessing large chunks of contiguous memory
The test consists of allocating a byte array on the heap and a corresponding chunk of native memory holding the same amount of data. We then write and read sequentially a couple of times to measure which one is faster. We also test random access to arbitrary locations of the array and compare the results. The sequential test can be seen here, and the random one here. A sketch of such a test is shown below, followed by the results:
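Again, the linked tests are the authoritative versions; the sketch below only illustrates what such a contiguous-chunk comparison could look like. The post describes a byte array on the heap, whereas this sketch uses a long[] so both sides move 8 bytes per operation, which is an assumption about the linked code:

```java
// Sketch of the contiguous-chunk comparison (sizes and structure are assumptions).
public class ChunkAccessSketch {

    public static void main(String[] args) {
        final int longs = 1_000_000;                    // number of 8-byte slots

        long[] heapChunk = new long[longs];             // one contiguous heap array
        Memory memory = Direct.getInstance();
        long nativeChunk = memory.alloc(longs * 8L);    // one contiguous native block

        try {
            // Sequential heap writes:
            long t0 = System.nanoTime();
            for (int i = 0; i < longs; i++) heapChunk[i] = i;
            System.out.println("heap write:   " + (System.nanoTime() - t0) + " ns");

            // Sequential native writes:
            t0 = System.nanoTime();
            for (int i = 0; i < longs; i++) memory.putLong(nativeChunk + i * 8L, i);
            System.out.println("native write: " + (System.nanoTime() - t0) + " ns");

            // Reads mirror the writes; the random-access variant walks a
            // pre-shuffled index array instead of the sequential index i.
        } finally {
            memory.free(nativeChunk);                   // native memory is not reclaimed by the GC
        }
    }
}
```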
```
// with JIT and sequential access:
Number of Objects:  1,000  1,000,000  1,000,000,000
Heap Avg Write:        12       0.34           0.35
Native Avg Write:     102       0.71           0.69
Heap Avg Read:         12       0.29           0.28
Native Avg Read:      110       0.32           0.32

// without JIT and sequential access: (-Xint)
Number of Objects:  1,000  1,000,000  10,000,000
Heap Avg Write:         8          8           8
Native Avg Write:      91         92          94
Heap Avg Read:         10         10          10
Native Avg Read:       91         90          94

// with JIT and random access:
Number of Objects:  1,000  1,000,000  1,000,000,000
Heap Avg Write:        61       1.01           1.12
Native Avg Write:     151       0.89           0.90
Heap Avg Read:         59       0.89           0.92
Native Avg Read:      156       0.78           0.84

// without JIT and random access: (-Xint)
Number of Objects:  1,000  1,000,000  10,000,000
Heap Avg Write:        55         55          55
Native Avg Write:     141        142         140
Heap Avg Read:         55         55          55
Native Avg Read:      138        140         138
```

Conclusion: Heap memory is always faster than direct memory for sequential access. For random access, heap memory is a little bit slower than direct memory for big chunks of data, but not by much.
Final Conclusion
Working with native memory from Java has its uses, such as when you need to work with large amounts of data (> 2 gigabytes) or when you want to escape from the garbage collector [1]. However, in terms of latency, direct memory access from the JVM is not faster than heap access, as demonstrated above. The results actually make sense, since crossing the JVM barrier must have a cost. That is the same dilemma as choosing between a direct and a heap ByteBuffer. The speed advantage of a direct ByteBuffer is not access speed but the ability to talk directly to the operating system’s native I/O operations. Another great example, discussed by Peter Lawrey, is the use of memory-mapped files when working with time series.
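As a quick illustration of that last point, this is how the two kinds of ByteBuffer are obtained from the standard JDK API (the buffer size here is arbitrary):

```java
import java.nio.ByteBuffer;

public class ByteBufferExample {

    public static void main(String[] args) {
        // Heap buffer: backed by a byte[] inside the Java heap, managed by the GC.
        ByteBuffer heapBuffer = ByteBuffer.allocate(4096);

        // Direct buffer: backed by off-heap memory; I/O channels can read and write
        // it without first copying the data into an intermediate heap array.
        ByteBuffer directBuffer = ByteBuffer.allocateDirect(4096);

        heapBuffer.putLong(0, 42L);
        directBuffer.putLong(0, 42L);

        System.out.println(heapBuffer.isDirect());   // false
        System.out.println(directBuffer.isDirect()); // true
    }
}
```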
Reposted from: https://my.oschina.net/fourthmoon/blog/116146