A collection with billions of entries
Overview
There are a number of problems with having a large number of records in memory. One way around this is to use direct memory, but this is too low level for most developers. Is there a way to make this more friendly?
Limitations of large numbers of objects
- The overhead per object is between 12 and 16 bytes on 64-bit JVMs. If the object is relatively small, this is significant and can be more than the data itself (see the rough estimate sketched after this list).
- The GC pause time increases with the number of objects. Pause times can be around one second per GB of objects.
- Collections and arrays only support up to about two billion elements, as indexes are limited to int.
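To make the first point concrete, here is a rough back-of-the-envelope estimate (my own illustration, not taken from the post; the header, alignment, and reference sizes are typical for a 64-bit JVM with compressed oops and vary by JVM and settings) of storing a billion one-byte values as individual objects versus as raw bytes.

public class OverheadEstimate {
    public static void main(String[] args) {
        long entries = 1_000_000_000L;
        long headerBytes = 12;    // typical object header with compressed oops (assumed)
        long payloadBytes = 1;    // one byte of actual data
        long objectSize = align(headerBytes + payloadBytes, 8); // objects are 8-byte aligned
        long referenceBytes = 4;  // compressed reference held by the collection (assumed)

        System.out.printf("As objects:   ~%,d bytes (%d per entry)%n",
                entries * (objectSize + referenceBytes), objectSize + referenceBytes);
        System.out.printf("As raw bytes: ~%,d bytes (1 per entry)%n",
                entries * payloadBytes);
    }

    private static long align(long size, long alignment) {
        return (size + alignment - 1) / alignment * alignment;
    }
}

With these assumed figures, each one-byte entry costs roughly 20 bytes on the heap, in line with the overhead described above.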
Huge collections
One way to store more data and still follow object-oriented principles is to have wrappers for direct ByteBuffers. This can be tedious to write, but it is very efficient. What would be ideal is to have these wrappers generated automatically.
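As an illustration of what such a hand-written wrapper looks like (the class and method names below are hypothetical, not part of the actual library), a minimal sketch over a direct ByteBuffer might be:

import java.nio.ByteBuffer;

// Hypothetical hand-written wrapper: the data lives off the Java heap in a direct
// ByteBuffer, so the GC never has to trace billions of individual objects.
class ByteColumn {
    private final ByteBuffer buffer;

    ByteColumn(int capacity) {
        this.buffer = ByteBuffer.allocateDirect(capacity);
    }

    byte getByte(int index) {
        return buffer.get(index);
    }

    void setByte(int index, byte value) {
        buffer.put(index, value);
    }
}

Writing one of these per field and per type is the tedious part; generating them automatically is the idea behind the library used below.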
Small JavaBean Example
This is an example of a JavaBean which would have far more overhead than the actual data it contains.

interface MutableByte {
    public void setByte(byte b);

    public byte getByte();
}
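For comparison, a plain on-heap implementation of this interface (my own illustrative sketch, not from the post) would look like the following; each instance carries an object header plus padding for a single byte of data, which is why holding billions of them directly is impractical.

// Hypothetical plain JavaBean implementation of MutableByte: one byte of data,
// but 12-16 bytes of header plus padding per instance on a 64-bit JVM.
class MutableByteBean implements MutableByte {
    private byte value;

    public void setByte(byte b) { this.value = b; }

    public byte getByte() { return this.value; }
}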
It is also small enough that I can create billions of these on my machine. This example creates a List<MutableByte> with 16 billion elements.
final long length = 16_000_000_000L;
HugeArrayList<MutableByte> hugeList = new HugeArrayBuilder<MutableByte>() {{
    allocationSize = 4 * 1024 * 1024;
    capacity = length;
}}.create();
List<MutableByte> list = hugeList;
assertEquals(0, list.size());

hugeList.setSize(length);

// add a GC to see what the GC times are like.
System.gc();

assertEquals(Integer.MAX_VALUE, list.size());
assertEquals(length, hugeList.longSize());

byte b = 0;
for (MutableByte mb : list)
    mb.setByte(b++);

b = 0;
for (MutableByte mb : list) {
    byte b2 = mb.getByte();
    byte expected = b++;
    if (b2 != expected)
        assertEquals(expected, b2);
}

From start to finish, the heap memory used is as follows, with -verbosegc:
0 sec - 3100 KB used
[GC 9671K->1520K(370496K), 0.0020330 secs]
[Full GC 1520K->1407K(370496K), 0.0063500 secs]
10 sec - 3885 KB used
20 sec - 4428 KB used
30 sec - 4428 KB used
... deleted ...
1380 sec - 4475 KB used
1390 sec - 4476 KB used
1400 sec - 4476 KB used
1410 sec - 4476 KB used

The only GC is the one triggered explicitly. Without the System.gc(), no GC logs appear.
After 20 sec, the only increase in memory used comes from the logging of memory usage itself.
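The post does not show the logging code; a minimal sketch of how such periodic heap reporting could be done (assuming a simple daemon thread and the standard Runtime API) is:

// Hypothetical monitoring thread producing lines like "10 sec - 3885 KB used".
Thread monitor = new Thread(() -> {
    long start = System.currentTimeMillis();
    while (!Thread.currentThread().isInterrupted()) {
        Runtime rt = Runtime.getRuntime();
        long usedKB = (rt.totalMemory() - rt.freeMemory()) / 1024;
        long elapsedSec = (System.currentTimeMillis() - start) / 1000;
        System.out.println(elapsedSec + " sec - " + usedKB + " KB used");
        try {
            Thread.sleep(10_000); // report every 10 seconds
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
});
monitor.setDaemon(true);
monitor.start();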
Conclusion
The library is relatively slow. Each get or set takes about 40 ns, which really adds up when there are so many calls to make. I plan to work on it so it is much faster. ;)

On the upside, it wouldn't be possible to create 16 billion objects with the memory I have, nor could they be stored in an ArrayList, so having it a little slow is still better than not working at all.
What if objects don't have the same size (polymorphism)?
What about the overhead of the .contains(Object) method?
What about random access for fixed-size and variable-size objects?
@kobrys, good questions. The objects have to be described by a single interface, so polymorphism is not supported. Another "object" can be referenced instead.
This could be supported in the future.
The overhead of contains() is likely to be light, but the performance would be horrendous. As the data is not sorted or indexed, it will be a slow O(n) scan (a linear scan is sketched after this reply). The scan time is per field compared; scanning one field would take about 10 ns per entry.
The random access time is about 80 ns (less if not all fields are used). Variable-size elements need to be referenced from another "object", adding another 10-20 ns.
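To illustrate the contains() point, here is an illustrative sketch (not library code; it assumes the MutableByte interface and HugeArrayList from the post above) of a linear search over a single field:

// Hypothetical linear search over one field: O(n), visiting every entry once.
static long indexOf(HugeArrayList<MutableByte> list, byte target) {
    long index = 0;
    for (MutableByte mb : list) {   // end-to-end scan, roughly 10 ns per entry per field
        if (mb.getByte() == target)
            return index;
        index++;
    }
    return -1;                      // not found
}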
I have added some tests. If all the fields are accessed randomly, it takes 340 ns on average for a 12-field object. If you scan end to end, it takes about 10 ns per object per field.
For JavaBeans, it takes 184 ns to access them randomly in the same way, and 14 ns per object to scan.
If you have a List with 10 million objects and you search for an entry, it can take 0.05 seconds.