Hello,
we have an entity with a <map>-type collection attached to it, mapped with <many-to-many>. The map's cache setting is <cache usage="nonstrict-read-write"/>.
The problem is that the map is often quite large, 500-1000 elements. But when Hibernate3 instantiates it from the cache (PersistentMap.initializeFromCache()), it creates a HashMap with default parameters and then calls .put() for each item from the serialized cache data.
HashMap's default capacity is 16, and it doubles its capacity whenever the element count reaches 75% of capacity. So initializing a HashMap with 1000 entries causes 7 resizes (which are expensive): 16->32->64->128->256->512->1024->2048. This consumes a lot of memory and CPU...
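For what it's worth, all of the resizes can be avoided when the element count is known up front by pre-sizing the map. A minimal sketch of the idea (the helper name capacityFor is mine, not anything from Hibernate):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    // Initial capacity large enough that `expected` entries fit without
    // a resize, given HashMap's default load factor of 0.75.
    static int capacityFor(int expected) {
        return (int) (expected / 0.75f) + 1;
    }

    public static void main(String[] args) {
        int entries = 1000;

        // Default-sized map: rehashes 16 -> 32 -> ... -> 2048 while filling.
        Map<Integer, String> resized = new HashMap<>();

        // Pre-sized map: capacityFor(1000) = 1334, which HashMap rounds up
        // to 2048; its threshold (1536) exceeds 1000, so no rehash occurs.
        Map<Integer, String> presized = new HashMap<>(capacityFor(entries));

        for (int i = 0; i < entries; i++) {
            resized.put(i, "v" + i);
            presized.put(i, "v" + i);
        }
        System.out.println(presized.size()); // prints 1000
    }
}
```

Since initializeFromCache() already knows the size of the serialized cache data, a patched version could presumably pass that size into the HashMap constructor the same way.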
Is there a way to work around this, or should I hack the source?
Best regards,
Aapo