I've got an application where I need to load large (half a million elements) arrays of bytes. I'm currently using the primitive-array mapping, and that works just fine, but for some reason it takes a huge amount of memory to load the array. (A half-million-element array of bytes, about 1/2 MB of actual data, requires 96 MB of memory to load. Yes. 96.)
Is there any better way to map this? The array elements are stored in a separate table from the main object table, and I can't get around that. Also, I usually have to load the entire array, repeatedly.
Here's the mapping:
Code:
<class name="org.zographos.tilepile.data.MuralData" table="murals">
    <cache usage="read-only"/>
    <id name="id" type="int">
        <generator class="native"/>
    </id>
    <version name="version" unsaved-value="null"/>
    <property name="name" type="string" not-null="true" unique="true">
        <meta attribute="use-in-tostring">true</meta>
    </property>
    <primitive-array name="tiles" batch-size="100" outer-join="false">
        <cache usage="read-only"/>
        <key column="mural"/>
        <index column="idx"/>
        <element column="color" type="byte" not-null="true"/>
    </primitive-array>
</class>
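For reference, the mapped class is basically just a holder for the array. This is a rough sketch reconstructed from the mapping above (the type of the version field and the accessors are my assumptions, not something the mapping spells out):
Code:
package org.zographos.tilepile.data;

public class MuralData {
    private int id;           // <id name="id" type="int">
    private Integer version;  // unsaved-value="null" suggests a nullable wrapper type
    private String name;      // not-null, unique
    private byte[] tiles;     // the half-million-element array mapped as <primitive-array>

    // no-arg constructor plus plain getters/setters, omitted here
}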
It takes 96 MB to do the following query:
Code:
session.createCriteria(MuralData.class)
.add(Expression.eq("name", muralName)).uniqueResult();
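In case it matters, this is roughly how the result gets used afterwards; the getter name and the per-tile call are stand-ins for the real code:
Code:
MuralData mural = (MuralData) session.createCriteria(MuralData.class)
        .add(Expression.eq("name", muralName)).uniqueResult();

byte[] tiles = mural.getTiles();   // assumed accessor for the "tiles" property
for (int i = 0; i < tiles.length; i++) {
    handleTile(i, tiles[i]);       // placeholder for whatever consumes each byte
}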
Any suggestions for reducing memory usage?