Hi,
I am very new to Hibernate (and Java in general), so my best effort at finding the source of this problem may seem pathetic to y'all.
I think my problem is similar to:
http://forum.hibernate.org/viewtopic.php?t=934169
Could it be that lazily loaded collections are not being "unloaded" by session.clear(), or am I missing something?
I have followed the guidelines w.r.t. batch processing at:
http://blog.hibernate.org/cgi-bin/blosxom.cgi/Gavin King/batch.html
I am running this code in the context of a JUnit test, using Eclipse as the IDE. All my tests have been working nicely until I decided to extend the size of the test data from 25 to 10,000 (out of 360,000) debtors.
Hibernate version: 2.1.6
Mapping documents: debtor.getContacts() is mapped as:
<bag name="contacts" lazy="true" inverse="true" cascade="all">
    <key column="accountNo"/>
    <one-to-many class="za.org.nsfas.lms.data.Contact"/>
</bag>
The collections behind getDebtorStates(), getAddresses(), and getTransactions() are mapped the same way (only the class name differs).
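For example, the transactions bag differs only in the class attribute (the Transaction class name below is an illustrative guess based on the package; everything else matches the contacts bag above):

    <bag name="transactions" lazy="true" inverse="true" cascade="all">
        <key column="accountNo"/>
        <!-- class name illustrative; only this attribute differs -->
        <one-to-many class="za.org.nsfas.lms.data.Transaction"/>
    </bag>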
Code between sessionFactory.openSession() and session.close():
Transaction tx = session.beginTransaction();
try {
    Query query = session.createQuery(
            "from Debtor " +
            "where AccountNo > 1 and AccountNo <= 10000 " +
            "order by AccountNo");
    ScrollableResults list = query.scroll();
    assertNotNull(list);
    int count = 0;
    int sSize = 0, cSize = 0, aSize = 0, tSize = 0;
    String str = null;
    while (list.next()) {
        Debtor debtor = (Debtor) list.get(0);
        //sSize += debtor.getDebtorStates().size();
        cSize += debtor.getContacts().size();
        aSize += debtor.getAddresses().size();
        tSize += debtor.getTransactions().size();
        // usually the next line calls a complex calculation,
        // but that doesn't seem to be the source of the error
        AbstractDebtorStateBaseEnum newState = new DebtorStateEnum000();
        debtor.addDebtorState(newState);
        session.save(newState); // session.save(debtor);
        if (++count % 20 == 0) {
            log.debug("FLUSHING " + count + " (" +
                    "sSize=" + sSize + ", " +
                    "cSize=" + cSize + ", " +
                    "aSize=" + aSize + ", " +
                    "tSize=" + tSize + ")");
            session.flush();
            session.clear();
        }
    }
    tx.commit();
} catch (HibernateException e) {
    if (tx != null) {
        tx.rollback();
    }
    throw e;
} catch (OutOfMemoryError e) {
    e.printStackTrace();
    throw e;
} catch (Exception e) {
    throw e;
}
Full stack trace of any exception that occurs:
....
10:12:43,018 DEBUG DebtorDebtorStatesTest:516 - FLUSHING 600 (sSize=0, cSize=18498, aSize=1551, tSize=51865)
10:12:46,159 DEBUG DebtorDebtorStatesTest:516 - FLUSHING 620 (sSize=0, cSize=19197, aSize=1610, tSize=53690)
10:12:49,237 DEBUG DebtorDebtorStatesTest:516 - FLUSHING 640 (sSize=0, cSize=19757, aSize=1657, tSize=55271)
10:12:52,315 DEBUG DebtorDebtorStatesTest:516 - FLUSHING 660 (sSize=0, cSize=20171, aSize=1711, tSize=56741)
10:12:56,503 DEBUG DebtorDebtorStatesTest:516 - FLUSHING 680 (sSize=0, cSize=20837, aSize=1767, tSize=58526)
10:13:06,190 WARN JDBCExceptionReporter:38 - SQL Error: 0, SQLState: 08S01
10:13:06,190 ERROR JDBCExceptionReporter:46 - TDS Protocol error: Invalid packet type 0x0
java.lang.OutOfMemoryError
10:13:06,222 ERROR JDBCExceptionReporter:38 - could not initialize collection: [za.org.nsfas.lms.data.Debtor.transactions#708]
java.sql.SQLException: TDS Protocol error: Invalid packet type 0x0
at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:1654)
at net.sourceforge.jtds.jdbc.TdsCore.getNextRow(TdsCore.java:599)
at net.sourceforge.jtds.jdbc.JtdsResultSet.next(JtdsResultSet.java:511)
at net.sourceforge.jtds.jdbc.JtdsResultSet.close(JtdsResultSet.java:426)
at net.sf.hibernate.impl.BatcherImpl.closeQueryStatement(BatcherImpl.java:99)
at net.sf.hibernate.loader.Loader.doQuery(Loader.java:302)
at net.sf.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:133)
at net.sf.hibernate.loader.Loader.loadCollection(Loader.java:990)
....
Name and version of the database you are using: MS SQL Server 2000
If I comment out the line that loads the largest collection (tSize += debtor.getTransactions().size();) then the test runs to around record 3,500.
If I comment out all three "get*().size();" lines the loop gets further, but I still end up with the OutOfMemoryError.
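To check whether session.clear() is actually releasing anything, I am planning to log heap usage around the flush/clear block. This is only an untested sketch: usedHeap() is a helper I would add to the test class, and without an explicit gc the numbers are only indicative.

    // helper in the test class
    private long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // inside the loop, replacing the existing flush/clear block
    if (++count % 20 == 0) {
        log.debug("before clear: used=" + usedHeap());
        session.flush();
        session.clear();
        System.gc(); // only a hint to the VM, but makes the numbers more comparable
        log.debug("after clear: used=" + usedHeap());
    }

If the "after clear" number keeps climbing, I'll know the collections (or something else) are being retained across batches.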
I notice that the "caveat" in the batch-processing blog mentions disabling second-level caching for the "Customer" class (in my case "Debtor"). How do I do that? Google doesn't seem to offer much help, so a pointer in the right direction would be very much appreciated.
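From my reading of the reference docs, my unconfirmed guess is that the second-level cache is only active for a class that has an explicit cache mapping, i.e. either a <cache> element in the class mapping or a <class-cache> element in hibernate.cfg.xml, along the lines of:

    <!-- in Debtor.hbm.xml, directly under <class>: -->
    <cache usage="read-write"/>

    <!-- or in hibernate.cfg.xml: -->
    <class-cache class="za.org.nsfas.lms.data.Debtor" usage="read-write"/>

If neither is present, does that mean Debtor is simply not cached at the second level, or is there more to it?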
Many thanks,
BruceS