The page gives the following code:
Code:
for ( int i=0; i<100000; i++ ) {
    Item item = new Item(...);
    session.save(item);
    if ( i % 100 == 0 ) {
        session.flush();
        session.clear();
    }
}
and says beneath:
Quote:
(...)remember to set the hibernate.jdbc.batch_size configuration property to an equivalent value and disable the second-level cache for the persistent class(...)
- equivalent value here means 100.
In the first iteration, i is 0, and 0 % 100 is 0, so the session gets flushed and cleared after just one insert, but the underlying JDBC batch is not full yet (1 is not 100). Doesn't the session then get out of sync with the underlying JDBC batch size? Won't Hibernate be, so to speak, one row behind JDBC? Or maybe this is not a problem at all?
This same piece of code is also in the reference guide, 13.1 Batch inserts.
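If my reading is right, the insert loop would only line up with the JDBC batch if the test were shifted by one, something like this (my own sketch, not from the book or the reference guide):
Code:
// Sketch only: assumes hibernate.jdbc.batch_size is set to 100.
int batchSize = 100;
for ( int i=0; i<100000; i++ ) {
    Item item = new Item(...);
    session.save(item);
    // flush after every full batch of 100 inserts: i = 99, 199, 299, ...
    if ( (i + 1) % batchSize == 0 ) {
        session.flush();
        session.clear();
    }
}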
However, there is another listing when it comes to batch updates:
Code:
int count=0;
while ( itemCursor.next() ) {
    Item item = (Item) itemCursor.get(0);
    modifyItem(item);
    if ( ++count % 100 == 0 ) {
        session.flush();
        session.clear();
    }
}
This one, in turn, stays in sync with JDBC (if my previous assumption was correct), because count is incremented before the modulo is computed, so the first flush happens only after 100 updates. The same code can be found in the reference guide.
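To make the difference in flush points concrete, here is a small standalone snippet (again my own, purely an illustration) that prints where each listing would flush, assuming a batch size of 100:
Code:
public class FlushPoints {
    public static void main(String[] args) {
        // Insert listing: tests i % 100 == 0, so it flushes at i = 0, 100, 200, ...
        // i.e. the first flush sends a JDBC batch of only 1 insert.
        for (int i = 0; i < 300; i++) {
            if (i % 100 == 0) {
                System.out.println("insert listing flushes after item #" + (i + 1));
            }
        }
        // Update listing: tests ++count % 100 == 0, so it flushes after
        // items 100, 200, 300, ... which matches the JDBC batch size exactly.
        int count = 0;
        for (int i = 0; i < 300; i++) {
            if (++count % 100 == 0) {
                System.out.println("update listing flushes after item #" + count);
            }
        }
    }
}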
Can you guys please explain this issue?
Thanks.