Steve, thanks for that input. OK, here is perhaps a better example, please bear with me:
Our application processes changes to user accounts. It is a web-services-based system using SOAP. Each SOAP request entering the system can contain one or more account changes for one or more people. Sometimes, as in the case I am trying to troubleshoot, a request may contain two or more changes to the same account. Each SOAP request containing multiple account changes is considered one business transaction: either all updates succeed or all get rolled back. This is what we consider a "Unit of Work".
The general application flow looks something like this (all of this happens within the same Hibernate session):
0. A SOAP request enters the system requesting changes to accounts 110 and 120. There are 2 updates requested for account 110...so....
1. find account 110 for update
2. update account 110 with info
3. place reference to updated acct 110 in unit of work
4. find account 120 for update
5. update account 120 with new info
6. place reference to updated acct 120 in unit of work
7. again, find account 110 for update
8. update account 110 with info
9. place reference to updated acct 110 in unit of work
10. If no errors, execute commit
As our application processes each update contained in the incoming SOAP request, it makes the changes to the account entities and places a reference to each updated account into a UnitOfWork object. We have represented this Unit of Work as a simple class containing a list of accounts and the operation to be performed (delete/save). Our account objects use optimistic locking with timestamp versioning.
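In case it helps, here is a minimal sketch of what that Unit of Work container looks like. Apart from the DELETE/SAVE_OR_UPDATE constants and the EntityOperation name, which appear in our commit code, the class and method names here are just illustrative, not our real code:

```java
import java.util.ArrayList;
import java.util.List;

// One queued change: the updated entity plus the operation to run at commit.
class EntityOperation {
    static final String DELETE = "DELETE";
    static final String SAVE_OR_UPDATE = "SAVE_OR_UPDATE";

    final Object entity;     // the updated account entity
    final String operation;  // DELETE or SAVE_OR_UPDATE

    EntityOperation(Object entity, String operation) {
        this.entity = entity;
        this.operation = operation;
    }
}

// Accumulates changes during request processing; nothing hits the
// database until commit time.
class UnitOfWork {
    private final List<EntityOperation> entityChanges = new ArrayList<EntityOperation>();

    void register(Object entity, String operation) {
        entityChanges.add(new EntityOperation(entity, operation));
    }

    List<EntityOperation> getEntityChanges() {
        return entityChanges;
    }

    public static void main(String[] args) {
        UnitOfWork uow = new UnitOfWork();
        Object acct110 = new Object(); // stand-in for an account entity
        uow.register(acct110, EntityOperation.SAVE_OR_UPDATE);
        uow.register(acct110, EntityOperation.SAVE_OR_UPDATE); // same account queued twice
        System.out.println(uow.getEntityChanges().size()); // prints 2
    }
}
```

Note that nothing stops the same account from being registered twice, which is exactly the situation in the flow above.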
At commit, our application loops over the list and executes a session.delete() or session.saveOrUpdate() for each entry. Then, once we know we don't have any exceptions, all queued change requests are committed as a whole. If there are faults, all changes are rolled back.
Code:
session = getActiveSession();
tx = session.beginTransaction( sessionContext );
if( !ServiceRequestContext.getInstance().getSSPSOAPFault().hasFault() )
{
    Iterator i = entityChanges.iterator();
    while( i.hasNext() )
    {
        EntityOperation entityOperation = (EntityOperation) i.next();
        BaseEntity b = entityOperation.getEntity();
        Object op = entityOperation.getOperation();

        // switch based on requested operation
        // == instead of equals() is OK because we're comparing static final String refs
        if( UnitOfWorkImpl.DELETE == op )
        {
            session.delete( b );
        }
        else if( UnitOfWorkImpl.SAVE_OR_UPDATE == op )
        {
            session.saveOrUpdate( b );
        }
    }
}
tx.commit();
OK, and now the problem:
As you can see from the list above, acct 110 is fetched twice, updated twice with different info, and referenced twice in our update list. When we loop over our accounts to commit the changes, the first flush of acct 110 executes fine, but the second naturally causes a StaleObjectStateException, since the second acct 110 reference is now out of sync with the changes already flushed for the first instance.
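To make the failure mode concrete, here is a toy model of version-based optimistic locking in plain Java (no Hibernate; all names are illustrative). Both queued references to acct 110 carry the version they were read at; the first flush bumps the row's version, so the second flush fails the version check, which is the point at which Hibernate raises the stale-object error:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of version-based optimistic locking, showing why the second
// queued reference to the same account fails at flush time.
class OptimisticLockDemo {
    // "database": account id -> current row version
    static final Map<String, Integer> dbVersion = new HashMap<String, Integer>();

    // A reference to an account, carrying the version it was read at.
    static class AccountRef {
        final String id;
        final int version;
        AccountRef(String id, int version) { this.id = id; this.version = version; }
    }

    // Flush succeeds only if the reference's version still matches the row.
    static boolean flush(AccountRef ref) {
        int current = dbVersion.get(ref.id);
        if (current != ref.version) {
            return false; // Hibernate would throw StaleObjectStateException here
        }
        dbVersion.put(ref.id, current + 1); // a successful update bumps the version
        return true;
    }

    public static void main(String[] args) {
        dbVersion.put("110", 1);
        AccountRef first  = new AccountRef("110", 1); // change #1, read at v1
        AccountRef second = new AccountRef("110", 1); // change #2, also read at v1
        System.out.println(flush(first));  // true  -- row is now at v2
        System.out.println(flush(second)); // false -- still carries v1: stale
    }
}
```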
The reason we constructed our persistence logic like this is that we need tight control over what is considered a transaction, and we need to roll back a whole package's worth of changes if something goes wrong anywhere. Naturally, in a case like this, where we might have multiple changes to the same account, we run into this problem.
I have experimented with FlushModes, the second-level cache, and refreshing the accounts at different stages, but I still run into the same problem. Any ideas on how this could either be solved or redesigned?
Many thanks.