That was helpful, although it didn't solve our problem. Basically, our application works almost exclusively with detached objects. An object is retrieved in one session, sent out to the front end in DTO form, edited by a user, and submitted back for persistence in a new session. Inside this new session, almost all of our updates are predicated on looking the incoming object up in the database, which populates the Hibernate session cache.
When we submit the incoming record for persistence with the merge call, what we were seeing is that the merge overwrote the version field of the incoming record with the version of the record just looked up into the cache. This makes sense from what I've read. The problem is that it breaks our concurrent update checking: when flush is called, no optimistic locking exceptions are thrown (which again makes sense, since the version field is now "correct" thanks to the merging of the incoming detached object with the recently looked-up object).
So the way I solved it was to write my own custom MergeEventListener. My listener intercepts the merge and checks the session cache for the record; if it is there, I compare the version of the incoming record against the cached version and throw my own optimistic locking exception on a mismatch. If there is no conflict, I simply pass the event on to the DefaultMergeEventListener provided by Hibernate. A sketch of the idea is below.
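For anyone who wants to do something similar, here is a minimal sketch, not my production code. It assumes the Hibernate event SPI packages (org.hibernate.event.spi / org.hibernate.event.internal; older 3.x releases keep these classes under org.hibernate.event and org.hibernate.event.def), a hypothetical VersionedEntity interface exposing getId() and getVersion(), and it uses Hibernate's built-in StaleObjectStateException in place of the custom optimistic locking exception I actually throw.

Code:

import java.io.Serializable;

import org.hibernate.HibernateException;
import org.hibernate.StaleObjectStateException;
import org.hibernate.event.internal.DefaultMergeEventListener;
import org.hibernate.event.spi.EventSource;
import org.hibernate.event.spi.MergeEvent;

// Illustrative interface only; real entities would expose id/version however the mappings define them.
interface VersionedEntity {
    Serializable getId();
    Number getVersion();
}

public class VersionCheckingMergeEventListener extends DefaultMergeEventListener {

    @Override
    public void onMerge(MergeEvent event) throws HibernateException {
        Object detached = event.getOriginal();

        if (detached instanceof VersionedEntity) {
            VersionedEntity incoming = (VersionedEntity) detached;
            EventSource session = event.getSession();

            // Look the record up; if it was already loaded earlier in this session,
            // this hits the first-level cache rather than the database.
            VersionedEntity current = (VersionedEntity) session.get(
                    detached.getClass(), incoming.getId());

            // A version mismatch means someone else updated the row after the DTO
            // went out to the front end, so fail instead of silently merging over it.
            if (current != null && !current.getVersion().equals(incoming.getVersion())) {
                throw new StaleObjectStateException(
                        detached.getClass().getName(), incoming.getId());
            }
        }

        // No conflict detected: hand the event on to Hibernate's default merge handling.
        super.onMerge(event);
    }
}

The listener then has to be registered in place of the default merge listener, e.g. via the event configuration in hibernate.cfg.xml or, on newer versions, through the EventListenerRegistry, depending on which Hibernate release you are on.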