mihalcea_vlad wrote:
Your assumptions go against the application-level repeatable reads guarantees. Hibernate will always return the currently managed entity, no matter if you lock it or not. That is desirable because it allows you to prevent lost updates.
However, you can still achieve what you want by manually refreshing the entity after you acquired the lock.
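For reference, the suggested workaround maps onto the standard JPA calls below. This is a non-runnable sketch (the `Person` entity and the surrounding transaction are assumed; the helper name is mine): lock the managed entity, then refresh it so the in-memory state matches the row the database just locked.

```java
import javax.persistence.EntityManager;
import javax.persistence.LockModeType;

public class LockThenRefresh {

    // Assumes a mapped @Entity Person with a bankBalance field.
    static Person lockAndRefresh(EntityManager em, Long personId) {
        Person p = em.find(Person.class, personId);
        em.lock(p, LockModeType.PESSIMISTIC_WRITE); // issues SELECT ... FOR UPDATE
        em.refresh(p); // discard the cached snapshot, re-read the now-locked row
        return p;
    }
}
```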
Thank you for your patience and detailed response. I understand your point, but I still disagree: we do actually get lost updates (scenario below). I believe the current Hibernate behavior actually makes it easier to run into lost updates when you are using PESSIMISTIC locking, and I don't think application-level repeatable reads can prevent them. FWIW, while PostgreSQL's database-level REPEATABLE READ does prevent lost updates, MySQL's REPEATABLE READ isolation level does not.
My expectation does go against the application-level repeatable reads guarantee, but since I am locking the entity with PESSIMISTIC_WRITE, it makes sense to NOT apply that guarantee to the locking query and instead read the actual value from the database row. (MySQL operates under the same assumption: it does provide database-level repeatable reads but deliberately breaks them when issuing SELECT ... FOR UPDATE, for this very reason, namely that the row is now locked for this transaction.)
As you mentioned in the blog: "If there would be only one database transaction at any time then all operations would be executed sequentially." When using PESSIMISTIC_WRITE there is effectively only one transaction touching that row at a time, because the lock serializes access to it, and you need the value that is current at the moment the lock is acquired in order to avoid lost updates.
This is the scenario that causes lost updates for us:
1. Transaction T1 loads Person p1
2. Transaction T2 loads Person p2
3. T2 reads p2.bankBalance = VAL1. The entity is added to T2's persistence context.
4. T1 reads p1.bankBalance = VAL1, locks the row, updates the balance to a new value VAL3, and commits successfully.
5. T2 acquires the lock with PESSIMISTIC_WRITE. Hibernate correctly issues SELECT FOR UPDATE, which reads VAL3 for the balance from the database. However, because of application-level repeatable reads, Hibernate does not hydrate the entity from this result set; it keeps the old p2.bankBalance = VAL1.
6. T2 updates p2.bankBalance, e.g. by increasing it by the integer 10. Now p2.bankBalance = VAL1 + 10.
7. T2 commits successfully, but the update from step 4 is lost.
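The interleaving above can be reproduced without a database. In this plain-Java sketch (all names are mine), a Map stands in for the table row and a per-transaction copy stands in for T2's persistence context; with VAL1 = 100 and VAL3 = 300, T2's commit yields 110 instead of 310:

```java
import java.util.HashMap;
import java.util.Map;

public class LostUpdateDemo {

    static int run() {
        // The "database": one row with Person.bankBalance = 100 (VAL1).
        Map<String, Integer> db = new HashMap<>();
        db.put("bankBalance", 100);

        // Steps 1-3: T2 loads the row into its own persistence
        // context (first-level cache).
        Map<String, Integer> t2Cache = new HashMap<>(db);

        // Step 4: T1 locks the row, writes VAL3 = 300 and commits.
        db.put("bankBalance", 300);

        // Step 5: T2 issues SELECT ... FOR UPDATE, which returns 300,
        // but application-level repeatable reads keep the cached 100.
        int fromSelectForUpdate = db.get("bankBalance"); // 300, discarded
        int staleValue = t2Cache.get("bankBalance");     // 100, what Hibernate uses

        // Steps 6-7: T2 adds 10 to the stale value and commits.
        db.put("bankBalance", staleValue + 10);

        return db.get("bankBalance"); // 110, not 310: T1's update is lost
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```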
I understand that this scenario can be prevented by OPTIMISTIC locking. In this particular case, though, we would prefer some kind of PESSIMISTIC locking. With the current Hibernate behavior, the only ways to avoid the scenario above are to refresh EVERY entity retrieved by the query in step 5, or to clear the persistence context before acquiring the lock.
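Under the same toy model as the scenario, refreshing after the lock (what em.refresh(p2) would do following the PESSIMISTIC_WRITE acquisition) replaces the stale cached value with the locked row's value, and T1's update survives; all names here are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class RefreshAfterLockDemo {

    static int run() {
        // The "database": Person.bankBalance = 100 (VAL1).
        Map<String, Integer> db = new HashMap<>();
        db.put("bankBalance", 100);

        // T2 loads the entity into its persistence context.
        Map<String, Integer> t2Cache = new HashMap<>(db);

        // T1 locks, writes VAL3 = 300 and commits.
        db.put("bankBalance", 300);

        // T2 acquires the PESSIMISTIC_WRITE lock, then refreshes:
        // the managed entity is overwritten with the locked row's value.
        t2Cache.put("bankBalance", db.get("bankBalance")); // "em.refresh(p2)"

        // T2 increments the *current* value and commits.
        db.put("bankBalance", t2Cache.get("bankBalance") + 10);

        return db.get("bankBalance"); // 310: no lost update
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```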