hypernate wrote:
I mean, why don't you trust the network layer to communicate data correctly, your hard disks and RAM to store data reliably, and so on? Nowadays, pretty much every layer of the hardware stack implements data reliability safeguards.
It's a not-uncommon alternate implementation of optimistic locking.
I have the typical need for it on this project: lots of legacy applications
(Cobol, Uniface) that aren't going to get upgraded to make them
version-aware. So, you do something like this:
1. Serialize the object that you retrieve from the DB, compute
a CRC (or another checksum; MD5 if you really want to be certain), and
attach it to the outbound object.
2. At some point, you get the object back for an update, so you lock
the row that you're updating (e.g., select ... for update), retrieve a fresh
object from the DB, serialize it, and compute the checksum for the
current set of values.
3. If the checksum of the object you just retrieved matches the
checksum of the object you just got back for an update, write the
new values to the DB. If not, throw a stale data exception.
There are a lot of drawbacks to this method, but sometimes
circumstances force you to adopt a less-than-optimal solution.
-r