We have a cluster containing 2 servers. There are multiple queues on each server, and each queue pulls the tasks it wants to process from a single shared DB: the tasks in the database table have the queueId as one of the columns, so each queue selects only its own rows. This configuration works fine on a single node, but in a cluster the database table can receive requests for tasks with the same queueId from more than one node. We want to make sure that a task is processed by only one queue in the cluster. I played around with the Hibernate locks and used the lock strategy LockMode.UPGRADE, but it does not seem to solve my problem.
The problem is that I want the first queue reading tasks from the DB to lock all the rows that have a given queueId (let's say 123), batch-fetch all those records, and prevent transactions in other sessions from reading the same rows (a flag is set once a row is read by the first session, so the second transaction should not pick up the same row again). When both transactions are started simultaneously, the behavior we noticed is that the transactions overstep each other: the first record is read by the first thread, and then the second thread reads the same row and processes it before the first thread has committed.

Is there a way in Hibernate to solve this problem, or am I doing something wrong? I hope my problem is clear and that I'll get some helpful suggestions. I appreciate all your responses.
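For reference, this is the claim-and-flag pattern I'm trying to get working on both nodes (again just a sketch; taken is the hypothetical flag column, and I assume the second node only sees the flag after the first node commits):

    import java.util.Iterator;
    import java.util.List;
    import org.hibernate.LockMode;
    import org.hibernate.Transaction;

    Transaction tx = session.beginTransaction();
    // Lock the unclaimed rows for this queue so a concurrent
    // locking read on the other node blocks until we commit
    List tasks = session.createQuery(
            "from Task t where t.queueId = :qid and t.taken = false")
        .setLong("qid", 123L)
        .setLockMode("t", LockMode.UPGRADE)
        .list();
    for (Iterator it = tasks.iterator(); it.hasNext();) {
        Task task = (Task) it.next();
        task.setTaken(true);   // flag the row so the other transaction skips it
    }
    tx.commit();               // locks released; the other node should now see taken = true

My understanding is that this can only work if both nodes run the select with LockMode.UPGRADE, since a plain (non-locking) read would not block and could still see the rows before the flag is committed. Is that correct, or is there a better approach?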