I have the following problem in WebSphere 6.1 on DB2. My question first: Is it possible not to lock rows that are read via a WebSphere-managed data source?
During a transaction, all table rows that are read through Hibernate are locked. I did not see this issue in a standard J2SE environment; it only appeared after deploying the application on WebSphere.
The test case is the following:
- (app) read from DB
- (external process) update one of the rows read in first step
- (app) change one of the rows read in first step
- (app) commit
The expected behavior for this test case:
- The external application updates the row without any problems.
- In step 4, the application complains about changes made outside the session.
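For reference, detecting "changes made outside the session" in step 4 relies on Hibernate's optimistic locking, which requires a version column in the mapping. A minimal sketch, assuming an hbm.xml mapping for a hypothetical MyDataObject class (the table and column names are made up for illustration):

```xml
<class name="MyDataObject" table="MY_DATA_OBJECT">
    <id name="id" column="ID">
        <generator class="native"/>
    </id>
    <!-- optimistic-lock column: Hibernate raises a stale-object error at
         flush/commit when the stored version no longer matches this one -->
    <version name="version" column="VERSION" type="integer"/>
    <property name="name" column="NAME"/>
</class>
```

With such a mapping, conflicting external updates are detected at commit time instead of being prevented by row locks held for the whole transaction.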
However, the observed behavior is different: the update command executed by the external application (step 2) blocks. In step 4, the external application receives SQL0911N (deadlock or timeout), while the application under test commits successfully.
The problem is that the application reads many tables during the transaction, and the rows it reads stay locked until the commit in step 4.
The code is roughly as follows:
Code:
// start transaction
s.beginTransaction();
...
// read something from the DB; the rows read here end up locked
myDataObject = dao.readFromDb();
// change something
myDataObject.setName("something else");
// commit; only now are the locks released
dao.commit();
I have tried setting the connection isolation level (hibernate.connection.isolation) to 2 (read committed), as described in the appropriate section of the WebSphere documentation. This did not change anything.
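For reference, the value 2 is the raw java.sql.Connection isolation constant; the following self-contained snippet (no assumptions about the application) shows the mapping of the accepted values:

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // hibernate.connection.isolation takes the java.sql.Connection constants:
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // 8
    }
}
```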
A slightly different behavior can be observed with hibernate.connection.release_mode set to after_statement: in step 4, the application reports a deadlock error, while the external application updates the row without errors.
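For completeness, the two settings mentioned above as they would appear in hibernate.properties:

```
# java.sql.Connection.TRANSACTION_READ_COMMITTED
hibernate.connection.isolation=2
# return the connection to the pool after each statement
hibernate.connection.release_mode=after_statement
```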
Other settings: flush mode is set to commit; no transaction factory class is configured, because two persistence contexts are used.
I hope somebody can help with this issue.
Kariem