Hibernate version:3.0.5
Name and version of the database you are using: Oracle 10g
Ok, here's the deal.
I have one Oracle DB.
I have multiple processes accessing the DB through a single API that wraps Hibernate.
These processes communicate with the outside world and, between them, should be able to insert roughly 10 million records a day.
Because of this volume, performance is critical, so I don't want to use any kind of locking on the DB.
The chance of two processes trying to insert the same unique data at the same time is slim, but real.
Some problematic scenarios can happen. For instance:
process 1: insert record with id=123
process 2: select record with id=123 (not found, because process 1 hasn't committed yet)
process 1: commit
process 2: insert record with id=123
process 2: commit
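To make the race concrete, here's a plain-Java sketch of it (no database involved; a shared HashSet stands in for the unique index, and each "transaction" buffers its inserts until commit; all class and method names here are mine, purely illustrative):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// The shared Set plays the unique index; each Tx buffers its inserts
// and only touches the "index" at commit time, like a deferred flush.
class UniqueIndexRace {
    static final Set<Integer> uniqueIndex = new HashSet<>();

    static class Tx {
        final List<Integer> pending = new ArrayList<>();

        boolean select(int id) {            // like process 2's existence check
            return uniqueIndex.contains(id);
        }
        void insert(int id) {               // buffered, like session.save
            pending.add(id);
        }
        void commit() {                     // constraint is checked only here
            for (int id : pending) {
                if (!uniqueIndex.add(id)) {
                    // note: the message doesn't say which id, just like
                    // the real ConstraintViolationException
                    throw new IllegalStateException("unique constraint violated");
                }
            }
        }
    }

    public static void main(String[] args) {
        Tx p1 = new Tx();
        Tx p2 = new Tx();
        p1.insert(123);                     // process 1: insert 123
        boolean seen = p2.select(123);      // process 2: select -> not found
        p1.commit();                        // process 1: commit
        p2.insert(123);                     // process 2: insert 123 anyway
        try {
            p2.commit();                    // process 2: commit -> blows up
        } catch (IllegalStateException e) {
            System.out.println("seen before p1 commit: " + seen
                    + ", p2 commit failed: " + e.getMessage());
        }
    }
}
```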
When process 2 commits, it fails with this runtime exception:
org.hibernate.exception.ConstraintViolationException: could not insert: [dbObjects.generated.TestTableDTO]
at org.hibernate.exception.ErrorCodeConverter.convert(ErrorCodeConverter.java:74)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.persister.entity.BasicEntityPersister.insert(BasicEntityPersister.java:1869)
at org.hibernate.persister.entity.BasicEntityPersister.insert(BasicEntityPersister.java:2200)
at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:46)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:239)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:223)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:136)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:274)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:730)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:324)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:86)
The problem is this:
In my API, to improve performance, I don't commit after each session.save().
Instead, I commit only after 1000 records have been added via session.save().
How do I commit?
session.flush();
transaction.commit();
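In outline, the commit policy looks like this (a plain-Java counting sketch, not real Hibernate code; the comments mark where the session/transaction calls would sit):

```java
// Batching policy: save records one by one, flush + commit only once per
// batch of 1000. Counters stand in for the real Hibernate calls.
class BatchCommitSketch {
    static final int BATCH = 1000;

    /** Returns how many flush+commit cycles N saves produce under this policy. */
    static int commitsFor(int records) {
        int commits = 0, inBatch = 0;
        for (int i = 0; i < records; i++) {
            // session.save(record) would go here
            if (++inBatch == BATCH) {
                // session.flush(); transaction.commit(); begin a new transaction
                commits++;
                inBatch = 0;
            }
        }
        if (inBatch > 0) commits++;   // final partial batch
        return commits;
    }

    public static void main(String[] args) {
        System.out.println(commitsFor(2500)); // two full batches + a remainder
    }
}
```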
If some other process manages to insert and commit the record in the window between my select check and my commit, I fail with the exception above.
As you can see, the exception doesn't indicate which unique record made the whole commit fail. (Unlike calling session.save() on unique data already inserted in the current session, where the exception message does include the value of the unique id.)
What I'm asking is: is there any way to know which record first caused the unique constraint error during transaction.commit()?
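The only recovery I can think of so far is to replay the failed batch one record at a time on the client side, so the offender surfaces. A plain-Java sketch of that idea (a HashSet again stands in for the unique index; all names are mine, not a real API):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// After a batch commit hits the unique constraint, walk the same batch
// record by record against the committed state to find the first offender.
class FindOffender {
    static Integer firstDuplicate(List<Integer> batch, Set<Integer> index) {
        Set<Integer> trial = new HashSet<>(index);   // fresh "transaction" state
        for (int id : batch) {
            if (!trial.add(id)) {
                return id;          // this insert would violate the constraint
            }
        }
        return null;                // whole batch would commit cleanly
    }

    public static void main(String[] args) {
        // 123 was committed by the other process in the meantime
        Set<Integer> index = new HashSet<>(List.of(123));
        List<Integer> batch = List.of(7, 123, 9);
        System.out.println(firstDuplicate(batch, index)); // 123
    }
}
```

Doing this against the real DB would mean re-running the 1000 saves with a flush per record, which defeats the batching, so I'm hoping there's a better way.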