
All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 10 posts ] 
Author Message
 Post subject: Need help with versioning on optimistic locking mechanism
PostPosted: Fri Jul 06, 2007 4:00 am 
Newbie

Joined: Fri Jul 06, 2007 3:03 am
Posts: 6
With the optimistic locking mechanism, the version number of a persistent object is incremented automatically.

But the version column is a fixed-length integer or long, so what is the behaviour when the value of the version column reaches its maximum (e.g. a record that has been modified many times)? Will Hibernate handle this case, or just let the "value is too large for this column" exception be thrown? Will the value wrap around to 1?


Last edited by eugene.hsieh on Sat Jul 07, 2007 12:35 am, edited 1 time in total.

 Post subject:
PostPosted: Sat Jul 07, 2007 12:34 am 
Newbie

Joined: Fri Jul 06, 2007 3:03 am
Posts: 6
Anyone have any idea about this?


 Post subject:
PostPosted: Sat Jul 07, 2007 8:18 am 
Newbie

Joined: Fri Jul 06, 2007 3:03 am
Posts: 6
It is quite strange that no one has run into this problem. What about the Hibernate administrators?


 Post subject:
PostPosted: Sun Jul 08, 2007 4:13 pm 
Regular

Joined: Fri Dec 17, 2004 10:38 am
Posts: 54
As far as I can see, DefaultFlushEntityEventListener just calls VersionType.next(...), and no implementation of VersionType is overflow-aware.
So you just get an overflowed version value; if the db cannot handle it, you get a "value too large" SQLException.
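
In plain Java terms, the increment a non-overflow-aware VersionType performs is just ordinary integer arithmetic, which wraps silently. A minimal sketch (a hypothetical stand-in, not Hibernate's actual code):

```java
public class VersionNextSketch {
    // Hypothetical stand-in for what VersionType.next(...) effectively does
    // for a short-typed version property: increment with no bounds check.
    static short next(short current) {
        return (short) (current + 1); // wraps silently past Short.MAX_VALUE
    }

    public static void main(String[] args) {
        System.out.println(next(Short.MAX_VALUE)); // prints -32768
    }
}
```

Java itself never throws on the overflow; whether an exception appears depends entirely on whether the database column can store the wrapped value.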


 Post subject:
PostPosted: Mon Jul 09, 2007 2:34 am 
Newbie

Joined: Fri Jul 06, 2007 3:03 am
Posts: 6
I performed the test, and the exception is indeed thrown. For a long-running system, it is suggested to use a timestamp instead of an integer in the version column.

2007-07-09 13:31:00,661 DEBUG [ConnectionManager]: transaction completed on session with on_close connection release mode; be sure to close the session to release JDBC resources!
org.springframework.dao.InvalidDataAccessResourceUsageException: could not update: [com.dbs.vickers.framework.persistence.data.User#1190805162]; nested exception is org.hibernate.exception.DataException: could not update: [com.dbs.vickers.framework.persistence.data.User#1190805162]
Caused by: org.hibernate.exception.DataException: could not update: [com.dbs.vickers.framework.persistence.data.User#1190805162]
at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:75)
at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2222)
at org.hibernate.persister.entity.AbstractEntityPersister.updateOrInsert(AbstractEntityPersister.java:2118)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2374)
at org.hibernate.action.EntityUpdateAction.execute(EntityUpdateAction.java:84)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:243)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:227)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:141)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:296)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1009)
at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:356)
at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:575)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:651)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:621)
at com.dbs.vickers.framework.persistence.IntegrateWithAuditTrailFrameworkTest.updateNormal(IntegrateWithAuditTrailFrameworkTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
at org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
at org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
at org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
at org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:71)
at org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
at org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
at org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
at org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:38)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
Caused by: java.sql.SQLException: ORA-01438: value larger than specified precision allowed for this column

at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.ttc7.TTIoer.processError(TTIoer.java:289)
at oracle.jdbc.ttc7.Oall7.receive(Oall7.java:573)
at oracle.jdbc.ttc7.TTC7Protocol.doOall7(TTC7Protocol.java:1891)
at oracle.jdbc.ttc7.TTC7Protocol.parseExecuteFetch(TTC7Protocol.java:1093)
at oracle.jdbc.driver.OracleStatement.executeNonQuery(OracleStatement.java:2047)
at oracle.jdbc.driver.OracleStatement.doExecuteOther(OracleStatement.java:1940)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:2709)
at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:589)
at org.hibernate.persister.entity.AbstractEntityPersister.update(AbstractEntityPersister.java:2208)
... 35 more
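
A timestamp-based mapping, as suggested above, might look like this (a hypothetical sketch; the class and property names are placeholders, not taken from the actual project):

```xml
<class name="User" table="USERS">
    <id name="id" column="ID" type="long">
        <generator class="native"/>
    </id>
    <!-- timestamp-based versioning instead of an integer <version> -->
    <timestamp name="lastModified" column="LAST_MODIFIED"/>
</class>
```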


 Post subject:
PostPosted: Mon Jul 09, 2007 6:09 am 
Regular

Joined: Fri Dec 17, 2004 10:38 am
Posts: 54
Eugene, could you please increase the db column size (to hold the MAX_VALUE of your version field type) and run your test again?

Besides, if you have SO many updates, a timestamp version may break version incrementing under concurrent updates.
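
The concern is that System.currentTimeMillis() has only millisecond granularity, so two updates landing in the same millisecond would receive identical timestamp "versions". A plain-Java sketch of the granularity (no Hibernate involved):

```java
public class TimestampGranularityDemo {
    public static void main(String[] args) {
        // Count how many consecutive calls return the same millisecond value:
        long first = System.currentTimeMillis();
        int sameMillis = 0;
        while (System.currentTimeMillis() == first) {
            sameMillis++;
        }
        // Typically many calls fit into one millisecond, so two concurrent
        // updates can easily be stamped with the same "version".
        System.out.println("calls within one millisecond: " + sameMillis);
    }
}
```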


 Post subject:
PostPosted: Mon Jul 09, 2007 6:27 am 
Newbie

Joined: Fri Jul 06, 2007 3:03 am
Posts: 6
fuzebest wrote:
Eugene, could you please increase the db column size (to hold the MAX_VALUE of your version field type) and run your test again?

Besides, if you have SO many updates, a timestamp version may break version incrementing under concurrent updates.


Of course, increasing the column size can solve the problem, but there is still a chance (I know it is very small) of the exception being thrown. Using a timestamp avoids this exception.

For concurrent access, I also have an AOP interceptor to monitor it. When concurrent updates hit the same record, a special exception is thrown and the AOP interceptor handles it. Using a timestamp minimizes the chance of this special exception being thrown, because the timestamp (down to millisecond level) is used for the version comparison instead of an integer.


 Post subject:
PostPosted: Mon Jul 09, 2007 8:24 am 
Regular

Joined: Fri Dec 17, 2004 10:38 am
Posts: 54
Actually, I think integer versioning is preferable, since it eliminates all Java and db timestamp precision issues.

As for the max integer version, the problem can be either an incorrect Hibernate dialect column mapping or a jdbc driver bug where the driver cannot handle large (but still valid) field values.
Do you generate the db schema using the Hibernate toolset?

I tried to reproduce your bug with an in-memory H2 db, version type short, iterating from 0 to Integer.MAX_VALUE...
No error... after 5 minutes I ran out of patience and aborted the test...

Update:
I can confirm H2 correctly handles overflowed version values.


 Post subject:
PostPosted: Mon Jul 09, 2007 6:03 pm 
Newbie

Joined: Fri Jul 06, 2007 3:03 am
Posts: 6
fuzebest wrote:
Actually, I think integer versioning is preferable, since it eliminates all Java and db timestamp precision issues.

As for the max integer version, the problem can be either an incorrect Hibernate dialect column mapping or a jdbc driver bug where the driver cannot handle large (but still valid) field values.
Do you generate the db schema using the Hibernate toolset?

I tried to reproduce your bug with an in-memory H2 db, version type short, iterating from 0 to Integer.MAX_VALUE...
No error... after 5 minutes I ran out of patience and aborted the test...

Update:
I can confirm H2 correctly handles overflowed version values.


So what is the behaviour when the version no. is incremented to (Integer.MAX_VALUE + 1)? Of course you ran out of patience, because you would need to perform Integer.MAX_VALUE updates on one record in your test. Why not set a smaller column size for testing? E.g. set the column to an integer with size 2 and perform 100 updates on one record.

So do you think it is only a jdbc driver bug, or a problem in the settings? I don't mean that the jdbc driver cannot handle a large value. I am asking what the behaviour is when the version no. is incremented beyond what the column size can hold.


 Post subject:
PostPosted: Tue Jul 10, 2007 4:08 pm 
Regular

Joined: Fri Dec 17, 2004 10:38 am
Posts: 54
OK, here is my test case:

Code:
public class VerType {
   private long id;
   private int value;
   // version is short!
   private short version;
}


and mapping
Code:
   <class name="VerType" table="VER_TYPES">
      <id name="id" column="ID" type="long" unsaved-value="0">
         <generator class="native" />
      </id>
      <version name="version" type="short" />

      <property name="value" type="integer" />
   </class>


I loop, incrementing the field 'value' from 0 to Integer.MAX_VALUE.
I use
Code:
<property name="hibernate.hbm2ddl.auto">update</property>
so Hibernate creates the table for me.
The test shows that after Short.MAX_VALUE the version field becomes negative (overflowed), the H2 jdbc driver handles this value correctly, and the version starts incrementing again toward Short.MAX_VALUE, and so on.
I see 2 possible scenarios in your test:
1) you generated the db schema yourself and set the column size too small to hold the MAX_VALUE of your version type - just increase the column size
2) the db schema was generated by the Hibernate toolset - either your db dialect creates a column too small to hold large version values (a dialect class fix is required), or the jdbc driver is unable to handle large values (a driver bug).


Quote:
I am asking what the behaviour is when the version no. is incremented beyond what the column size can hold.

You get an SQLException wrapped in a Hibernate unchecked exception, that's for sure.
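
The wrap-around observed with H2 can be reproduced in plain Java, independent of any database (a minimal sketch):

```java
public class VersionWrapDemo {
    public static void main(String[] args) {
        short version = Short.MAX_VALUE - 1;
        for (int i = 0; i < 3; i++) {
            // the same arithmetic a short-typed version property goes through
            version = (short) (version + 1);
            System.out.println(version); // prints 32767, then -32768, then -32767
        }
    }
}
```

Whether that negative value then causes an error depends only on whether the column was created wide enough to store it.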


© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.