Hi,
In one of our applications we are trying to insert more than 2 million records into the database. Because of the complexity of the logic involved in constructing those objects, we have to build them through Hibernate and our own API.
Given the volume of inserts, we obviously have to process them in batches to avoid running out of memory, and we are using JTA for transactions.
Until now I was under the impression that this kind of application would perform better if we reduced the number of commits to the database by increasing the batch size, keeping memory constraints in mind of course. So I started with a batch size of 1000 and tried different values to find an optimal number. The performance kept improving as I lowered the batch size, and it was much faster with a batch size of 10.
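For reference, here is roughly what our loop looks like (a minimal sketch: MyRecord and buildRecord() are stand-ins for our real entity and construction logic, and I'm showing a plain Hibernate Transaction instead of our actual JTA setup for brevity):

    import org.hibernate.Session;
    import org.hibernate.SessionFactory;
    import org.hibernate.Transaction;

    void batchInsert(SessionFactory sessionFactory, int totalRecords, int batchSize) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            for (int i = 0; i < totalRecords; i++) {
                MyRecord record = buildRecord(i); // hypothetical: our complex object construction
                session.save(record);
                if (i > 0 && i % batchSize == 0) {
                    session.flush();  // push the queued inserts to the database
                    session.clear();  // detach persisted objects so the session cache stays bounded
                    tx.commit();      // commit this batch
                    tx = session.beginTransaction();
                }
            }
            tx.commit();              // commit the final partial batch
        } finally {
            session.close();
        }
    }

The batchSize parameter is the number I've been varying (1000 down to 10).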
Does anyone have an idea why this happens? Performance should be better if we reduce the number of calls to the database, but instead it is the other way around.
Thanks.