Hello,
I have to fix an application that uses a single transaction to update several thousand database records in a batch-like job. The updates are simple: only one timestamp is written per record. JPA is used. Unfortunately, the entities are detached, modified, and then merged back (odd architecture, but nothing I can do about that now).

With several thousand records the application works sufficiently fast. With more than 10,000, however, the transaction never completes. The job has to complete in an all-or-nothing way, hence the single transaction. We have profiled the application and cannot see any resource bottleneck, neither in the application server nor in the database.

Are there any possibilities to influence this behavior via configuration or simple modifications of the code? What would be the best way to achieve atomicity in large batch-like jobs?
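To make the pattern concrete, here is a small simulation of the loop described above. It uses a stand-in stub instead of a real `EntityManager` (all names and the chunk size are hypothetical, not from our actual code), and it shows why the persistence context grows with the record count in the single-merge-per-record approach, and how a periodic flush/clear inside the same transaction would keep it bounded:

```java
import java.util.Objects;

// Stand-in for the JPA EntityManager: it only tracks how many entities the
// persistence context holds. This is a sketch, not real JPA code.
class FakeEntityManager {
    int contextSize = 0;     // entities currently managed by the context
    int maxContextSize = 0;  // peak size observed during the job

    void merge(Object entity) {          // merge() pulls a detached entity back in
        Objects.requireNonNull(entity);
        contextSize++;
        maxContextSize = Math.max(maxContextSize, contextSize);
    }
    void flush() { /* pending UPDATEs would be pushed to the database here */ }
    void clear() { contextSize = 0; }    // detach everything; context shrinks
}

public class BatchSketch {
    static final int CHUNK = 1000;       // flush/clear interval (tuning assumption)

    // Runs the batch for n records and returns the peak context size.
    static int run(FakeEntityManager em, int n, boolean chunked) {
        for (int i = 0; i < n; i++) {
            em.merge(new Object());      // detached entity, timestamp already set
            if (chunked && (i + 1) % CHUNK == 0) {
                em.flush();              // still inside the one transaction,
                em.clear();              // so all-or-nothing semantics are kept
            }
        }
        em.flush();                      // final flush before commit
        return em.maxContextSize;
    }

    public static void main(String[] args) {
        // Current behavior: the context grows linearly with the record count.
        System.out.println(run(new FakeEntityManager(), 10_000, false)); // 10000
        // With periodic flush/clear the context never exceeds CHUNK.
        System.out.println(run(new FakeEntityManager(), 10_000, true));  // 1000
    }
}
```

The simulation only models bookkeeping, of course; in the real application each managed entity additionally costs dirty-checking work at flush time, which is where we suspect the super-linear slowdown comes from.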
Many thanks for any input