Hibernate version:
3.2.x
I tried to improve the performance of my application, so I set the global batch fetch size. Unfortunately this only works fine in some cases; in others the application crashes with a ConcurrentModificationException. Because it is not easy to strip our code and mapping files down to a simple test case, I will try to explain what happens in general:
When I try to load my (problematic) entity, Hibernate looks up the ids of unloaded 1-n associations via the BatchFetchQueue.getEntityBatch() method. That method iterates over the entries of the batchLoadableEntityKeys map, and this iteration throws the ConcurrentModificationException. But how can this happen?
The map is not modified inside the method, at least not at first glance. After setting some breakpoints I recognized the problem:
Inside the iteration the method calls persister.getIdentifierType().isEqual(...) to compare two identifiers. If one or both of the identifiers are not loaded yet (as a result of lazy loading), Hibernate fires a load event. Under some circumstances (which I want to discuss later), Hibernate re-enters the same(!) BatchFetchQueue object and method. The ids found there get loaded and are removed from batchLoadableEntityKeys. When persister.getIdentifierType().isEqual(...) returns, the iterator over the batchLoadableEntityKeys map has been invalidated and throws the exception on the next call to hasNext() or next().
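To illustrate the failure mode outside of Hibernate: Java's fail-fast iterators throw a ConcurrentModificationException whenever the underlying map is structurally modified during iteration by anything other than the iterator itself. The sketch below is not Hibernate code; it only simulates the situation described above, where a comparison callback (standing in for persister.getIdentifierType().isEqual(...)) re-entrantly removes an entry from the very map being iterated:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Minimal sketch (NOT Hibernate code) of the failure described above:
// a callback invoked mid-iteration removes an entry from the same map,
// which invalidates the fail-fast iterator.
public class ReentrantRemovalDemo {
    static Map<Integer, String> batchLoadableEntityKeys = new HashMap<>();

    // Stands in for persister.getIdentifierType().isEqual(...) firing a
    // lazy load that re-enters the queue and removes a loaded key.
    static boolean isEqualWithSideEffect(Integer id) {
        batchLoadableEntityKeys.remove(2); // re-entrant structural modification
        return id.equals(1);
    }

    // Returns true if the iteration blows up with a CME.
    static boolean demonstrate() {
        batchLoadableEntityKeys.clear();
        batchLoadableEntityKeys.put(1, "a");
        batchLoadableEntityKeys.put(2, "b");
        batchLoadableEntityKeys.put(3, "c");
        try {
            for (Iterator<Integer> it = batchLoadableEntityKeys.keySet().iterator(); it.hasNext(); ) {
                isEqualWithSideEffect(it.next());
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // thrown by the iterator's next() after the removal
        }
    }

    public static void main(String[] args) {
        System.out.println("CME thrown: " + demonstrate());
    }
}
```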
I am not sure why this happens, because our model is quite complex and I cannot produce a clear test case, but I think it has something to do with cyclic relationships between entities.
I also fixed the problem by copying the batchLoadableEntityKeys collection before obtaining the iterator, and it works fine.
Nevertheless I would like to know whether I am doing something wrong or whether this is really a Hibernate problem?! Gavin, it's your turn...