I have been trying to get bulk inserts working, but continually come across an OutOfMemoryError after around 200,000 inserts. I'm flushing and clearing every 100 inserts, but this doesn't help.
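For context, the flush/clear cadence I'm using looks like the usual batch-insert pattern. Below is a minimal, runnable sketch of that cadence; FakeSession is a hypothetical stand-in for the Hibernate Session (not the real API), just to show the loop shape and that clear() releases the first-level cache every 100 entities:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-in for org.hibernate.Session, used only to
// illustrate the flush-every-N / clear-every-N loop shape.
class FakeSession {
    final List<Object> firstLevelCache = new ArrayList<>();
    int flushes = 0;

    void save(Object entity) { firstLevelCache.add(entity); } // entity becomes managed
    void flush() { flushes++; }                               // would push pending inserts to JDBC
    void clear() { firstLevelCache.clear(); }                 // detaches entities, freeing heap
}

public class BatchInsertSketch {
    public static void main(String[] args) {
        FakeSession session = new FakeSession();
        int batchSize = 100;
        for (int i = 1; i <= 1000; i++) {
            session.save(new Object());
            if (i % batchSize == 0) {
                session.flush();  // write the batch
                session.clear();  // drop references so they can be GC'd
            }
        }
        System.out.println("flushes=" + session.flushes
                + " cached=" + session.firstLevelCache.size());
    }
}
```

With a real Session the same loop bounds the first-level cache at 100 entities, which is why the driver-side result sets described below are surprising: the session-side memory is already being released.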
After examining a heap dump, it looks like the problem is that the MySQL JDBC driver is holding onto far too many result sets: roughly one per insert. The culprit appears to be that the PreparedStatement keeps these open (List<ResultSet> com.mysql.jdbc.StatementImpl.openResults) until the PreparedStatement is closed, or getMoreResults(CLOSE_ALL_RESULTS) is called.
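For reference, the cleanup call in question is standard JDBC: Statement.getMoreResults(CLOSE_ALL_RESULTS) tells the driver to close every ResultSet it is still holding on that statement. A minimal sketch of what a per-batch cleanup might look like; how the statement is obtained from the connection is left out, and releaseOpenResults is just a hypothetical helper name:

```java
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

public class ResultCleanup {
    // Hypothetical helper: drops every ResultSet the driver is still
    // holding on this statement. Something like this would need to run
    // after each batch (or as part of flush) to keep the driver's
    // openResults list from growing.
    static void releaseOpenResults(PreparedStatement ps) throws SQLException {
        ps.getMoreResults(Statement.CLOSE_ALL_RESULTS);
    }
}
```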
So, why is this in a Hibernate forum? Well, shouldn't the flush process also close all PreparedStatements? I understand there's a performance hit here... but it's the only way to ensure we're not holding onto resources between flushes - which is one of the major reasons to call flush() in the first place. Personally, I'd rather take a performance hit than risk taking the VM down.
Any other ideas?
Corey