Hi,
I'm nearly finished converting our application from Compass to Hibernate Search, but we're now hitting a nasty issue while testing against production data. Initial indexing runs for some amount of time, usually between 10 minutes and an hour and a half, and then the initial indexing threads all die suddenly with Connection Reset errors. I'm currently experimenting with different database drivers, connection pools, and instances of SQL Server, but I thought I would post here too in case anyone else has experienced this. We did not have this issue with Compass, but Compass only ran one thread per type, whereas Hibernate Search's MassIndexer is much more intensive.
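For reference, we kick off the MassIndexer roughly like this (the thread and batch-size numbers here are from memory, not necessarily our exact values):

```java
import org.hibernate.CacheMode;
import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

// Sketch of our indexer invocation; `session` is an open Hibernate Session.
FullTextSession fullTextSession = Search.getFullTextSession(session);
fullTextSession.createIndexer()
    .batchSizeToLoadObjects(25)   // entities fetched per query
    .threadsToLoadObjects(4)      // these become the "collectionsloader" threads
    .cacheMode(CacheMode.IGNORE)  // don't pollute the second-level cache
    .startAndWait();
```

Lowering the thread counts doesn't seem to prevent the resets, just delay them.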
The thread deaths look like this (these are just a few of them):
DBComms.transmit exception:[java.net.SocketException: Connection reset] context:[(98) [Thread[Hibernate Search: collectionsloader-3,5,main], IO:e4a85, Dbc:2eb01]]
15:00:02 WARN o.h.u.JDBCExceptionReporter:100 | SQL Error: 0, SQLState: 08S01
15:00:02 ERROR o.h.u.JDBCExceptionReporter:101 | DBComms.transmit exception:[java.net.SocketException: Connection reset] context:[(100) [Thread[Hibernate Search: collectionsloader-2,5,main], IO:4de63, Dbc:fc9ee]]
15:00:02 WARN o.h.u.JDBCExceptionReporter:100 | SQL Error: 0, SQLState: 08S01
15:00:02 ERROR o.h.u.JDBCExceptionReporter:101 | DBComms.transmit exception:[java.net.SocketException: Connection reset] context:[(97) [Thread[Hibernate Search: collectionsloader-2,5,main], IO:d03ef, Dbc:d9825]]
They are all labeled "Hibernate Search: collectionsloader". We are using SQL Server 2008 R2 with the JSQLConnect JDBC driver and the c3p0 connection pool. A fully built Lucene index for this instance typically takes up ~5.5GB.
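In case it's relevant, our c3p0 settings are close to these (hibernate.properties; values approximate, and the idle test period is something I'm experimenting with in case idle connections are being silently dropped between batches):

```properties
# c3p0 settings as mapped through Hibernate (example values, not a known fix)
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=300
hibernate.c3p0.idle_test_period=60
hibernate.c3p0.max_statements=0
```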
I tried turning on trace logging for all of Hibernate Search, but nothing seemed unusual around these thread deaths. I could post the log if it would be of interest. From what I can tell, it does not consistently die on the same objects.
If anyone has any thoughts or suggestions, I would really appreciate it.
Thanks, Ben