I'm investigating an OutOfMemoryError in our application, and Hibernate Search seems to be at the center of the issue. The JVM was kind enough to dump an hprof file when it ran out of memory, and digging around in it has revealed that we have 152 CacheableMultiReader instances whose normsCache field points to a map containing 2-4 byte arrays between 1MB and 4MB in size. The net result is close to 1GB of memory used.
Under normal conditions, however, there is nowhere near this level of memory use.
I've told Hibernate Search to use a hibernate.search.worker.buffer_queue.max of 50, and hibernate.search.worker.thread_pool.size is 1. The webserver uses a maximum of 90 DB connections at once, and the JVM dump shows only 30 instances of Thread.
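For reference, here is roughly how those two properties are set (a minimal sketch using Hibernate's programmatic Configuration API; our real setup supplies the equivalent entries through the normal Hibernate configuration):

Code:
import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration();
// Cap the queue of buffered index work at 50 entries
cfg.setProperty("hibernate.search.worker.buffer_queue.max", "50");
// Use a single thread for processing index updates
cfg.setProperty("hibernate.search.worker.thread_pool.size", "1");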
I haven't been able to reproduce this in a laboratory environment, so I can't determine exactly what is causing this pile-up of readers and their associated large arrays. One possibility is that a long-running operation is holding a lock on the search index while the other operations wait for it to complete, and the system runs out of memory before they get the chance.
Any idea why so many of these readers could be left around? Based on the way things are configured, I wouldn't expect so many readers to be open with so few threads.
Also, any thoughts on how to eliminate these normsCache arrays altogether? I thought putting NO_NORMS on all my fields might do the trick, but I noticed that "_hibernate_class" still creates a norms array when entities are deleted, and I don't see any way to disable norms for that field - the field setup looks to be hard-coded for both the class field and the DocumentId.
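For illustration, this is the kind of mapping I mean (a minimal sketch against the Hibernate Search 3.x annotation API; the Book entity and its fields are placeholder names, not my real model):

Code:
import javax.persistence.Entity;
import javax.persistence.Id;
import org.hibernate.search.annotations.DocumentId;
import org.hibernate.search.annotations.Field;
import org.hibernate.search.annotations.Index;
import org.hibernate.search.annotations.Indexed;

@Entity
@Indexed
public class Book {

    @Id
    @DocumentId
    private Long id;

    // NO_NORMS indexes the value without tokenizing it and omits the norms,
    // so Lucene has no norms byte array to cache for this field.
    @Field(index = Index.NO_NORMS)
    private String title;
}

Even with every mapped field declared like this, the internal class field still ends up with norms, which is exactly the part I can't switch off.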