
All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 9 posts ] 
 Post subject: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 4:16 am 
Beginner

Joined: Mon Oct 27, 2008 6:26 am
Posts: 36
Greetings,

We have recently been facing a memory leak issue on one of our apps.

Development environment: Lucene 2.4.0, Hibernate Search 3.2.0, Hibernate 3.5.0, Spring 2.5 and Ehcache 1.4.1

The problem is that memory in the old gen gradually goes up over time. Eventually the JVM runs out of memory, as the JVM stats show the old generation reaching its maximum capacity. As a result, I have to restart the web app to release all memory.

I generated a heap dump from the app and used a memory analyzer to check it. I see this:

123,726 instances of "org.apache.lucene.index.TermInfosReader$ThreadResources", loaded by "org.apache.catalina.loader.WebappClassLoader @ 0x7f5d71ffe3c8" occupy 3,139,449,272 (79.54%) bytes. These instances are referenced from one instance of "java.util.concurrent.ConcurrentHashMap$Segment[]", loaded by "<system class loader>"

Keywords
java.util.concurrent.ConcurrentHashMap$Segment[]
org.apache.lucene.index.TermInfosReader$ThreadResources
org.apache.catalina.loader.WebappClassLoader @ 0x7f5d71ffe3c8


Code:
Biggest Top-Level Dominator Packages:
org.apache.catalina.loader.WebappClassLoader @ 0x7f5d71ffe3c8   90%

                                      Retained Heap    Retained Heap %   # Top Dominators
org.apache.lucene.index (package)     3,337,164,168    84.55%            1,018,753
.TermInfosReader$ThreadResources      3,139,449,272    79.54%            123,727
.ReadOnlySegmentReader                   63,653,808     1.61%            2,976
.SegmentReader                           54,508,184     1.38%            1,270
.Term                                    51,717,232     1.31%            876,567

Total: 4 entries                      3,309,328,496                      1,004,540


Can you give me some advice, please?

thanks


 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 5:22 am 
Hibernate Team

Joined: Thu Apr 05, 2007 5:52 am
Posts: 1689
Location: Sweden
Hi,

have you tried upgrading to Search 3.4.0.Final? We addressed several memory issues over the last releases. Of course there is no guarantee that this will help; there is too little information to give you better advice atm.
How much memory do you have? How much do you allocate to the VM? Which operating system? How do you use Search? Either way, I think a good first step is a Search upgrade.
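If the upgrade goes through Maven, it would typically just be a version bump (coordinates assumed from the usual Hibernate Search artifact; adjust to your build):

```xml
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-search</artifactId>
    <version>3.4.0.Final</version>
</dependency>
```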

--Hardy


 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 5:52 am 
Beginner

Joined: Mon Oct 27, 2008 6:26 am
Posts: 36
Here is the printout of jmap -histo:

-bash-3.2$ /usr/java/jdk1.6.0_18/bin/jmap -histo:live 22587 | grep apache.lucene.index | head -20 | sort -k2 -nr

4: 6140314 245612560 org.apache.lucene.index.TermInfo
7: 2833248 90663936 org.apache.lucene.index.Term
18: 365805 14632200 org.apache.lucene.index.FieldInfo
12: 362766 23217024 org.apache.lucene.index.SegmentReader$Norm
21: 234342 13123152 org.apache.lucene.index.TermBuffer
23: 122647 11774112 org.apache.lucene.index.CompoundFileReader$CSIndexInput
32: 78114 8748768 org.apache.lucene.index.SegmentTermEnum
58: 71071 2274272 org.apache.lucene.index.TermInfosReader$ThreadResources
39: 63491 6095136 org.apache.lucene.index.SegmentInfo
78: 38224 1223168 org.apache.lucene.index.CompoundFileReader$FileEntry
66: 31488 1511424 org.apache.lucene.index.FreqProxTermsWriter$PostingList
105: 7215 577200 org.apache.lucene.index.FieldsReader
97: 6828 710112 org.apache.lucene.index.TermInfosReader
29: 6643 11229160 [Lorg.apache.lucene.index.TermInfo;
28: 6643 11229160 [Lorg.apache.lucene.index.Term;
70: 6498 1386280 [Lorg.apache.lucene.index.RawPostingList;
90: 6078 875232 org.apache.lucene.index.TermsHashPerField
79: 4936 1184640 org.apache.lucene.index.ReadOnlySegmentReader
86: 4170 1000800 org.apache.lucene.index.SegmentReader
109: 2338 522200 [Lorg.apache.lucene.index.SegmentReader;


We assigned 12G of memory to the JVM and allocated 8G to the old gen. The old gen can fill up within days. The operating system is CentOS.

I will try to upgrade the lib and see what happens.

Thanks, Hardy


 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 6:09 am 
Beginner

Joined: Mon Oct 27, 2008 6:26 am
Posts: 36
Hardy, I just noticed that one of our methods does not close its IndexReader, which might be the leak contributor:

...
FullTextSession fullTextSession = Search.getFullTextSession( sessionFactory.getCurrentSession() );
SearchFactory searchFactory = fullTextSession.getSearchFactory();
DirectoryProvider provider = searchFactory.getDirectoryProviders( Book.class )[0];
ReaderProvider readerProvider = searchFactory.getReaderProvider();
IndexReader reader = readerProvider.openReader( provider );
IndexSearcher indexSearcher = new IndexSearcher( reader );
try {
    Hits hits = indexSearcher.search( query );
    count = hits.length();
} catch ( Exception ex ) {
    ex.printStackTrace();
} finally {
    // empty -- the reader is never returned to the ReaderProvider
}
return count;

I wonder: besides closing the IndexReader via readerProvider.closeReader(reader), do I need to close the IndexSearcher as well? And the FullTextSession?
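For reference, here is a sketch of what the corrected resource handling might look like in that method (same variables as in the snippet above; the exact calls are my reading of the Search 3.2 ReaderProvider API):

```java
IndexReader reader = readerProvider.openReader( provider );
IndexSearcher indexSearcher = new IndexSearcher( reader );
try {
    Hits hits = indexSearcher.search( query );
    count = hits.length();
} finally {
    // returning the reader to the provider releases the per-thread
    // TermInfosReader$ThreadResources instances seen in the heap dump
    readerProvider.closeReader( reader );
}
```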

thanks


 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 6:20 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Hi,
yes, it's mandatory to always close the IndexReader, and the Session (or FullTextSession).
An IndexSearcher usually needs to be closed so that it closes the embedded IndexReader, but if you opened it by passing in a reader, it suffices to close the reader itself.

Do you think this text was not clear enough? Should I add a red warning?
http://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#d0e6354

Also, using the IndexReader directly is not recommended for exactly this reason: people tend to forget how to properly manage these resources. Are you sure you need these advanced APIs? What is the reason for you to use the "advanced" API instead of the easy one? I'm just being curious, to see if we can make it easier.

_________________
Sanne
http://in.relation.to/


 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 6:42 am 
Beginner

Joined: Mon Oct 27, 2008 6:26 am
Posts: 36
Thank you, Sanne

In some of our methods we retrieve values from Lucene fields directly, and we also want to make sure we query only the index. So we use the "advanced" API to operate on the Lucene index.

So in my previous example, it is enough to close the IndexReader without closing the searcher, right?

readerProvider.closeReader(reader);

We are using Spring transaction management. Will the FullTextSession be closed when the Hibernate Session is closed, or do I need to close the FullTextSession myself?

Ian




 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 6:51 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
In some of our methods we retrieve values from Lucene fields directly, and we also want to make sure we query only the index. So we use the "advanced" API to operate on the Lucene index.

You should use projections then, it's much simpler: http://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#projections
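A minimal sketch of a projection query, assuming a Book entity whose projected fields are stored in the index (Store.YES) and an already-built luceneQuery:

```java
FullTextSession fts = Search.getFullTextSession( session );
FullTextQuery ftQuery = fts.createFullTextQuery( luceneQuery, Book.class );
// project the entity id plus a stored field; reads only the index,
// no IndexReader lifecycle to manage yourself
ftQuery.setProjection( FullTextQuery.ID, "title" );
List<?> rows = ftQuery.list();   // each element is an Object[] of [id, title]
```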

_________________
Sanne
http://in.relation.to/


 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 7:24 am 
Beginner

Joined: Mon Oct 27, 2008 6:26 am
Posts: 36
Cool,

thanks, Hardy and Sanne


 Post subject: Re: what could be the reason for memory leak issue we have?
PostPosted: Wed Jul 20, 2011 8:21 am 
Hibernate Team

Joined: Thu Apr 05, 2007 5:52 am
Posts: 1689
Location: Sweden
No worries. Seems you already narrowed the problem down :-)


© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.