
All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 9 posts ] 
Author Message
 Post subject: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Tue Jul 28, 2009 2:48 pm 
Regular

Joined: Fri Oct 05, 2007 3:22 am
Posts: 69
Hi,

We just upgraded to Hibernate Search 3.1.1 with a master/slave configuration and there are many .nfs files being produced in the slave index location.

2009-07-28 11:29:59,582 ERROR [org.hibernate.search.util.FileHelper] Could not delete /location-to-slave-index/2/.nfs000000000054401c00000175

The refresh period is set to one minute and the index is only roughly 80MB at this time.

We were on 3.0.1 previously and, although there were issues, we didn't encounter this one.


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Tue Aug 04, 2009 8:03 pm 
Regular

Joined: Fri Oct 05, 2007 3:22 am
Posts: 69
Has anyone else encountered this?
Is anyone using the clustered configuration?


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Wed Aug 05, 2009 11:04 am 
Hibernate Team

Joined: Thu Apr 05, 2007 5:52 am
Posts: 1689
Location: Sweden
Hi,

is this a one-off migration problem, or a recurring problem since the upgrade?
I guess you are using NFS to get access to the shared directories. Is there anything in the NFS logs?
Do you have a stack trace?

--Hardy


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Mon Aug 10, 2009 5:10 pm 
Regular

Joined: Fri Oct 05, 2007 3:22 am
Posts: 69
hardy.ferentschik wrote:
Hi,

is this a one-off migration problem, or a recurring problem since the upgrade?
I guess you are using NFS to get access to the shared directories. Is there anything in the NFS logs?
Do you have a stack trace?

--Hardy


No stack, just the error.
Yes, NFS is where the shared directory is located.
Looking back through some older logs, this had been present before, but it appears to be more frequent now. By the looks of it, if it fails to remove these directories, new data will not become visible to the slaves.

Thanks


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Mon Aug 10, 2009 6:51 pm 
Regular

Joined: Fri Oct 05, 2007 3:22 am
Posts: 69
So this is fairly serious.

For these .nfs files to appear, you basically have a scenario where a process or user has deleted a file that is still open by another process or user. So I'm suspecting there is a missing synchronization between the master and slave nodes during the copy process.
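(For reference, this is the NFS client's "silly rename" behaviour: POSIX allows unlinking a file that another handle still has open. On a local filesystem the unlink succeeds immediately and the open handle keeps reading from the unlinked inode; an NFS client emulates the same semantics by renaming the file to a hidden .nfsXXXX name until the last handle is closed. A minimal sketch of the local-filesystem semantics, using a throwaway temp file:)

```java
import java.io.InputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteWhileOpen {
    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("lucene-seg", ".bin");
        Files.write(p, "segment data".getBytes());
        try (InputStream in = Files.newInputStream(p)) {
            // On a local POSIX filesystem the unlink succeeds immediately,
            // while the already-open stream keeps reading the unlinked inode.
            // On NFS, this same delete is what produces a .nfsXXXX file.
            Files.delete(p);
            System.out.println(new String(in.readAllBytes()));
            System.out.println(Files.exists(p));
        }
    }
}
```

If a slave node still holds an IndexReader open on a segment file while the copy job deletes it, the NFS client has no choice but to leave a .nfs placeholder behind.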


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Wed Aug 12, 2009 9:11 pm 
Regular

Joined: Fri Oct 05, 2007 3:22 am
Posts: 69
So I guess nobody else has experienced this?


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Thu Aug 13, 2009 12:05 pm 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Sorry for the delay.

I've noticed that Lucene 2.4 introduced some problems with IndexReader.reopen(), which is used by the default ReaderProvider in Hibernate Search since 3.1.
The problems are supposed to be fixed in Lucene 2.4.1 (make sure you use that one), but NFS has always added some trouble.
If you're already using Lucene 2.4.1, or if switching to it doesn't help, try
Code:
hibernate.search.reader.strategy=not-shared

That's not the most efficient setting, but it avoids the use of IndexReader.reopen(). Please get back to us with some feedback about this.

BTW, do you really need to use NFS? It's discouraged for Lucene indexes for many reasons: it might work, but AFAIK nobody is testing that.
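(For reference, a minimal sketch of how that workaround might sit in a slave node's configuration, assuming the FSSlaveDirectoryProvider setup from the documentation; the paths and refresh value are placeholders:)

```properties
# Hypothetical slave-node settings; paths are placeholders.
hibernate.search.default.directory_provider = org.hibernate.search.store.FSSlaveDirectoryProvider
hibernate.search.default.sourceBase = /mnt/nfs/shared-index   # NFS-shared copy written by the master
hibernate.search.default.indexBase  = /var/lucene/local-index # local working copy read by this node
hibernate.search.default.refresh    = 60                      # seconds between copy passes
# Suggested workaround: avoid IndexReader.reopen()
hibernate.search.reader.strategy = not-shared
```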

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Thu Aug 13, 2009 4:59 pm 
Regular

Joined: Fri Oct 05, 2007 3:22 am
Posts: 69
s.grinovero wrote:
Sorry for the delay.

I've noticed that Lucene 2.4 introduced some problems with IndexReader.reopen(), which is used by the default ReaderProvider in Hibernate Search since 3.1.
The problems are supposed to be fixed in Lucene 2.4.1 (make sure you use that one), but NFS has always added some trouble.
If you're already using Lucene 2.4.1, or if switching to it doesn't help, try
Code:
hibernate.search.reader.strategy=not-shared

That's not the most efficient setting, but it avoids the use of IndexReader.reopen(). Please get back to us with some feedback about this.

BTW, do you really need to use NFS? It's discouraged for Lucene indexes for many reasons: it might work, but AFAIK nobody is testing that.


For the shared location, yes, I need to use NFS, but the actual writes occur on local disk. I'm using the master/slave approach as recommended.

I'll try your suggestion and get back with the results. Thanks for the information.


 Post subject: Re: Getting a lot of .nfs*** files with 3.1.1
PostPosted: Wed Aug 19, 2009 3:58 pm 
Regular

Joined: Fri Oct 05, 2007 3:22 am
Posts: 69
s.grinovero wrote:
Sorry for the delay.

I've noticed that Lucene 2.4 introduced some problems with IndexReader.reopen(), which is used by the default ReaderProvider in Hibernate Search since 3.1.
The problems are supposed to be fixed in Lucene 2.4.1 (make sure you use that one), but NFS has always added some trouble.
If you're already using Lucene 2.4.1, or if switching to it doesn't help, try
Code:
hibernate.search.reader.strategy=not-shared

That's not the most efficient setting, but it avoids the use of IndexReader.reopen(). Please get back to us with some feedback about this.

BTW, do you really need to use NFS? It's discouraged for Lucene indexes for many reasons: it might work, but AFAIK nobody is testing that.


I have been able to get back to looking at this, and we are using Lucene 2.4.1. I have tried applying the reader strategy configuration you mention here, but I am still seeing the problem.

The setup I'm using consists of two slaves and one master, using the slave and master directory providers and configuration as promoted in the book.
The index location of both the slaves and the master is local disk, while the master pushes index updates to a shared location, which is on NFS.
The main difference, I guess, is that instead of using the JMS queue for handling incremental updates, there is a periodic job that scans for changes and uses the manual indexing APIs. This is done by the master process.

Since I'm doing manual indexing on the master and not using JMS, I have set the
Code:
<property name="hibernate.search.indexing_strategy" value="manual"/>

property. This appears acceptable based on what I have read.

One more thing: if any updates are detected during the periodic DB scan, I purge those specific entities before indexing them.

Since the manual APIs are safe to use, I assume that is fine as well.
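(The purge-then-index step described above can be sketched with Hibernate Search 3.x's manual indexing API; MyEntity and the surrounding job wiring are hypothetical:)

```java
import org.hibernate.Session;
import org.hibernate.Transaction;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

// Hypothetical periodic re-index step on the master node: purge the stale
// Lucene document for the changed entity, then index its current state.
public class PeriodicIndexer {
    public void reindex(Session session, MyEntity changed) {
        FullTextSession fts = Search.getFullTextSession(session);
        Transaction tx = fts.beginTransaction();
        fts.purge(MyEntity.class, changed.getId()); // drop the old document
        fts.index(changed);                          // write the fresh one
        tx.commit();                                 // changes hit the index on commit
    }
}
```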





© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.