 Post subject: HS : Master and Slave without JMS
PostPosted: Fri Dec 18, 2009 6:07 am 
Regular

Joined: Thu Oct 08, 2009 10:34 am
Posts: 55
Hi guys,
I have a working prototype at the moment: I am using the master and slave directory providers, but without the JMS setup. A dedicated server app runs a job every x minutes to check whether any records need to be updated/reindexed.

This server app is configured with the master directory, and my question is this. Since Lucene has no in-place update, updating means deleting the old documents and then adding the new ones. My concern is that while this job runs, the copy of the index to the shared directory could happen after the deletes but before the adds, delivering an index that is effectively corrupt, i.e. missing records. What I need is a write lock on the master copy that blocks the copy while the whole delete/add is in progress. How do I get this? Is it enough to start a transaction for the whole delete/add process? (Currently I have two transactions, one for the delete and one for the add.)

How did you solve this for the JMS setup? I can't find it in the source tree.

Thanks guys,
LL


 Post subject: Re: HS : Master and Slave without JMS
PostPosted: Sat Dec 19, 2009 4:49 am 
Pro

Joined: Wed Oct 03, 2007 2:31 pm
Posts: 205
Why don't you just use the JMS configuration that comes out of the box with Hibernate Search? It's tried and tested.
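If you do go that route, the worker backend only needs a handful of properties. A rough sketch in Java form, assuming the Hibernate Search 3.x property names (the JNDI names below are placeholders for whatever your application server exposes):

Code:
// Hedged sketch: JMS worker backend settings for the out-of-the-box master/slave setup.
// The JNDI names are placeholders; adjust them to your application server.
java.util.Properties props = new java.util.Properties();
props.setProperty( "hibernate.search.worker.backend", "jms" );
props.setProperty( "hibernate.search.worker.jms.connection_factory", "java:/ConnectionFactory" );
props.setProperty( "hibernate.search.worker.jms.queue", "queue/hibernatesearch" );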


 Post subject: Re: HS : Master and Slave without JMS
PostPosted: Mon Dec 21, 2009 8:10 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
How did you solve this for the JMS setup?

In the JMS setup the index is always consistent, so there's no need to worry about when the copy starts.

In your case the index producer could push the index to a reference directory when it's done, and the clients could pull it from there periodically; the sources might help here, as the master/slave DirectoryProviders implement exactly this. You can of course reuse these DirectoryProviders: there's no need to set up JMS for them to work, just tune the timers so that a switch doesn't happen during a copy.
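For reference, the directory provider side of that is just a few properties. A rough sketch, assuming the Hibernate Search 3.x property names and the filesystem-master/filesystem-slave shortcuts (the paths and the refresh period are placeholders):

Code:
// Hedged sketch: master node configuration; a slave node uses "filesystem-slave"
// with the same sourceBase. Paths and the refresh period are placeholders.
java.util.Properties props = new java.util.Properties();
props.setProperty( "hibernate.search.default.directory_provider", "filesystem-master" );
props.setProperty( "hibernate.search.default.indexBase", "/var/lucene/master" );    // local working copy
props.setProperty( "hibernate.search.default.sourceBase", "/mnt/shared/indexes" );  // shared directory the slaves pull from
props.setProperty( "hibernate.search.default.refresh", "1800" );                    // copy period in seconds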

_________________
Sanne
http://in.relation.to/


 Post subject: Re: HS : Master and Slave without JMS
PostPosted: Mon Dec 21, 2009 9:18 am 
Regular

Joined: Thu Oct 08, 2009 10:34 am
Posts: 55
Hi Sanne,
Quote:
In the JMS setup the index is always consistent, so there's no need to worry about when the copy starts.

Can you expand on how this works? How do you keep it consistent? The way I understand the JMS setup is as follows: you configure one node as the master; this node receives all updates from the other nodes and is also responsible for delivering the master index to the other members periodically. I can see in the FSMasterDirectoryProvider source that a lock is acquired before the master version is copied to the shared directory for the other members to pick up. My question is: do you use the same mechanism for updating? I couldn't find the JMS source code that handles updating the index. I assume that for updates to the master index you acquire a lock to stop the master version from being delivered into the shared directory before the update is complete? Can you point me to this code in the source tree? (Thanks)


Quote:
the sources might help here, as the master/slave DirectoryProviders implement exactly this. You can of course reuse these DirectoryProviders: there's no need to set up JMS for them to work, just tune the timers so that a switch doesn't happen during a copy.

That's exactly what I have: I am reusing the master and slave directory providers you ship, but with my own implementation for updating the master index. I want to make sure the master index doesn't get copied while I'm updating it. So I assume I need to reuse the same locking mechanism used in the master directory implementation, but I just want to confirm this and also see how it was done in the JMS implementation.

Thanks for your time Sanne,
LL


 Post subject: Re: HS : Master and Slave without JMS
PostPosted: Mon Dec 21, 2009 10:53 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
Can you expand on how this works? How do you keep it consistent?

In Lucene, changes to the index are made through an IndexWriter, which acquires a lock on the index. If you open an IndexWriter on the same index it will use the configured LockFactory to obtain a lock; this is different from Hibernate Search's DirectoryProvider lock.
(I wrote about Lucene LockFactories here; you can reconfigure Hibernate Search to use any implementation.)
Basically, the purpose of the configured Lucene lock is to prevent two IndexWriters from applying changes at the same time.

The directory locks used internally by Hibernate Search are independent of the low-level IndexWriter lock: the IndexWriter lock is there to avoid index corruption, while Search uses the faster in-memory directory locks to make sure the index is changed consistently with the add/delete operations, just as the database is. As an update is a delete plus an add, these two operations are wrapped in ownership of the directory lock.
So Search acquires this lock to make sure the index it copies never contains an applied delete while still missing the corresponding add.

But this shouldn't matter in your case, I guess? I assume you disabled real-time indexing, so Search isn't applying updates for you, and there's no need to take this lock as nobody else will try to acquire it.
Have a look at Lucene's SnapshotDeletionPolicy to take consistent snapshots without stopping the IndexWriter tasks.

Actually, you could integrate the use of SnapshotDeletionPolicy into the DirectoryProviders and contribute that if you like, so we can stop using the lock. It would be an interesting improvement.
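For the record, a rough sketch of what using it could look like, assuming the Lucene 2.9-era API (the directory, analyzer and copy step are placeholders):

Code:
// Hedged sketch (Lucene 2.9-era API): take a consistent snapshot while the
// IndexWriter keeps working, copy the snapshotted files, then release it.
SnapshotDeletionPolicy snapshotter =
        new SnapshotDeletionPolicy( new KeepOnlyLastCommitDeletionPolicy() );
IndexWriter writer = new IndexWriter( directory, analyzer, snapshotter,
        IndexWriter.MaxFieldLength.UNLIMITED );
try {
    IndexCommit commit = snapshotter.snapshot(); // files of this commit won't be deleted
    try {
        for ( String fileName : commit.getFileNames() ) {
            // copy fileName from the master index directory to the shared directory
        }
    }
    finally {
        snapshotter.release(); // let the normal deletion policy resume
    }
}
finally {
    writer.close();
}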

_________________
Sanne
http://in.relation.to/


 Post subject: Re: HS : Master and Slave without JMS
PostPosted: Tue Dec 22, 2009 5:54 am 
Regular

Joined: Thu Oct 08, 2009 10:34 am
Posts: 55
Thanks for that Sanne. Will have a look at it after xmas and let you know how I got on.


 Post subject: Re: HS : Master and Slave without JMS
PostPosted: Mon Jan 04, 2010 10:08 am 
Regular

Joined: Thu Oct 08, 2009 10:34 am
Posts: 55
Hi Sanne,

OK, I have had a look at it today and hit a wall.

Quick recap: I am reusing the HS master and slave directory providers out of the box, without the JMS setup, and I have implemented my own jobs/scheduled tasks to keep the master index up to date. My worry is that the master node will copy the master index while an update is occurring, thus copying a corrupt index for clients to use. What I want to do is prevent the HS master directory provider from copying the index while an update to the index is in progress.

I reused the locking mechanism from the FSMasterDirectoryProvider implementation. A snippet of my update implementation looks like this:

Code:
session = getFullTextSession();
directoryProviderLock = this.getDirectoryLock( session, entity );
if ( directoryProviderLock != null ) {
    directoryProviderLock.lock();
    removeOldIndex( entity, session );
    indexEntity( entity, session );
}
else {
    log.error( "Lock is null, failed to get a hold of directory lock for entity " + entity );
}
// ... the lock is released further down


The problem with this approach is that the directory gets locked first by calling lock(). Later on, though, in removeOldIndex() I have:
Code:
private void removeOldIndex( Class<?> entity, FullTextSession fullTextSession ) {
    log.info( "About to purge index for " + entity );
    try {
        fullTextSession.beginTransaction();
        // purge all records before we add new ones; no in-place update in Lucene out of the box
        fullTextSession.purgeAll( entity );
        // the commit triggers a new thread which also tries to acquire a lock on the
        // directory, but it won't get it because we acquired it earlier
        fullTextSession.getTransaction().commit();
        log.info( "Finished purging for " + entity );
    }
    catch ( HibernateException e ) {
        log.error( "Problem occurred while purging index " + entity, e );
    }
}


On the commit, a separate thread (a "FutureTask") is spawned; this FutureTask also tries to acquire a lock on the directory, but it won't get it because the parent thread holds the lock. Basically it ends up in an infinite wait in PerDPQueueProcessor.java(84).

I have split the deletion and the addition across two transactions. Should I be doing this, or should I use one transaction? I read somewhere that splitting them was best practice (can't find the source now). I don't think even a single transaction would solve my problem, because in my "adding to index" code I'm calling fullTextSession.flushToIndexes(), which I assume writes to disk. (It's very similar to what's in the manual for manual indexing.)
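For context, the manual indexing pattern I'm following looks roughly like the one in the reference documentation (Email.class and BATCH_SIZE are placeholders for my own entity and batch size):

Code:
// Rough sketch of the manual indexing loop from the reference documentation.
fullTextSession.setFlushMode( FlushMode.MANUAL );
fullTextSession.setCacheMode( CacheMode.IGNORE );
Transaction tx = fullTextSession.beginTransaction();
ScrollableResults results = fullTextSession.createCriteria( Email.class )
        .setFetchSize( BATCH_SIZE )
        .scroll( ScrollMode.FORWARD_ONLY );
int index = 0;
while ( results.next() ) {
    index++;
    fullTextSession.index( results.get( 0 ) ); // index each element
    if ( index % BATCH_SIZE == 0 ) {
        fullTextSession.flushToIndexes(); // apply the changes to the index
        fullTextSession.clear();          // free memory since the queue has been processed
    }
}
tx.commit();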


I have looked briefly at the SnapshotDeletionPolicy; it looks like a roundabout way of doing what I want. I just want to prevent the master directory provider from copying the index while an update on the index is in progress. You must have something like this already in place for the JMS implementation? An update comes in off the queue and the master node needs to apply it to the index. Can you point me to this code in the tree? I can't find it, thanks!

It looks like I have hit a stumbling block for the moment. The only idea I have right now is to write my own version of FSMasterDirectoryProvider that checks whether any updates are in progress and holds off on the copying until they are finished.


Quote:
As an update is a delete plus an add, these two operations are wrapped in ownership of the directory lock.
So Search acquires this lock to make sure the index it copies never contains an applied delete while still missing the corresponding add.


This is probably the lock I need to get, but if my purge spins off a separate thread that tries to acquire the same lock, then the way I'm currently doing it won't work for me.

Any ideas or input are welcome.



Thanks Sanne,
LL


 Post subject: Re: HS : Master and Slave without JMS
PostPosted: Mon Jan 11, 2010 7:24 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
On the commit, a separate thread (a "FutureTask") is spawned; this FutureTask also tries to acquire a lock on the directory, but it won't get it because the parent thread holds the lock. Basically it ends up in an infinite wait in PerDPQueueProcessor.java(84).

Right, the delete operation is happening in its own thread, so you're deadlocking it. You don't need to (and should not) take the lock before you apply this operation: locks are driven by Search under the covers when applying the purgeAll (or any other) operation; they are not meant as public APIs.
Because you're delegating this operation to the appropriate thread, it is going to be nicely serialized with the other change operations.
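In other words, the update job can stay as plain as this rough sketch (getFullTextSession() and loadEntitiesToReindex() stand in for your own helpers, MyEntity for your entity type):

Code:
// Hedged sketch: the same purge + re-add job without taking the directory lock
// manually; Search serializes these changes against the index copy itself.
FullTextSession fullTextSession = getFullTextSession();  // your own helper
Transaction tx = fullTextSession.beginTransaction();
fullTextSession.purgeAll( MyEntity.class );               // delete the old documents
tx.commit();                                              // the backend applies the delete

tx = fullTextSession.beginTransaction();
for ( Object entity : loadEntitiesToReindex() ) {         // your own loader
    fullTextSession.index( entity );                      // add the fresh documents
}
fullTextSession.flushToIndexes();
tx.commit();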

Quote:
I have split the deletion and the addition across two transactions. Should I be doing this, or should I use one transaction? I read somewhere that splitting them was best practice (can't find the source now).

That's considered good practice (source: the book) because it prevents you from using twice the space on disk, but it's only needed for huge indexes when you have space problems.
I actually don't understand why you delete the whole index. Why don't you copy it over and apply the updates? And if you really need to delete it all, you can just delete the directory and all the files it contains: purgeAll is for deleting only the documents of one type (and its subtypes); to remove the whole index you can simply delete it.

Quote:
I have looked briefly at the SnapshotDeletionPolicy; it looks like a roundabout way of doing what I want. I just want to prevent the master directory provider from copying the index while an update on the index is in progress. You must have something like this already in place for the JMS implementation? An update comes in off the queue and the master node needs to apply it to the index. Can you point me to this code in the tree? I can't find it, thanks!

No, there's no code taking locks, because of the way Lucene works: you can always make a "hot" copy of the index and it will be consistent. When the backend applies changes it calls commit() at the end: until it commits, no change is visible; after it does, they are all atomically visible (either all changes or none).
Acquiring the DirectoryLock suspends the threads applying changes to the same index, so that is enough to make sure the changes are applied to the correct index and that you're not picking up changes you didn't want. Using a SnapshotDeletionPolicy you can get similar semantics without having to suspend the workers (no locks needed at all): so it's fine for making a copy, but you'll lose updates made after the snapshot was taken.

_________________
Sanne
http://in.relation.to/

