 Post subject: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Mon May 04, 2015 6:55 am 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
In our setup we have two Wildfly instances which have access to shared storage. Writes to the index are not frequent, so we would like to avoid the hassle of configuring JGroups and Infinispan (at least for now - I'll probably start another thread in the future :) ).

Is it possible to have the two Wildfly instances use the same storage location and manage the locks so that they are both able to access it without problems? We could use the master-slave filesystem DirectoryProvider configuration, but the refresh rates are too low for our requirements, as we would like to have the index available instantly on both nodes.

In the simplest case, we would just use the filesystem DirectoryProvider pointing to the same location on both Wildfly instances. This would be an NFS setup, so I assume we would need to use the 'simple' LockFactory?
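To make that concrete, this is a sketch of the configuration I have in mind for both nodes' persistence.xml (paths are only examples; from what I can tell, the marker-file based LockFactory is selected through the locking_strategy property):

```xml
<!-- both Wildfly instances point at the same NFS-mounted directory -->
<property name="hibernate.search.default.directory_provider" value="filesystem" />
<property name="hibernate.search.default.indexBase" value="/mnt/nfs/lucene/indexes/" />
<!-- 'simple' locking uses a marker file instead of native OS locks,
     which (I assume) is the safer choice on NFS -->
<property name="hibernate.search.default.locking_strategy" value="simple" />
```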


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 05, 2015 4:29 am 
Hibernate Team

Joined: Sun Sep 14, 2003 3:54 am
Posts: 7256
Location: Paris, France
It is possible, but not exactly as you describe it.
You still need to configure the master/slave logic so that a single node acquires the lock and updates the index. You can store this index on shared storage.
The other nodes must be slaves, but you can have them point to the shared storage directory and thus see the index "instantly", as you say.
So essentially you don't need the copy and the delay, as all nodes share the same directory structure; it's just that a single node writes.

_________________
Emmanuel


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 05, 2015 4:40 am 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
emmanuel wrote:
It is possible, but not exactly as you describe it.
You still need to configure the master/slave logic so that a single node acquires the lock and updates the index. You can store this index on shared storage.
The other nodes must be slaves, but you can have them point to the shared storage directory and thus see the index "instantly", as you say.
So essentially you don't need the copy and the delay, as all nodes share the same directory structure; it's just that a single node writes.


Thanks, but that doesn't quite do the trick. We need both nodes to write to the index. We never know which node will receive requests from clients and which node will update the database. That's the problem we have now with two nodes and separate indexes: when one of the nodes updates the database and the index, the other node is not aware of the updates.


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 05, 2015 5:13 am 
Hibernate Team

Joined: Sun Sep 14, 2003 3:54 am
Posts: 7256
Location: Paris, France
That is why you need the master/slave approach with either JMS or JGroups, to redirect the write workload to a single node. I understand it requires configuring JMS.

The shared lock atop a shared file system is not necessarily guaranteed to work (especially with NFS). But I think your nodes don't see each other's changes because you have hibernate.search.[default|<indexname>].exclusive_index_use set to true (the default, I think). Set it to false.
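For example, using the default index scope (a per-index setting works the same way, with the index name in place of 'default'):

```xml
<!-- release the IndexWriter between operations so the index lock
     is not held permanently by one node -->
<property name="hibernate.search.default.exclusive_index_use" value="false" />
```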

_________________
Emmanuel


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 05, 2015 5:44 am 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
Right now they don't see the changes because they use completely separate storage locations, so that's no surprise :)

So basically there's no way to rely on file lock mechanisms alone? Is JMS the only way to go? Do you know of a good example I could use, in that case?


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 05, 2015 11:47 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Hi,
there is an Arquillian test which runs the most common JMS based deployment here:
https://github.com/hibernate/hibernate-search/blob/master/integrationtest/wildfly/src/test/java/org/hibernate/search/test/integration/jms/SearchNewEntityJmsMasterSlaveUsingModulesIT.java

I would suggest checking out the whole source tree so you can see how it's configured.

A warning: those integration tests "cheat" a bit, as they don't actually have a shared database, so they only test the backend and index replication; you can't really test all of the normal queries that way (unless you change them to use a real datasource).

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Mon May 11, 2015 12:39 pm 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
Hi Sanne,

Thanks for the link. I followed the link you provided, also used the Hibernate Search reference, and made some progress, but I didn't get it to work in the end.

One question I have is how my slaves are supposed to know which master URL to connect to. I did some pure-JMS tests, and they require providing the Context.PROVIDER_URL property, which indicates the remote JMS server we're connecting to.

Code:
final Properties env = new Properties();
env.put(Context.INITIAL_CONTEXT_FACTORY, INITIAL_CONTEXT_FACTORY);
env.put(Context.PROVIDER_URL,
         System.getProperty(Context.PROVIDER_URL, PROVIDER_URL));
env.put(Context.SECURITY_PRINCIPAL, PRINCIPAL);
env.put(Context.SECURITY_CREDENTIALS, PASSWORD);

Context namingContext = new InitialContext(env);


I don't see such an option in the Hibernate Search JMS configuration (http://docs.jboss.org/hibernate/search/5.2/reference/en-US/html/search-configuration.html#jms-backend).

In my test setup I'm running two applications, one acting as the master, and the other one as the slave.

Master config:

Code:
         <!-- Properties for Hibernate Search -->
         <!-- Define Lucene version -->
         <property name="hibernate.search.lucene_version" value="LUCENE_4_10_4" />
         <!-- ## DirectoryProvider -->
         <!-- # (remote) master location where information is copied to -->
         <property name="hibernate.search.default.sourceBase" value="/opt/wildfly/lucene/shared/" />

         <!-- # local master location -->
         <property name="hibernate.search.default.indexBase" value="/opt/wildfly/lucene/local/" />

         <!-- # refresh every 10 seconds -->
         <property name="hibernate.search.default.refresh" value="10" />

         <!-- # appropriate directory provider -->
         <property name="hibernate.search.default.directory_provider"
            value="filesystem-master" />
         <!-- We don't need synchronous index updates - speed is not that important
            to us -->
         <property name="hibernate.search.default.worker.execution"
            value="async" />


Slave configuration:

Code:
<property name="hibernate.search.default.sourceBase" value="/opt/wildfly/lucene/shared/" />
         <property name="hibernate.search.default.indexBase" value="/opt/wildfly/lucene/indexes/" />
         <property name="hibernate.search.default.directory_provider"
            value="filesystem-slave" />

         <!-- # refresh every 10 seconds -->
         <property name="hibernate.search.default.refresh" value="10" />

         <!-- Properties for Hibernate Search -->
         <!-- Define Lucene version -->
         <property name="hibernate.search.lucene_version" value="LUCENE_4_10_4" />

         <!-- We don't need synchronous index updates - speed is not that important
            to us -->
         <property name="hibernate.search.default.worker.execution"
            value="async" />

         <!-- ## Backend configuration -->
         <property name="hibernate.search.default.worker.backend"
            value="jms" />
         <property name="hibernate.search.default.worker.jms.connection_factory"
            value="java:jboss/exported/jms/RemoteConnectionFactory" />
         <property name="hibernate.search.default.worker.jms.queue"
            value="java:jboss/exported/jms/queues/luceneIndexQueue" />
         <!-- #optionally authentication credentials: -->
         <property name="hibernate.search.default.worker.jms.login"
            value="myuser" />
         <property name="hibernate.search.default.worker.jms.password"
            value="mypassword" />


During deployment, I get the following error:

Code:
Caused by: org.hibernate.search.exception.SearchException: Failed to initialize DirectoryProvider "com.mypkg.TestClass": could not find marker file in index source


And I'm kind of stuck at this point. I'm not sure whether running this on the same Wildfly instance makes sense for tests, but I don't see why it wouldn't.

Edit: To clarify, my master application does not know anything about the database; its only purpose is to collect the messages from the slaves, which use the database. That's our plan for production:

- One 'master' node which only takes the messages and manages the master index
- Two 'slave' nodes, which connect to the same database and send JMS messages to the master
- The shared index uses NFS and is copied by the slaves to their respective local directories

Edit 2: By the way, it seems that the signature of an abstract method in AbstractJMSHibernateSearchController has changed:

Code:
   @Override
   protected SearchIntegrator getSearchIntegrator() {
      // new abstract method - it's not clear to me what to return here
   }


The reference shows another method:
Code:
@Override
protected SearchFactory getSearchFactory() {
   FullTextEntityManager fullTextEntityManager = Search.getFullTextEntityManager(em);
   return fullTextEntityManager.getSearchFactory();
}


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Mon May 11, 2015 6:39 pm 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Hi,
your configuration looks correct, and the architecture you describe seems correct too.

So, to confirm: it's /opt/wildfly/lucene/shared/ which is local to the master node but visible to the slaves via NFS? Can you confirm that the master node created the marker file, and that it's visible to both slaves?

The refresh period of 10 seconds seems a bit aggressive to me - especially if you plan to transfer large indexes - but this shouldn't be the problem.

BTW thanks for reporting the documentation mistake; I've filed it as https://hibernate.atlassian.net/browse/HSEARCH-1868

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Mon May 11, 2015 7:55 pm 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
Right now the slave application and the master application are running on the same machine, on the same Wildfly instance, so there's no NFS. There are three directories:

- /local - 'local' to the master
- /indexes - 'local' to the slave
- /shared - written to by the master, read from by the slave

Having deployed only the master application, however, those folders (local and shared) weren't created at all. This looked suspicious to me, but I thought that could be the case until the first message with an index update is sent from the slave. Unfortunately, the slave application won't deploy at all, since the files are missing. I need to check that the persistence.xml file is in the correct place in the master application and that it gets recognized at all. Can you confirm that the fact that the master application has no connection to the database makes no difference? It also doesn't know about the entities that are to be indexed. It's supposed to act only as a collector of messages from the multiple slave nodes.

Also, I still don't understand how to make sure that, in the final setup, the slave nodes connect to the master. Where do I put the PROVIDER_URL property shown in the simple remote JMS example? If the master is running on master.mydomain.com and the slaves are slavex.mydomain.com and slavey.mydomain.com, how do I force the slaves to connect to http-remoting://master.mydomain.com:8080 ?
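For reference, in my standalone JMS test the provider URL lives in the environment passed to InitialContext, or equivalently in a jndi.properties file on the client classpath. The hostname and credentials below are just my example values, and where Hibernate Search would pick this up is exactly what I'm unsure about:

```properties
# jndi.properties for a standalone client talking to WildFly's remoting port
java.naming.factory.initial=org.jboss.naming.remote.client.InitialContextFactory
java.naming.provider.url=http-remoting://master.mydomain.com:8080
java.naming.security.principal=myuser
java.naming.security.credentials=mypassword
```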

We need really fast index update times, but we don't expect the indexes to change very often, at least not soon, so there won't be much traffic there. As far as I understand, there is some kind of differential mechanism which checks for changes in the files and doesn't copy everything, right?

As for the discrepancy in the documentation, maybe this is also a problem? I'm casting the SearchFactory returned by the FullTextEntityManager instance to ImmutableSearchFactory, which implements SearchIntegrator, but I'm not sure this is the correct approach.

Thanks for clarifying all those things for me, and I hope that with just a few more answers I will be able to get it going :)


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 12, 2015 5:37 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
Having deployed only the master application, however, those folders (local and shared) weren't created at all. This looked suspicious to me but I thought that could be the case

The master will create the sub-directories it needs, but the base path must exist! I'm surprised you don't get an appropriate error message; that might be related to the next point:

Quote:
Can you confirm that the fact that the master application has no connection to the database make no difference? It also doesn't know about the entities that are to be indexed. It's supposed to act only as a collector of messages from the multiple slave entities.

No, that's not possible, not without some small changes. First off, it is still a Hibernate deployment, so you need to get Hibernate to boot, and it has to be able to recognize the same entities: the entity classes need to be defined and available to the master deployment. You are correct that it won't actually need to load anything from the database, so the connection to the RDBMS won't be needed, but I think one is mandatory to start Hibernate.
The main problem is that the indexes are started based on the @Indexed annotation on entities: if there are none, then the indexing backends won't be started, the DirectoryProvider configuration won't be loaded, and you won't get validation of the directories.

I do agree with your idea though. We have plans for a standalone master node which won't even need the Hibernate (ORM) dependency, nor the annotated entities. It's going to be quite some work to get there though, probably version 6.0.

Quote:
Also, I still don't understand how to make sure that, in the final setup, the slave nodes connect to the master. Where do I put the PROVIDER_URL property as shown in the simple remote JMS example? If the master is running on master.mydomain.com and the slaves are slavex.mydomain.com and slavey.mydomain.com how do I force the slave to connect to http-remoting://master.mydomain.com:8080 ?

Honestly, I don't know. There are several users running the JMS backend successfully, and nobody has ever asked for a PROVIDER_URL, so I'm not sure why you would need it. I'm not too familiar with JMS myself; I'd assume this is something you configure somewhere else in the application server. AFAIK all you need is to look up a queue by name, while routing configuration is not something to be defined or configured within the deployment.

Quote:
We need really fast index update times but we don't expect the indices to change much often, at least not soon, so there won't be much traffic there. As far as I understand, there is some kind of differential mechanism, which checks for changes in the files and doesn't copy everything, right?

That's correct, the transfer is differential. But it works at the file level, and index segments are organized in approximately 10 (depending on your merge configuration) large, immutable files. Sometimes two files will be merged into one, meaning two (or more) are deleted and a new one is created. The new file will be entirely different from the previous chunks.
So while you won't be transferring the whole index each time, you might want to test that this happens fast enough.
If you need really fast (real-time) replication of index changes, you could switch the DirectoryProvider to Infinispan; you'd still need the JMS backend though, so let's start by getting that to work first ;-)

Alternatively, you could tune your merge factors to keep the index more fragmented; that will have a small penalty on query speed, but query speed also benefits from segments not changing (the caches get better usage), so the effects might somewhat compensate for each other.
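As a sketch, the IndexWriter tuning would look something like this (the values are placeholders to be tuned by measurement; a higher merge_factor plus a cap on segment size keeps more, smaller segments around):

```xml
<!-- merge less eagerly, so more small segments survive between refreshes -->
<property name="hibernate.search.default.indexwriter.merge_factor" value="30" />
<!-- cap segment size so a merge never produces one huge file to re-transfer -->
<property name="hibernate.search.default.indexwriter.max_merge_docs" value="50000" />
```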

Quote:
As far as the discrepancy in the documentation goes, maybe this is also a problem? I'm casting the SearchFactory returned by the FullTextEntityManager instance to ImmutableSearchFactory, which implements SearchIntegrator but I'm not sure this is the correct approach.


Code:
SearchIntegrator si = fulltextEntityManager.getSearchFactory().unwrap(SearchIntegrator.class);

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 12, 2015 5:52 am 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
sanne.grinovero wrote:
No, that's not possible, not without some small changes. First off, it is still a Hibernate deployment, so you need to get Hibernate to boot, and it has to be able to recognize the same entities: the entity classes need to be defined and available to the master deployment. You are correct that it won't actually need to load anything from the database, so the connection to the RDBMS won't be needed, but I think one is mandatory to start Hibernate.
The main problem is that the indexes are started based on the @Indexed annotation on entities: if there are none, then the indexing backends won't be started, the DirectoryProvider configuration won't be loaded, and you won't get validation of the directories.

I do agree with your idea though. We have plans for a standalone master node which won't even need the Hibernate (ORM) dependency, nor the annotated entities. It's going to be quite some work to get there though, probably version 6.0.


So that's where the problem has to be then :) I was hoping for a standalone master node, but now I understand that's not possible. It makes sense then that with no @Entity classes, no index directories or files were created by the master. The problem with two nodes running a master-slave setup is that if the master fails, the slave nodes will not get their index files updated. Is there a mechanism to somehow switch over to a standalone filesystem DirectoryProvider, or to change the master dynamically? I saw in the reference documentation that this is an experimental feature of the JGroups backend, but this is something quite crucial, as it introduces a single point of failure.

Quote:
Honestly, I don't know. There are several users running the JMS backend successfully, and nobody has ever asked for a PROVIDER_URL, so I'm not sure why you would need it. I'm not too familiar with JMS myself; I'd assume this is something you configure somewhere else in the application server. AFAIK all you need is to look up a queue by name, while routing configuration is not something to be defined or configured within the deployment.


I need to go back to the drawing board then, and test the JMS example again some other way by configuring it at the server level.

Quote:
That's correct, the transfer is differential. But it works at the file level, and index segments are organized in approximately 10 (depending on your merge configuration) large, immutable files. Sometimes two files will be merged into one, meaning two (or more) are deleted and a new one is created. The new file will be entirely different from the previous chunks.
So while you won't be transferring the whole index each time, you might want to test that this happens fast enough.
If you need really fast (real-time) replication of index changes, you could switch the DirectoryProvider to Infinispan; you'd still need the JMS backend though, so let's start by getting that to work first ;-)

Alternatively, you could tune your merge factors to keep the index more fragmented; that will have a small penalty on query speed, but query speed also benefits from segments not changing (the caches get better usage), so the effects might somewhat compensate for each other.


Thanks for the clarification. I'll try testing it with two 'real' applications using only the JMS backend with files and I'll get back with more questions after I do some more tests :)


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 12, 2015 6:28 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
The problem with two nodes running a master-slave setup is that if the master fails, the slave nodes will not get their index files updated. Is there a mechanism to somehow switch over to standalone filesystem DirectoryProvider or to change the master dynamically? I saw in the reference document that this is an experimental feature for the JGroups backend but this is something quite crucial, as it introduces a single point of failure.


The JGroups backend is still experimental; we didn't have time to work more on that yet, but I plan to resume that research as soon as possible. Generally, people using the JMS backend will be able to set up a second master node and configure the queues for failover of the singleton consumer; how you do that, though, depends on the JMS implementation.

Note that it might not be worth the complexity: your application is not "down" when the master crashes, as it can still run queries and generate index update messages. Also, you are already designing it to be able to deal with queries being slightly out of date. The only effect of the master node being down is that the delay of index updates gets a bit longer, and how long exactly depends on how quickly your monitoring systems restart the master node. Since the queue can be configured to be persistent, the system is eventually consistent with no data loss and no critical points of failure; you'll just have some minutes of stale indexes, and only when the master crashes.

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Tue May 12, 2015 9:26 am 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
We've been trying to stay away from JGroups in other areas of our production environment, so I don't want to start here :)

You're right that it usually shouldn't take long to bring the master node back to life. However, our system is designed in a way that both nodes are equal and it should be possible to even stop one node entirely and run the system only on one of them for longer periods of time. In that case, we would have to change the Hibernate Search setup to use the local index only.

I don't know if there's another way to keep the system modular. I think it would be nice to be able to use one node as both a master and a slave. In our case, each node would send JMS messages to the other, and there would be two separate 'shared' index locations. At copy time, some kind of arbiter would decide where to get the copy from. This would also be a scalable solution: in a system with N nodes, M of them would be master/slave nodes and N-M would be pure slaves. I don't know how the indexes are constructed internally or how difficult it would be to detect which of the 'shared' locations to copy from, but that's something that seems useful to me from an architectural standpoint.


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Wed May 13, 2015 1:50 pm 
Beginner

Joined: Mon Feb 16, 2015 6:41 am
Posts: 32
Location: Lodz, Poland
I made some progress, in that the index files are now copied from one node to another. However, JMS is still an issue. I know you mentioned you're not an expert, Sanne, but maybe someone else can help :)

The error I'm getting is very generic and has to do with transactions:

Code:
19:37:33,961 ERROR [stderr] javax.jms.JMSException: Could not create a session: IJ000460: Error checking for a transaction
19:37:33,962 ERROR [stderr]     at org.hornetq.ra.HornetQRASessionFactoryImpl.allocateConnection(HornetQRASessionFactoryImpl.java:1002)
19:37:33,962 ERROR [stderr]     at org.hornetq.ra.HornetQRASessionFactoryImpl.createSession(HornetQRASessionFactoryImpl.java:566)
19:37:33,962 ERROR [stderr]     at org.hornetq.ra.HornetQRASessionFactoryImpl.createSession(HornetQRASessionFactoryImpl.java:822)
19:37:33,962 ERROR [stderr]     at org.hornetq.ra.HornetQRASessionFactoryImpl.createSession(HornetQRASessionFactoryImpl.java:828)
19:37:33,963 ERROR [stderr]     at com.tellyo.rtc.utils.LuceneIndexJMSBackendWorker$1.run(LuceneIndexJMSBackendWorker.java:122)
19:37:33,963 ERROR [stderr]     at com.tellyo.rtc.utils.LuceneIndexJMSBackendWorker.applyWork(LuceneIndexJMSBackendWorker.java:139)
19:37:33,963 ERROR [stderr]     at org.hibernate.search.indexes.spi.DirectoryBasedIndexManager.performOperations(DirectoryBasedIndexManager.java:112)
19:37:33,963 ERROR [stderr]     at org.hibernate.search.backend.impl.WorkQueuePerIndexSplitter.commitOperations(WorkQueuePerIndexSplitter.java:49)
19:37:33,964 ERROR [stderr]     at org.hibernate.search.backend.impl.BatchedQueueingProcessor.performWorks(BatchedQueueingProcessor.java:81)
19:37:33,964 ERROR [stderr]     at org.hibernate.search.backend.impl.PostTransactionWorkQueueSynchronization.afterCompletion(PostTransactionWorkQueueSynchronization.java:88)
19:37:33,964 ERROR [stderr]     at org.hibernate.search.backend.impl.EventSourceTransactionContext$DelegateToSynchronizationOnAfterTx.doAfterTransactionCompletion(EventSourceTransactionContext.java:192)
19:37:33,964 ERROR [stderr]     at org.hibernate.engine.spi.ActionQueue$AfterTransactionCompletionProcessQueue.afterTransactionCompletion(ActionQueue.java:744)
19:37:33,965 ERROR [stderr]     at org.hibernate.engine.spi.ActionQueue.afterTransactionCompletion(ActionQueue.java:379)
19:37:33,965 ERROR [stderr]     at org.hibernate.internal.SessionImpl.afterTransactionCompletion(SessionImpl.java:529)
19:37:33,965 ERROR [stderr]     at org.hibernate.engine.transaction.internal.TransactionCoordinatorImpl.afterTransaction(TransactionCoordinatorImpl.java:150)
19:37:33,965 ERROR [stderr]     at org.hibernate.engine.transaction.synchronization.internal.SynchronizationCallbackCoordinatorNonTrackingImpl.doAfterCompletion(SynchronizationCallbackCoordinatorNonTrackingImpl.java:140)
19:37:33,966 ERROR [stderr]     at org.hibernate.engine.transaction.synchronization.internal.SynchronizationCallbackCoordinatorTrackingImpl.afterCompletion(SynchronizationCallbackCoordinatorTrackingImpl.java:87)
19:37:33,966 ERROR [stderr]     at org.hibernate.engine.transaction.synchronization.internal.RegisteredSynchronization.afterCompletion(RegisteredSynchronization.java:56)
19:37:33,966 ERROR [stderr]     at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.afterCompletion(SynchronizationImple.java:96)
19:37:33,966 ERROR [stderr]     at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.afterCompletion(TwoPhaseCoordinator.java:532)
19:37:33,967 ERROR [stderr]     at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:101)
19:37:33,967 ERROR [stderr]     at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:162)
19:37:33,967 ERROR [stderr]     at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1166)
19:37:33,967 ERROR [stderr]     at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:126)
19:37:33,968 ERROR [stderr]     at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
19:37:33,968 ERROR [stderr]     at org.jboss.as.ejb3.tx.CMTTxInterceptor.endTransaction(CMTTxInterceptor.java:93)
19:37:33,968 ERROR [stderr]     at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:277)
19:37:33,969 ERROR [stderr]     at org.jboss.as.ejb3.tx.CMTTxInterceptor.requiresNew(CMTTxInterceptor.java:364)
19:37:33,969 ERROR [stderr]     at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:241)
19:37:33,969 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,969 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
19:37:33,970 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,970 ERROR [stderr]     at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43)
19:37:33,970 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,971 ERROR [stderr]     at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:95)
19:37:33,971 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,971 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
19:37:33,971 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,972 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59)
19:37:33,972 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,972 ERROR [stderr]     at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
19:37:33,972 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,973 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.AdditionalSetupInterceptor.processInvocation(AdditionalSetupInterceptor.java:55)
19:37:33,973 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,973 ERROR [stderr]     at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
19:37:33,973 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,974 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:326)
19:37:33,974 ERROR [stderr]     at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:439)
19:37:33,974 ERROR [stderr]     at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
19:37:33,974 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,975 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:326)
19:37:33,975 ERROR [stderr]     at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
19:37:33,975 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,975 ERROR [stderr]     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
19:37:33,976 ERROR [stderr]     at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:185)
19:37:33,976 ERROR [stderr]     at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:182)
19:37:33,976 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,976 ERROR [stderr]     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
19:37:33,977 ERROR [stderr]     at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
19:37:33,977 ERROR [stderr]     at com.tellyo.rtc.dao.MomentDAO$$$view204.shareMoment(Unknown Source)
19:37:33,977 ERROR [stderr]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
19:37:33,977 ERROR [stderr]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
19:37:33,978 ERROR [stderr]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
19:37:33,978 ERROR [stderr]     at java.lang.reflect.Method.invoke(Method.java:606)
19:37:33,978 ERROR [stderr]     at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:414)
19:37:33,978 ERROR [stderr]     at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:127)
19:37:33,979 ERROR [stderr]     at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56)
19:37:33,979 ERROR [stderr]     at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:65)
19:37:33,979 ERROR [stderr]     at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100)
19:37:33,979 ERROR [stderr]     at com.tellyo.rtc.dao.MomentDAO$Proxy$_$$_Weld$EnterpriseProxy$.shareMoment(Unknown Source)
19:37:33,980 ERROR [stderr]     at com.tellyo.rtc.logic.MomentLogic.shareProRecording(MomentLogic.java:2229)
19:37:33,980 ERROR [stderr]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
19:37:33,980 ERROR [stderr]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
19:37:33,980 ERROR [stderr]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
19:37:33,981 ERROR [stderr]     at java.lang.reflect.Method.invoke(Method.java:606)
19:37:33,981 ERROR [stderr]     at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
19:37:33,981 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,981 ERROR [stderr]     at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
19:37:33,982 ERROR [stderr]     at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
19:37:33,982 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,982 ERROR [stderr]     at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
19:37:33,982 ERROR [stderr]     at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:82)
19:37:33,983 ERROR [stderr]     at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93)
19:37:33,983 ERROR [stderr]     at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
19:37:33,983 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,983 ERROR [stderr]     at org.jboss.invocation.WeavedInterceptor.processInvocation(WeavedInterceptor.java:53)
19:37:33,984 ERROR [stderr]     at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
19:37:33,984 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,984 ERROR [stderr]     at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
19:37:33,984 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,985 ERROR [stderr]     at org.jboss.as.jpa.interceptor.SBInvocationInterceptor.processInvocation(SBInvocationInterceptor.java:47)
19:37:33,985 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,985 ERROR [stderr]     at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:407)
19:37:33,985 ERROR [stderr]     at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:55)
19:37:33,986 ERROR [stderr]     at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83)
19:37:33,986 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,986 ERROR [stderr]     at org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
19:37:33,986 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,987 ERROR [stderr]     at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21)
19:37:33,987 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,987 ERROR [stderr]     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
19:37:33,987 ERROR [stderr]     at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:53)
19:37:33,988 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,988 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.NonPooledEJBComponentInstanceAssociatingInterceptor.processInvocation(NonPooledEJBComponentInstanceAssociatingInterceptor.java:59)
19:37:33,988 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,988 ERROR [stderr]     at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInOurTx(CMTTxInterceptor.java:273)
19:37:33,988 ERROR [stderr]     at org.jboss.as.ejb3.tx.CMTTxInterceptor.required(CMTTxInterceptor.java:340)
19:37:33,989 ERROR [stderr]     at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:239)
19:37:33,989 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,989 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41)
19:37:33,989 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,990 ERROR [stderr]     at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:43)
19:37:33,990 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,990 ERROR [stderr]     at org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:95)
19:37:33,990 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,991 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64)
19:37:33,991 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,991 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:59)
19:37:33,991 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,992 ERROR [stderr]     at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50)
19:37:33,992 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,992 ERROR [stderr]     at org.jboss.as.ejb3.component.interceptors.AdditionalSetupInterceptor.processInvocation(AdditionalSetupInterceptor.java:55)
19:37:33,992 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,993 ERROR [stderr]     at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64)
19:37:33,993 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,993 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:326)
19:37:33,993 ERROR [stderr]     at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:448)
19:37:33,994 ERROR [stderr]     at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61)
19:37:33,994 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,994 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:326)
19:37:33,994 ERROR [stderr]     at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80)
19:37:33,995 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,995 ERROR [stderr]     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
19:37:33,995 ERROR [stderr]     at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:185)
19:37:33,995 ERROR [stderr]     at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:182)
19:37:33,996 ERROR [stderr]     at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:309)
19:37:33,996 ERROR [stderr]     at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61)
19:37:33,996 ERROR [stderr]     at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73)
19:37:33,996 ERROR [stderr]     at com.tellyo.rtc.logic.MomentLogic$$$view209.shareProRecording(Unknown Source)
19:37:33,997 ERROR [stderr]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
19:37:33,997 ERROR [stderr]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
19:37:33,997 ERROR [stderr]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
19:37:33,997 ERROR [stderr]     at java.lang.reflect.Method.invoke(Method.java:606)
19:37:33,998 ERROR [stderr]     at org.jboss.weld.util.reflection.Reflections.invokeAndUnwrap(Reflections.java:414)
19:37:33,998 ERROR [stderr]     at org.jboss.weld.bean.proxy.EnterpriseBeanProxyMethodHandler.invoke(EnterpriseBeanProxyMethodHandler.java:127)
19:37:33,998 ERROR [stderr]     at org.jboss.weld.bean.proxy.EnterpriseTargetBeanInstance.invoke(EnterpriseTargetBeanInstance.java:56)
19:37:33,998 ERROR [stderr]     at org.jboss.weld.bean.proxy.InjectionPointPropagatingEnterpriseTargetBeanInstance.invoke(InjectionPointPropagatingEnterpriseTargetBeanInstance.java:65)
19:37:33,999 ERROR [stderr]     at org.jboss.weld.bean.proxy.ProxyMethodHandler.invoke(ProxyMethodHandler.java:100)
19:37:33,999 ERROR [stderr]     at com.tellyo.rtc.logic.MomentLogic$Proxy$_$$_Weld$EnterpriseProxy$.shareProRecording(Unknown Source)
19:37:33,999 ERROR [stderr]     at com.tellyo.rtc.web.rest.ProRESTService$3.run(ProRESTService.java:735)
19:37:33,999 ERROR [stderr]     at java.lang.Thread.run(Thread.java:745)
19:37:34,000 ERROR [stderr] Caused by: javax.resource.ResourceException: IJ000460: Error checking for a transaction
19:37:34,000 ERROR [stderr]     at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.getManagedConnection(TxConnectionManagerImpl.java:416)
19:37:34,001 ERROR [stderr]     at org.jboss.jca.core.connectionmanager.AbstractConnectionManager.allocateConnection(AbstractConnectionManager.java:513)
19:37:34,001 ERROR [stderr]     at org.hornetq.ra.HornetQRASessionFactoryImpl.allocateConnection(HornetQRASessionFactoryImpl.java:948)
19:37:34,001 ERROR [stderr]     ... 150 more
19:37:34,002 ERROR [stderr] Caused by: javax.resource.ResourceException: IJ000459: Transaction is not active: tx=TransactionImple < ac, BasicAction: 0:ffff5cde73e5:-459f89fc:555388df:9a3 status: ActionStatus.COMMITTED >
19:37:34,002 ERROR [stderr]     at org.jboss.jca.core.connectionmanager.tx.TxConnectionManagerImpl.getManagedConnection(TxConnectionManagerImpl.java:400)
19:37:34,002 ERROR [stderr]     ... 152 more


The code used to send the message is as follows (I used JndiJMSBackendQueueProcessor as a template):

Code:
@Stateless
public class LuceneIndexJMSBackendWorker implements BackendQueueProcessor {

   private static Logger logger = Logger
         .getLogger(LuceneIndexJMSBackendWorker.class);

   public static final String JMS_QUEUE = Environment.WORKER_PREFIX
         + "jms.queue";
   protected static final String JNDI_PREFIX = Environment.WORKER_PREFIX
         + "jndi.";

   private IndexManager indexManager;
   private SearchIntegrator integrator;
   private String indexName;

   private ConnectionFactory connectionFactory;
   private Connection connection;
   private String jmsQueueName;

   @Override
   public void initialize(Properties props, WorkerBuildContext context,
         DirectoryBasedIndexManager indexManager) {

      try {
         InitialContext ctx = JNDIHelper.getInitialContext(props,
               JNDI_PREFIX);
         this.connectionFactory = (ConnectionFactory) ctx
               .lookup("java:/LuceneRemote");
         this.connection = this.connectionFactory.createConnection();

      } catch (NamingException | JMSException e) {
         // fail fast: without a JMS connection this backend is unusable
         throw new RuntimeException(
               "Unable to initialize the JMS backend", e);
      }

      this.indexManager = indexManager;
      this.indexName = indexManager.getIndexName();
      this.integrator = context.getUninitializedSearchIntegrator();
      this.jmsQueueName = props.getProperty(JMS_QUEUE);
   }

   public SearchIntegrator getSearchIntegrator() {
      return integrator;
   }

   @Override
   public void close() {
      try {
         if (connection != null) {
            connection.close();
         }
      } catch (JMSException e) {
         e.printStackTrace();
      }
   }

   @Override
   @TransactionAttribute(TransactionAttributeType.MANDATORY)
   public void applyWork(List<LuceneWork> workList, IndexingMonitor monitor) {
      if (workList == null) {
         throw new IllegalArgumentException("workList should not be null");
      }

      final List<LuceneWork> runnableWorkList = workList;

      Runnable operation = new Runnable() {
         @Override
         public void run() {
            List<LuceneWork> filteredQueue = new ArrayList<LuceneWork>(
                  runnableWorkList);
            for (LuceneWork work : runnableWorkList) {
               if (work instanceof OptimizeLuceneWork) {
                  // we don't want optimization to be propagated
                  filteredQueue.remove(work);
               }
            }
            if (filteredQueue.isEmpty()) {
               return;
            }
            LuceneWorkSerializer serializer = indexManager.getSerializer();
            byte[] data = serializer.toSerializedModel(filteredQueue);
            try {
               logger.debug("Trying to send JMS Message");

               Session session = connection.createSession(false,
                     Session.AUTO_ACKNOWLEDGE);
               Queue jmsQueue = session.createQueue(jmsQueueName);
               ObjectMessage message = session.createObjectMessage();
               message.setObject(data);
               message.setStringProperty(
                     Environment.INDEX_NAME_JMS_PROPERTY, indexName);

               session.createProducer(jmsQueue).send(message);
               logger.debug("JMS Message sent");
            } catch (JMSException e) {
               logger.error("Unable to send JMS work for index " + indexName
                     + " using queue " + jmsQueueName, e);
            }

         }
      };
      operation.run();
   }

   @Override
   public void applyStreamWork(LuceneWork singleOperation,
         IndexingMonitor monitor) {
      applyWork(Collections.singletonList(singleOperation), monitor);
   }

   @Override
   public Lock getExclusiveWriteLock() {
      logger.warn("Suspicious backend/directory combination for index "
            + indexName);
      return new ReentrantLock(); // keep the invoker happy; the lock is not actually used
   }

   @Override
   public void indexMappingChanged() {
      // no-op
   }
}


I'm using the following pooled-connection-factory, defined in the standalone.xml file:

Code:
                    <pooled-connection-factory name="hornetq-remote-jms-lucene">
                        <transaction mode="xa"/>
                        <user>jmsuser</user>
                        <password>jmspassword</password>
                        <connectors>
                            <connector-ref connector-name="http-connector-remote-lucene"/>
                        </connectors>
                        <entries>
                            <entry name="java:/LuceneRemote"/>
                        </entries>
                    </pooled-connection-factory>


The connector is bound to an outbound-socket-binding and I'm sure the connection is established correctly. What I'm missing is an understanding of the transaction management in this case: I don't know how to make sure that a transaction is active when the message is sent.
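
One idea I had (only a sketch — I haven't verified that this WildFly/HornetQ version accepts these exact values) is to switch the factory away from XA mode, so the JCA layer would no longer check for an active JTA transaction on every send:

Code:
```xml
<!-- Sketch: same factory as above, but not enlisting in a JTA transaction.
     Untested assumption: mode="none" avoids the IJ000460/IJ000459 check
     when sending from a plain Thread. -->
<pooled-connection-factory name="hornetq-remote-jms-lucene">
    <transaction mode="none"/>
    <user>jmsuser</user>
    <password>jmspassword</password>
    <connectors>
        <connector-ref connector-name="http-connector-remote-lucene"/>
    </connectors>
    <entries>
        <entry name="java:/LuceneRemote"/>
    </entries>
</pooled-connection-factory>
```

Would that be a reasonable approach, or does the backend rely on the send being transactional?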


 Post subject: Re: Storing Lucene Index in filesystem for two Wildfly instances
PostPosted: Thu May 14, 2015 6:07 am 
Hibernate Team
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
It doesn't need to be transactional, but you're requiring a transaction yourself by annotating the method with @TransactionAttribute(TransactionAttributeType.MANDATORY).
Could you try simply removing the @TransactionAttribute annotation?

_________________
Sanne
http://in.relation.to/


© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.