Hi there,
I'm new to Hibernate Search and I'm trying to set it up with JGroups and Infinispan, as described in the 4.2 manual. I've noticed that the log statements seem to suggest that the same cluster is being started twice, and I'm not sure why.
My Spring 3 configuration contains a number of JPA properties:
Code:
<property name="jpaProperties">
    <props>
        <prop key="hibernate.search.default.exclusive_index_use">false</prop>
        <prop key="hibernate.search.services.jgroups.clusterName">CrisisHubCluster</prop>
        <prop key="hibernate.search.services.jgroups.configurationFile">jgroups-config.xml</prop>
        <prop key="hibernate.search.infinispan.configuration_resourcename">infinispan-config.xml</prop>
        <prop key="hibernate.search.default.worker.execution">sync</prop>
        <prop key="hibernate.search.default.directory_provider">infinispan</prop>
        <prop key="hibernate.search.default.indexBase">${search.index.dir}</prop>
        <prop key="hibernate.search.default.worker.backend">jgroups</prop>
        <prop key="hibernate.search.error_handler">log</prop>
        <prop key="hibernate.search.lucene_version">LUCENE_36</prop>
        <prop key="hibernate.search.generate_statistics">true</prop>
        <prop key="hibernate.search.jmx_enabled">true</prop>
    </props>
</property>
I've pointed to a custom config file for JGroups and another for Infinispan, even though they are mostly identical to the defaults.
The JGroups config file (identical to the default):
Code:
<config xmlns="urn:org:jgroups"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
    <UDP/>
    <PING/>
    <MERGE2/>
    <FD_SOCK/>
    <FD_ALL/>
    <VERIFY_SUSPECT/>
    <pbcast.NAKACK/>
    <UNICAST/>
    <pbcast.STABLE/>
    <pbcast.GMS/>
    <UFC/>
    <MFC/>
    <FRAG2/>
    <pbcast.STATE_TRANSFER/>
    <pbcast.FLUSH timeout="0"/>
</config>
The Infinispan config file. This is similar to the default, but with the custom cluster name and a file cache store added:
Code:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.1 http://www.infinispan.org/schemas/infinispan-config-5.1.xsd"
xmlns="urn:infinispan:config:5.1">
<!-- *************************** -->
<!-- System-wide global settings -->
<!-- *************************** -->
<global>
<!-- Duplicate domains are allowed so that multiple deployments with default configuration
of Hibernate Search applications work - if possible it would be better to use JNDI to share
the CacheManager across applications -->
<globalJmxStatistics
enabled="true"
cacheManagerName="HibernateSearch"
allowDuplicateDomains="true" />
<!-- If the transport is omitted, there is no way to create distributed or clustered
caches. There is no added cost to defining a transport but not creating a cache that uses one,
since the transport is created and initialized lazily. -->
<transport
clusterName="CrisisHubCluster">
<!-- Note that the JGroups transport uses sensible defaults if no configuration
property is defined. See the JGroupsTransport javadocs for more flags -->
</transport>
<!-- Note that the JGroups transport uses sensible defaults if no configuration property
is defined. See the Infinispan wiki for more JGroups settings:
http://community.jboss.org/wiki/ClusteredConfigurationQuickStart -->
<!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER, DONT_REGISTER.
Hibernate Search takes care to stop the CacheManager so registering is not needed -->
<shutdown
hookBehavior="DONT_REGISTER" />
</global>
<!-- *************************** -->
<!-- Default "template" settings -->
<!-- *************************** -->
<default>
<locking
lockAcquisitionTimeout="20000"
writeSkewCheck="false"
concurrencyLevel="500"
useLockStriping="false" />
<!-- Invocation batching is required for use with the Lucene Directory -->
<invocationBatching
enabled="true" />
<!-- This element specifies that the cache is clustered. modes supported: distribution
(d), replication (r) or invalidation (i). Don't use invalidation to store Lucene indexes (as
with Hibernate Search DirectoryProvider). Replication is recommended for best performance of
Lucene indexes, but make sure you have enough memory to store the index in your heap.
Also distribution scales much better than replication on high number of nodes in the cluster. -->
<clustering
mode="replication">
<!-- Prefer loading all data at startup than later -->
<stateTransfer
timeout="480000"
fetchInMemoryState="true" />
<!-- Network calls are synchronous by default -->
<sync
replTimeout="20000" />
</clustering>
<jmxStatistics
enabled="true" />
<eviction
maxEntries="-1"
strategy="NONE" />
<expiration
maxIdle="-1" />
</default>
<!-- ******************************************************************************* -->
<!-- Individually configured "named" caches. -->
<!-- -->
<!-- While default configuration happens to be fine with similar settings across the -->
<!-- three caches, they should generally be different in a production environment. -->
<!-- -->
<!-- Current settings could easily lead to OutOfMemory exception as a CacheStore -->
<!-- should be enabled, and maybe distribution is desired. -->
<!-- ******************************************************************************* -->
<!-- *************************************** -->
<!-- Cache to store Lucene's file metadata -->
<!-- *************************************** -->
<namedCache
name="LuceneIndexesMetadata">
<clustering
mode="replication">
<stateTransfer
fetchInMemoryState="true" />
<sync
replTimeout="25000" />
</clustering>
</namedCache>
<!-- **************************** -->
<!-- Cache to store Lucene data -->
<!-- **************************** -->
<namedCache
name="LuceneIndexesData">
<clustering
mode="replication">
<stateTransfer
fetchInMemoryState="true" />
<sync
replTimeout="25000" />
</clustering>
<loaders shared="false">
<loader
class="org.infinispan.loaders.file.FileCacheStore"
fetchPersistentState="true" ignoreModifications="false"
purgeOnStartup="false">
<properties>
<property name="location" value="${java.io.tmpdir}" />
</properties>
</loader>
</loaders>
</namedCache>
<!-- ***************************** -->
<!-- Cache to store Lucene locks -->
<!-- ***************************** -->
<namedCache
name="LuceneIndexesLocking">
<clustering
mode="replication">
<stateTransfer
fetchInMemoryState="true" />
<sync
replTimeout="25000" />
</clustering>
</namedCache>
</infinispan>
When I run this in my container (JBoss 7.1) I can see this in the logs:
Code:
15:01:23,921 INFO [org.hibernate.search.Version] (MSC service thread 1-6) HSEARCH000034: Hibernate Search 4.2.0.Final
15:01:24,359 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsChannelProvider] (MSC service thread 1-6) HSEARCH000005: Starting JGroups Channel
15:01:26,283 INFO [stdout] (MSC service thread 1-6)
15:01:26,284 INFO [stdout] (MSC service thread 1-6) -------------------------------------------------------------------
15:01:26,285 INFO [stdout] (MSC service thread 1-6) GMS: address=LDNLPT1271-PC-60880, cluster=CrisisHubCluster, physical address=172.26.10.169:55409
15:01:26,285 INFO [stdout] (MSC service thread 1-6) -------------------------------------------------------------------
15:01:29,306 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsMasterMessageListener] (MSC service thread 1-6) HSEARCH000015: Received new cluster view: [LDNLPT1271-PC-60880|0] [LDNLPT1271-PC-60880]
15:01:29,314 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsChannelProvider] (MSC service thread 1-6) HSEARCH000006: Connected to cluster [ CrisisHubCluster ]. The node address is LDNLPT1271-PC-60880
15:01:29,379 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000078: Starting JGroups Channel
15:01:29,380 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
15:01:33,380 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000094: Received new cluster view: [LDNLPT1271-PC-51512|0] [LDNLPT1271-PC-51512]
15:01:33,433 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000079: Cache local address is LDNLPT1271-PC-51512, physical addresses are [172.26.10.169:54867]
15:01:33,445 INFO [org.infinispan.factories.GlobalComponentRegistry] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.8.Final
15:01:33,485 INFO [org.infinispan.factories.TransactionManagerFactory] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000161: Using a batchMode transaction manager
15:01:33,539 INFO [org.infinispan.factories.TransactionManagerFactory] (CacheStartThread,CrisisHubCluster,LuceneIndexesData) ISPN000161: Using a batchMode transaction manager
15:01:33,546 INFO [org.infinispan.factories.TransactionManagerFactory] (CacheStartThread,CrisisHubCluster,LuceneIndexesLocking) ISPN000161: Using a batchMode transaction manager
15:01:33,563 INFO [org.infinispan.jmx.CacheJmxRegistration] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000031: MBeans were successfully registered to the platform mbean server.
15:01:33,564 INFO [org.infinispan.jmx.CacheJmxRegistration] (CacheStartThread,CrisisHubCluster,LuceneIndexesData) ISPN000031: MBeans were successfully registered to the platform mbean server.
15:01:33,564 INFO [org.infinispan.jmx.CacheJmxRegistration] (CacheStartThread,CrisisHubCluster,LuceneIndexesLocking) ISPN000031: MBeans were successfully registered to the platform mbean server.
It looks like there are two clusters starting up, one here:
Code:
15:01:24,359 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsChannelProvider] (MSC service thread 1-6) HSEARCH000005: Starting JGroups Channel
And one here:
Code:
15:01:29,379 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000078: Starting JGroups Channel
The second one has this message following it:
"Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!"
What's happening here? Why are there two clusters, and why is one starting up with the default configuration?
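For reference, my current (possibly wrong) reading of the properties is that they configure two independent JGroups channels, which would explain the two start-ups:
Code:
```xml
<!-- Channel 1: the Hibernate Search JGroups backend (the HSEARCH000005 line).
     As far as I can tell, this is the only channel that reads the
     hibernate.search.services.jgroups.* keys. -->
<prop key="hibernate.search.services.jgroups.configurationFile">jgroups-config.xml</prop>

<!-- Channel 2: Infinispan's own transport (the ISPN000078 line). It is configured
     inside infinispan-config.xml and never sees the properties above, which would
     explain the "provided in properties {}" / default-configuration warning. -->
<prop key="hibernate.search.infinispan.configuration_resourcename">infinispan-config.xml</prop>
```
If that reading is right, the second channel falling back to the default stack would be expected unless the `<transport>` element itself points at a stack file. Is that correct?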
If I then modify the Infinispan config file to point to my JGroups config file, like so:
Code:
<transport clusterName="CrisisHubCluster">
    <!-- Point to the custom jgroups config -->
    <properties>
        <property name="configurationFile" value="jgroups-config.xml"/>
    </properties>
    <!-- Note that the JGroups transport uses sensible defaults if no configuration
         property is defined. See the JGroupsTransport javadocs for more flags -->
</transport>
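My guess (untested, and the alternative cluster name below is something I made up) is that once both channels share the same stack and the same clusterName, they join a single JGroups group, so Infinispan's messages end up being delivered to Hibernate Search's listener. If so, would giving the Infinispan transport its own cluster name keep the two channels apart?
Code:
```xml
<!-- Hypothetical variant: a separate cluster name for the Infinispan transport,
     so it cannot join the same group as the Hibernate Search backend channel -->
<transport clusterName="CrisisHubCluster-Infinispan">
    <properties>
        <property name="configurationFile" value="jgroups-config.xml"/>
    </properties>
</transport>
```
Or is sharing one cluster between the two channels the intended setup?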
Then I end up with a bunch of stack traces in the logs:
Code:
15:38:54,448 INFO [org.hibernate.search.Version] (MSC service thread 1-5) HSEARCH000034: Hibernate Search 4.2.0.Final
15:38:54,879 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsChannelProvider] (MSC service thread 1-5) HSEARCH000005: Starting JGroups Channel
15:38:56,763 INFO [stdout] (MSC service thread 1-5)
15:38:56,763 INFO [stdout] (MSC service thread 1-5) -------------------------------------------------------------------
15:38:56,764 INFO [stdout] (MSC service thread 1-5) GMS: address=LDNLPT1271-PC-59322, cluster=CrisisHubCluster, physical address=172.26.10.169:62158
15:38:56,765 INFO [stdout] (MSC service thread 1-5) -------------------------------------------------------------------
15:38:59,784 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsMasterMessageListener] (MSC service thread 1-5) HSEARCH000015: Received new cluster view: [LDNLPT1271-PC-59322|0] [LDNLPT1271-PC-59322]
15:38:59,786 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsChannelProvider] (MSC service thread 1-5) HSEARCH000006: Connected to cluster [ CrisisHubCluster ]. The node address is LDNLPT1271-PC-59322
15:38:59,849 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000078: Starting JGroups Channel
15:39:01,072 INFO [stdout] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata)
15:39:01,072 INFO [stdout] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) -------------------------------------------------------------------
15:39:01,072 INFO [stdout] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) GMS: address=LDNLPT1271-PC-27204, cluster=CrisisHubCluster, physical address=172.26.10.169:62159
15:39:01,073 INFO [stdout] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) -------------------------------------------------------------------
15:39:01,144 INFO [org.hibernate.search.backend.impl.jgroups.JGroupsMasterMessageListener] (Incoming-1,CrisisHubCluster,LDNLPT1271-PC-59322) HSEARCH000015: Received new cluster view: [LDNLPT1271-PC-59322|1] [LDNLPT1271-PC-59322, LDNLPT1271-PC-27204]
15:39:01,154 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000094: Received new cluster view: [LDNLPT1271-PC-59322|1] [LDNLPT1271-PC-59322, LDNLPT1271-PC-27204]
15:39:01,216 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000079: Cache local address is LDNLPT1271-PC-27204, physical addresses are [172.26.10.169:62159]
15:39:01,223 INFO [org.infinispan.factories.GlobalComponentRegistry] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000128: Infinispan version: Infinispan 'Brahma' 5.1.8.Final
15:39:01,250 INFO [org.infinispan.factories.TransactionManagerFactory] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000161: Using a batchMode transaction manager
15:39:01,293 INFO [org.infinispan.factories.TransactionManagerFactory] (CacheStartThread,CrisisHubCluster,LuceneIndexesData) ISPN000161: Using a batchMode transaction manager
15:39:01,300 INFO [org.infinispan.factories.TransactionManagerFactory] (CacheStartThread,CrisisHubCluster,LuceneIndexesLocking) ISPN000161: Using a batchMode transaction manager
15:39:01,320 INFO [org.infinispan.jmx.CacheJmxRegistration] (CacheStartThread,CrisisHubCluster,LuceneIndexesMetadata) ISPN000031: MBeans were successfully registered to the platform mbean server.
15:39:01,327 INFO [org.infinispan.jmx.CacheJmxRegistration] (CacheStartThread,CrisisHubCluster,LuceneIndexesData) ISPN000031: MBeans were successfully registered to the platform mbean server.
15:39:01,325 INFO [org.infinispan.jmx.CacheJmxRegistration] (CacheStartThread,CrisisHubCluster,LuceneIndexesLocking) ISPN000031: MBeans were successfully registered to the platform mbean server.
15:39:01,362 ERROR [org.jgroups.protocols.UNICAST] (OOB-2,CrisisHubCluster,LDNLPT1271-PC-59322) couldn't deliver OOB message [dst: LDNLPT1271-PC-59322, src: LDNLPT1271-PC-27204 (3 headers), size=70 bytes, flags=OOB|DONT_BUNDLE|RSVP]: java.lang.NullPointerException
at org.hibernate.search.backend.impl.jgroups.JGroupsMasterMessageListener.receive(JGroupsMasterMessageListener.java:70) [hibernate-search-engine-4.2.0.Final.jar:4.2.0.Final]
at org.jgroups.JChannel.invokeCallback(JChannel.java:758) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.JChannel.up(JChannel.java:719) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1026) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.pbcast.FLUSH.up(FLUSH.java:441) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.pbcast.STATE_TRANSFER.up(STATE_TRANSFER.java:178) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.FRAG2.up(FRAG2.java:181) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:418) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.FlowControl.up(FlowControl.java:400) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.pbcast.GMS.up(GMS.java:889) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.pbcast.STABLE.up(STABLE.java:244) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.UNICAST.handleDataReceived(UNICAST.java:618) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.UNICAST.up(UNICAST.java:317) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.pbcast.NAKACK.up(NAKACK.java:602) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.VERIFY_SUSPECT.up(VERIFY_SUSPECT.java:143) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.FD_ALL.up(FD_ALL.java:177) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.FD_SOCK.up(FD_SOCK.java:288) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.MERGE2.up(MERGE2.java:205) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.Discovery.up(Discovery.java:359) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.TP.passMessageUp(TP.java:1185) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.TP$IncomingPacket.handleMyMessage(TP.java:1733) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at org.jgroups.protocols.TP$IncomingPacket.run(TP.java:1715) [jgroups-3.0.14.Final.jar:3.0.14.Final]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_17]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_17]
at java.lang.Thread.run(Thread.java:722) [rt.jar:1.7.0_17]
Given the docs, I'm unsure which setup is "correct", why there are two clusters starting up, or what the stack trace after the config change actually means.