 Post subject: Hibernate Search & Infinispan Performance issue
PostPosted: Tue Oct 18, 2011 1:23 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Hi there -

I have a Spring-based application using Hibernate 3.6.6, Hibernate Search 3.4.1 and Infinispan 4.2.1 running on JBoss EAP 5. We are indexing a lot of data, probably around 20GB of indexes after we've mass-indexed. The application also has a number of JMS listeners using Spring's Message-Driven POJO pattern that receive notices when a particular entity has been updated. The data is updated by a Talend data transformation job fed from an external system; our app provides a more robust search solution over that data.

The problem I'm running into is that the JMS listeners cannot work fast enough to keep up with the data coming in from the external system. When I switch the directory provider to a filesystem provider, it chews through the JMS messages like crazy. When I'm using Infinispan as the DirectoryProvider, performance drops significantly. I ran JProfiler against it, and the only problem area I can see is the readBytes method inside Infinispan.

So far we've tried increasing the chunk size to 10MB from the default of 16KB and raising our eviction maxEntries to 1000 from 500. We saw a small increase in search performance, but it still tanks on the incoming messages. Here is our Infinispan configuration file:

Code:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="urn:infinispan:config:4.2 http://www.infinispan.org/schemas/infinispan-config-4.2.xsd"
   xmlns="urn:infinispan:config:4.2">

   <!-- *************************** -->
   <!-- System-wide global settings -->
   <!-- *************************** -->

   <global>

      <!-- Duplicate domains are allowed so that multiple deployments with default
         configuration of Hibernate Search applications work - if possible it would
         be better to use JNDI to share the CacheManager across applications -->
      <globalJmxStatistics enabled="true"
         cacheManagerName="HibernateSearch" allowDuplicateDomains="true"
         mBeanServerLookup="org.infinispan.jmx.JBossMBeanServerLookup" />

      <!-- If the transport is omitted, there is no way to create distributed
         or clustered caches. There is no added cost to defining a transport but not
         creating a cache that uses one, since the transport is created and initialized
         lazily. -->
      <transport
         clusterName="${mpsearch.infinispan.cluster.prefix}-HibernateSearch-Infinispan-Cluster"
         distributedSyncTimeout="50000">
         <!-- Note that the JGroups transport uses sensible defaults if no configuration
            property is defined. See the JGroupsTransport javadocs for more flags -->
         <properties>
            <!-- TODO: Change to udp.xml once streaming transfer requirement has
               been removed. -->
            <property name="configurationFile" value="jgroups-udp.xml" />
         </properties>
      </transport>

      <!-- Used to register JVM shutdown hooks. hookBehavior: DEFAULT, REGISTER,
         DONT_REGISTER. Hibernate Search takes care to stop the CacheManager so registering
         is not needed -->
      <shutdown hookBehavior="DONT_REGISTER" />

   </global>

   <!-- *************************** -->
   <!-- Default "template" settings -->
   <!-- *************************** -->

   <default>

      <locking lockAcquisitionTimeout="20000" writeSkewCheck="false"
         concurrencyLevel="500" useLockStriping="false" />

      <lazyDeserialization enabled="false" />

      <!-- Invocation batching is required for use with the Lucene Directory -->
      <invocationBatching enabled="true" />

      <!-- This element specifies that the cache is clustered. modes supported:
         distribution (d), replication (r) or invalidation (i). Don't use invalidation
         to store Lucene indexes (as with Hibernate Search DirectoryProvider). Replication
         is recommended for best performance of Lucene indexes, but make sure you
         have enough memory to store the index in your heap. Also distribution scales
         much better than replication on high number of nodes in the cluster. -->
      <clustering mode="replication">

         <!-- Prefer loading all data at startup than later -->
         <stateRetrieval timeout="20000" logFlushTimeout="30000"
            fetchInMemoryState="true" alwaysProvideInMemoryState="true" />

         <!-- Network calls are synchronous by default -->
         <sync replTimeout="20000" />
      </clustering>

      <jmxStatistics enabled="true" />

      <eviction maxEntries="-1" strategy="NONE" />

      <expiration maxIdle="-1" />

   </default>

   <!-- ******************************************************************************* -->
   <!-- Individually configured "named" caches. -->
   <!-- -->
   <!-- While default configuration happens to be fine with similar settings
      across the -->
   <!-- three caches, they should generally be different in a production environment. -->
   <!-- -->
   <!-- Current settings could easily lead to OutOfMemory exception as a CacheStore -->
   <!-- should be enabled, and maybe distribution is desired. -->
   <!-- ******************************************************************************* -->

   <!-- *************************************** -->
   <!-- Cache to store Lucene's file metadata -->
   <!-- *************************************** -->
   <namedCache name="LuceneIndexesMetadata">
      <clustering mode="replication">
         <stateRetrieval fetchInMemoryState="true"
            logFlushTimeout="30000" />
         <sync replTimeout="25000" />
      </clustering>

      <loaders passivation="false" shared="false" preload="false">
         <loader class="org.infinispan.loaders.jdbm.JdbmCacheStore"
            fetchPersistentState="true" purgeOnStartup="false">

            <properties>
               <property name="location"
                  value="${mpsearch.infinispan.search.passivation.dir}/LuceneIndexesMetadata" />
            </properties>
         </loader>
      </loaders>
   </namedCache>

   <!-- **************************** -->
   <!-- Cache to store Lucene data -->
   <!-- **************************** -->
   <namedCache name="LuceneIndexesData">
      <eviction wakeUpInterval="5000" maxEntries="1000" strategy="LIRS" />

      <clustering mode="replication">
         <stateRetrieval fetchInMemoryState="true"
            logFlushTimeout="30000" />
         <sync replTimeout="25000" />
      </clustering>

      <loaders passivation="false" shared="false" preload="false">
         <loader class="org.infinispan.loaders.jdbm.JdbmCacheStore"
            fetchPersistentState="true" purgeOnStartup="false">

            <properties>
               <property name="location"
                  value="${mpsearch.infinispan.search.passivation.dir}/LuceneIndexesData" />
            </properties>
         </loader>
      </loaders>
   </namedCache>

   <!-- ***************************** -->
   <!-- Cache to store Lucene locks -->
   <!-- ***************************** -->
   <namedCache name="LuceneIndexesLocking">
      <clustering mode="replication">
         <stateRetrieval fetchInMemoryState="true"
            logFlushTimeout="30000" />
         <sync replTimeout="25000" />
      </clustering>
   </namedCache>
</infinispan>
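
For reference, the chunk size is configured on the Hibernate Search side rather than in this file. A minimal sketch of our properties (property names as I recall them from the Hibernate Search 3.4 docs; the configuration file name here is just ours):

Code:
# use the Infinispan directory provider for all indexes
hibernate.search.default.directory_provider = infinispan
hibernate.search.infinispan.configuration_resourcename = infinispan-config.xml
# 10MB chunks instead of the 16KB default (value in bytes)
hibernate.search.default.chunk_size = 10485760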


We're still stuck using a single node right now, although we have proven that replication works. What are the recommended settings for eviction and chunk size in the Hibernate Search config with Infinispan? Any ideas I can try?

Thanks much everyone.


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Wed Oct 19, 2011 5:18 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
The problem I'm running into is that the JMS listeners cannot work fast enough to keep up with the data coming in from the external system. When I switch the directory provider to a filesystem provider, it chews through the JMS messages like crazy.

You mean performance is good enough, right?

Quote:
When I'm using Infinispan as the DirectoryProvider the performance drops significantly.

That's very unexpected, as in all the tests we ran the Infinispan directory was faster; of course we might not have done enough tests, but it's still surprising. Consider as well that the filesystem one might look faster initially because your OS may be buffering writes rather than syncing them to disk.

Quote:
I ran JProfiler against it and can only see the problem area as being the readBytes method inside of Infinispan.

The readBytes in which class? Could you describe the frames you see, or share some screenshots or a JProfiler snapshot?

Quote:
So far we've tried increasing the chunk size to 10MB from the default of 16KB and raising our eviction maxEntries to 1000 from 500. We saw a small increase in search performance

Yes, that's very reasonable. I've been considering defaulting to 7MB as a better starting point instead of 16KB, but unless people configure their network stack for such large values that might be troublesome, as you would get lots of fragmentation in network packets. You can also experiment with the Lucene parameters, like those at http://docs.jboss.org/hibernate/stable/ ... parameters . These apply to both the filesystem and Infinispan directories, but some parameters might benefit from different tuning: in the Infinispan case it's generally a good idea to prevent chunking from happening by using a low enough max_merge_docs, for example. Still, this is supposed to affect mostly query performance.
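
A minimal sketch of what tuning those IndexWriter parameters looks like in hibernate.properties (Hibernate Search 3.x property style; the values here are illustrative, not recommendations):

Code:
# keep segments small enough that index files stay under the chunk size
hibernate.search.default.indexwriter.transaction.max_merge_docs = 10000
hibernate.search.default.indexwriter.transaction.merge_factor = 10
# RAM buffer for the IndexWriter, in MB
hibernate.search.default.indexwriter.transaction.ram_buffer_size = 64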

Quote:
Here is our Infinispan configuration file: [...]

What caught my attention is that you're using a JdbmCacheStore, which is not well known for its performance. Could you try a run without any CacheStore? If we find that's the cause, you could then try different cache stores and configure them as async cache stores: you're still going to be fault tolerant in a clustered configuration because, while you write to disk asynchronously, the values are replicated synchronously to the other nodes.
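
A sketch of what write-behind looks like in your 4.2 configuration, reusing your existing loader (the <async> element is from the Infinispan 4.x schema; the thread pool size is just an example):

Code:
<loaders passivation="false" shared="false" preload="false">
   <loader class="org.infinispan.loaders.jdbm.JdbmCacheStore"
      fetchPersistentState="true" purgeOnStartup="false">
      <properties>
         <property name="location"
            value="${mpsearch.infinispan.search.passivation.dir}/LuceneIndexesData" />
      </properties>
      <!-- persist to the store from background threads instead of the caller -->
      <async enabled="true" threadPoolSize="10" />
   </loader>
</loaders>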

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Wed Oct 19, 2011 5:22 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Quote:
You mean performance is good enough, right?

Running the standard Hibernate Search file-based DirectoryProvider, the JMS throughput is great. It's when we switch to the InfinispanDirectoryProvider with the file store that we run into the throughput issues.

Quote:
That's very unexpected as in all tests we did the Infinispan one is faster

I believe it was faster when we used the FileCacheStore in Infinispan. Using a more reliable store like BDBJE or JDBM has been slower.

Quote:
The readBytes in which class? could you describe the frames you see, or share some screenshots or JProfiler snapshot?

I don't have a snapshot, but I can easily make one. I can't remember specifically where the method was; I just remember it being in the Infinispan code. This is what led us to increase the chunk size and set eviction to a higher maxEntries.

Quote:
...generally a good idea to prevent chunking from happening by using a low enough max_merge_docs, for example.

I don't think we've messed with the max_merge_docs parameter yet.

Quote:
What caught my attention is that you're using a JdbmCacheStore, which is not well known for its performance. Could you try a run without any CacheStore? If we find that's the cause, you could then try different cache stores and configure them as async cache stores: you're still going to be fault tolerant in a clustered configuration because, while you write to disk asynchronously, the values are replicated synchronously to the other nodes.

We will try a run without a CacheStore at all and see what the performance is like. The drawback, however, is that our indexes are around 20GB, so not having a store behind them means they aren't persistent. But for testing purposes we'll try it and see if there's a difference. We did turn on async with 40 threads and saw a small increase in performance for a short burst, but then it seemed to hang once those 40 threads got busy. We may need more threads, as we have 200+ JMS listener threads.

Ultimately we want to upgrade to JBoss 7, but being stuck on JBoss EAP 5 has made going past Infinispan 4 very difficult. I spent almost a day trying to get it working but ran into problems with the JBoss log manager being a pain. That's another post, but it's why we're on Infinispan 4, not 5.

We'll try different CacheStores. Which one do you recommend? Have you had any experience with in-memory or similar databases behind the JDBC CacheStore? Our servers have a lot of storage compared to our Oracle RAC instance, so we'd have to stand up a standalone database for JDBC. We tried Sleepycat and were very disappointed with the record-locking issues and the frequent corruption during synchronization.


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Wed Oct 19, 2011 6:32 pm 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
For faster CacheLoader performance you could try the Cassandra cache store, or the FileCacheStore. The FileCacheStore is usually not recommended, though I'm not sure why; I think it's considered a very basic implementation and we're not sure how thoroughly it has been tested. Maybe you can test it more, and if you're able to identify issues we'll fix them; it has the nice advantage that its code is quite simple. That's the process for making it stable: it needs a motivated user to carefully try it out ;)

How much slower are BDBJE or JDBM relative to your target need? Did you consider that once you actually have multiple Infinispan nodes, the write activity will be spread across them? Assuming replication=2, if you have 6 nodes then only 2 of them will contain a given key, and only those nodes will write it to the CacheLoader; in theory, assuming the hash distribution is not broken, that makes for an average of 33% of the load on each machine, or 3x throughput: linear scalability.

Quote:
Have you had any experience with in-memory or alike dbs with the JDBC CacheStore?

What's the point of adding another in-memory store? If you have more memory, keep more values in Infinispan by having it evict less aggressively, and enable passivation so that only what can't stay in memory overflows to disk; it can also be clever about it, passivating the less-frequently-used segments first.

Personally I've had good results with PostgreSQL or MySQL instances on each local node; with a good DBA they can be tuned pretty well, and using a real database makes backups easy.
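
As a sketch under those assumptions, a JDBC-backed loader in your 4.2 configuration could look roughly like this (class and property names are from the Infinispan JDBC cache store; the table, column and connection values are placeholders to adapt):

Code:
<loader class="org.infinispan.loaders.jdbc.stringbased.JdbcStringBasedCacheStore"
   fetchPersistentState="true" purgeOnStartup="false">
   <properties>
      <!-- maps the Lucene directory keys to strings for the key column -->
      <property name="key2StringMapperClass"
         value="org.infinispan.lucene.LuceneKey2StringMapper" />
      <property name="stringsTableNamePrefix" value="ISPN_LUCENE" />
      <property name="idColumnName" value="ID" />
      <property name="idColumnType" value="VARCHAR(255)" />
      <property name="dataColumnName" value="DATA" />
      <property name="dataColumnType" value="BYTEA" />
      <property name="timestampColumnName" value="TS" />
      <property name="timestampColumnType" value="BIGINT" />
      <property name="connectionFactoryClass"
         value="org.infinispan.loaders.jdbc.connectionfactory.PooledConnectionFactory" />
      <property name="connectionUrl" value="jdbc:postgresql://localhost/lucene" />
      <property name="driverClass" value="org.postgresql.Driver" />
      <property name="userName" value="infinispan" />
   </properties>
</loader>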

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Wed Oct 19, 2011 7:12 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Quote:
For faster CacheLoader performance you could try the Cassandra cachestore, or FileCacheStore...

That's pretty much where I'm leaning as well, since it worked decently. When we originally used the FileCacheStore, our chunk size was at the default 16KB. I suspect it will synchronize better across our nodes with the bigger chunk size, along with better search performance.
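
For reference, the FileCacheStore loader we had used is about as simple as it gets (recalled from memory; the location value is ours):

Code:
<loader class="org.infinispan.loaders.file.FileCacheStore"
   fetchPersistentState="true" purgeOnStartup="false">
   <properties>
      <property name="location"
         value="${mpsearch.infinispan.search.passivation.dir}/LuceneIndexesData" />
   </properties>
</loader>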

Quote:
What's the point of adding another in-memory store?

I meant experience with ad hoc databases, not necessarily in-memory :). Preferably the db would live outside the JVM, but bringing up another db process on each node to hold the indexes is a potential limitation of our infrastructure (and business politics).

Quote:
If you have more memory, keep more values in Infinispan by having it evict less aggressively, and enable passivation...

I didn't want to enable passivation because I wanted to ensure the contents of the caches in memory match what's in the store on disk, so that if the system is shut down (gracefully or not) the indexes remain in a good state for the next startup. Is that a proper assumption about how passivation works?


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Wed Oct 19, 2011 7:33 pm 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
Quote:
I didn't want to enable passivation because I wanted to ensure the contents of the caches in memory match what's in the store on disk, so that if the system is shut down (gracefully or not) the indexes remain in a good state for the next startup. Is that a proper assumption about how passivation works?


Yes, a specific node might fail to save some data if it isn't shut down gracefully, but to be both resilient and scalable, the approach with Infinispan is that you shouldn't care about any single node. If a node fails, you can kill it and wipe its cache store; the remaining nodes will have enough replicas to rebuild the correct state, and will figure it out by starting a rehash process that rebalances the keys across the remaining nodes. When the node is started again, it will fetch its share of data from its peers.

This assumes, of course, that you don't kill ALL nodes at the same time, and that there is enough time between one kill/failure and the next, as the rehash process might take a while too. That's where the number of replicas becomes a factor: you don't lose data as long as fewer nodes than your number of replicas fail within the time needed for a rehash to complete.

In all cases it's wise to be prepared to rebuild the index from scratch; that's the only way to be 100% safe, and you might need it anyway when upgrading Lucene, since they sometimes change the index format and you might need to upgrade because of a bug. Since you need to be ready for that regardless, Infinispan's replication is usually considered safe enough for most cases.

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Tue Oct 25, 2011 1:33 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Quote:
This assumes, of course, that you don't kill ALL nodes at the same time...

What if I am deploying a new EAR/WAR to my cluster? I need to shut down every node gracefully to deploy the code, so in this scenario there will be a single server left at the end before it too is shut down. How is the state of the indexes guaranteed if all the other nodes are down when I shut down that final node?

I did switch back to a FileCacheStore and performance took a huge drop. I then switched back to the FSDirectoryProvider, eliminating Infinispan, and what took over 24 hours to mass index took 25 minutes with the default setup. I had async turned on in the FileCacheStore with 10 worker threads. Is it possible I'm evicting too much, or maybe too often?


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Tue Oct 25, 2011 1:43 pm 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
24 hours versus 25 minutes is a very significant difference, not something a few threads more or less could justify.

Did you monitor garbage collection? Do you have enough memory? It looks like it's starving on something; can you profile it and take a look?

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Tue Oct 25, 2011 2:01 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Here are some graphs I generated from our profiler for the period when I was indexing with Infinispan using the FileCacheStore and async workers:
[Image: garbage collections]

[Image: % memory used (heap and non-heap)]

We were garbage collecting throughout the whole process, but it didn't seem to be a significant portion of the time: on average, 300ms per minute for major collections, peaking at 3800ms during one hour-long segment.


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Tue Oct 25, 2011 2:07 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Also, I have the JVM set to
Code:
-Xms1303m -Xmx20480m -XX:MaxPermSize=512m -XX:+UseParallelGC
which I think should be enough. It was still not enough memory to hold all of the indexes without turning on eviction, though.
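
As an aside, GC pauses can also be captured alongside the profiler with the standard HotSpot logging flags (just a suggestion, not what we actually ran):

Code:
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log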


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Wed Oct 26, 2011 11:19 am 
Hibernate Team

Joined: Fri Oct 05, 2007 4:47 pm
Posts: 2536
Location: Third rock from the Sun
OK, so it seems it's not GC. Can you identify the hot spot?

_________________
Sanne
http://in.relation.to/


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Fri Oct 28, 2011 3:03 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Sorry for the delay; I've been swamped working on the project.

I cannot find a hotspot other than the profiler complaining about InfinispanIndexInput.readBytes taking a long time. I tried going async with the FileCacheStore, and every run with 1, 5, and 40 threads ended up throwing EOF exceptions:

Code:
2011-10-28 13:43:22,970 INFO  [MPSearchIndexingProgressMonitor] (Hibernate Search: indexwriter-11) Indexing speed: 1674.4457 documents/second; progress: 31.4875%
2011-10-28 13:43:23,262 ERROR [org.hibernate.search.exception.impl.LogErrorHandler] (Lucene Merge Thread #6) Exception occurred org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: Read past EOF

org.apache.lucene.index.MergePolicy$MergeException: java.io.IOException: Read past EOF
   at org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:517)
   at org.hibernate.search.backend.impl.lucene.overrides.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:49)
   at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
Caused by: java.io.IOException: Read past EOF
   at org.infinispan.lucene.SingleChunkIndexInput.readByte(SingleChunkIndexInput.java:77)
   at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:92)
   at org.apache.lucene.index.SegmentTermPositions.readDeltaPosition(SegmentTermPositions.java:75)
   at org.apache.lucene.index.SegmentTermPositions.nextPosition(SegmentTermPositions.java:71)
   at org.apache.lucene.index.SegmentMerger.appendPostings(SegmentMerger.java:591)
   at org.apache.lucene.index.SegmentMerger.mergeTermInfos(SegmentMerger.java:528)
   at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:463)
   at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:111)
   at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3938)
   at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3614)
   at org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:388)
   at org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:456)


I've also tried Cassandra, and it works great on my laptop, but when I deploy it to the Red Hat servers in our data center I eventually get an error:
Code:
2011-10-28 13:09:17,381 INFO  [MPSearchIndexingProgressMonitor] (Hibernate Search: indexwriter-9) Indexing speed: 477.06577 documents/second; progress: 1.0198741%
2011-10-28 13:09:18,046 ERROR [org.infinispan.loaders.AbstractCacheStore] (LuceneIndexesData-CassandraCacheStore-0) Problems encountered while purging expired
org.infinispan.loaders.CacheLoaderException: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
   at org.infinispan.loaders.cassandra.CassandraCacheStore.purgeInternal(CassandraCacheStore.java:445)
   at org.infinispan.loaders.AbstractCacheStore$2.run(AbstractCacheStore.java:84)
   at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
   at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
   at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:128)
   at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
   at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:314)
   at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:262)
   at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:192)
   at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:794)
   at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:777)
   at org.infinispan.loaders.cassandra.CassandraCacheStore.purgeInternal(CassandraCacheStore.java:443)
   ... 4 more
Caused by: java.net.SocketTimeoutException: Read timed out
   at java.net.SocketInputStream.socketRead0(Native Method)
   at java.net.SocketInputStream.read(SocketInputStream.java:129)
   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
   at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:126)
   ... 11 more

This run was synchronous, with the default configuration and eviction on (similar to my original post).

I'm kind of at a loss, since the performance is so tremendously different. We're supposed to have a Red Hat consultant in here Monday to help with tuning Infinispan. We'll see what he comes up with, and I'll post my findings here.


 Post subject: Re: Hibernate Search & Infinispan Performance issue
PostPosted: Fri Oct 28, 2011 3:05 pm 
Beginner

Joined: Fri Sep 02, 2011 12:13 pm
Posts: 20
Location: Milwaukee, WI USA
Also, the JdbmCacheStore gave us zero errors even with 20+GB in it; it was just slow.

