Forum locked - This topic is locked; you cannot edit posts or make further replies. [ 4 posts ]
 Post subject: Clustered OSCache - comments?
PostPosted: Wed Jul 28, 2004 5:20 pm 
Beginner

Joined: Tue Aug 26, 2003 6:24 pm
Posts: 45
Hi Folks -
I've been searching for confirmation that we can reliably use OSCache clustering. We're currently on Hibernate 2.1.2, but will be upgrading to 2.1.4 shortly.

We would like clustered invalidation, NOT replication, since it is likely (but not guaranteed) that data accessed on one app server will not be accessed on another. For us, replication would in most cases just make every app server load data it never reads.

Also note that we are using JTA transactions via WeblogicTransactionManagerLookup, and our caches (currently ehcache) are set to "nonstrict-read-write" usage.
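For context, the relevant bits of our configuration look roughly like this (a sketch, not a verbatim excerpt from our app - the lookup and provider class names are the Hibernate 2.1 built-ins, and com.example.Product is a made-up entity):

```xml
<!-- hibernate.cfg.xml (sketch) -->
<property name="hibernate.transaction.manager_lookup_class">net.sf.hibernate.transaction.WeblogicTransactionManagerLookup</property>
<property name="hibernate.cache.provider_class">net.sf.hibernate.cache.EhCacheProvider</property>

<!-- in the .hbm.xml mapping for each cached class (hypothetical entity) -->
<class name="com.example.Product" table="PRODUCT">
    <cache usage="nonstrict-read-write"/>
    <id name="id" column="ID"><generator class="native"/></id>
    ...
</class>
```

Swapping cache implementations would just mean changing the provider_class line (e.g. to net.sf.hibernate.cache.OSCacheProvider); the per-class usage settings stay the same.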

I've seen various posts (search for "oscache cluster" in the forums) suggesting that OSCache should NOT be used as a clustered cache.

I've checked out SwarmCache, and its implementation does not fully support "regions" - that is, a separate cache with a separate configuration for each cacheable class. For example, if I specify swarmcache.lru.size=1001 in my swarmcache.properties, that value is used for ALL regions.
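To make that concrete: SwarmCache is driven by a single global properties file, and there is no per-region key you can set (the value below is just the one from my setup):

```properties
# swarmcache.properties - one global file; this value applies to EVERY region.
# There is no per-class/per-region override syntax.
swarmcache.lru.size=1001
```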

OSCache seems to have better support for regions with this type of configuration.

From looking at the OSCache and OSCacheProvider code, I'm guessing it is safe enough to use clustering with nonstrict-read-write, but NOT with read-write. Is this the basis for the recommendation against using OSCache in clusters? If not, can anyone give more details here?


At some point, we may cluster http sessions in our app servers, but until we do this, I would prefer a "clustered invalidation" approach in a clustered cache, which is why I haven't checked into JBoss TreeCache...

Of course, my other option would be to update swarm cache to enhance its configuration options to support region-specific configs...

Thanks for any info!
Tyson


 Post subject:
PostPosted: Thu Jul 29, 2004 4:47 pm 
Regular

Joined: Fri Dec 12, 2003 2:09 pm
Posts: 84
Location: San Francisco, USA
I assume the JBoss TreeCache is out of the question? Or commercial options like Tangosol or GigaSpace (see the wiki for more info).

Out of curiosity, how did EHCache perform for you in a single-VM environment? We're deciding which cache implementation to use in our app, which doesn't need to be clustered (yet), and EHCache seems to be the easiest (and sufficiently capable) place to start.


 Post subject:
PostPosted: Thu Jul 29, 2004 4:53 pm 
Beginner

Joined: Tue Aug 26, 2003 6:24 pm
Posts: 45
We are currently using EHCache, and it works great.

My only complaint is VERY minor: if you do not explicitly configure a cache for a specific region (i.e. class name [+ collection name]), you will get a warning during startup. I don't really think this should be logged at "warning" level - "info" or maybe even "debug" would be more appropriate.
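For anyone who hits that warning: it goes away if you declare the region explicitly in ehcache.xml instead of letting it fall back to defaultCache. Roughly like this (a sketch - the region name is made up, and the attribute values are just illustrative):

```xml
<ehcache>
    <!-- Fallback used for any region without an explicit entry;
         falling back to this is what triggers the startup warning. -->
    <defaultCache maxElementsInMemory="1000" eternal="false"
                  timeToIdleSeconds="120" timeToLiveSeconds="120"
                  overflowToDisk="false"/>

    <!-- One <cache> per cached class (or collection role); the name
         must match the region name Hibernate uses, i.e. the class name. -->
    <cache name="com.example.Product" maxElementsInMemory="5000" eternal="false"
           timeToLiveSeconds="600" overflowToDisk="false"/>
</ehcache>
```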

Anyhow, it's not a problem at all as far as usage goes.

tyson


 Post subject:
PostPosted: Thu Jul 29, 2004 5:44 pm 
Regular

Joined: Fri Dec 12, 2003 2:09 pm
Posts: 84
Location: San Francisco, USA
Thanks for the reply. That gives me a little more confidence going into the evaluation.


© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.