These old forums are deprecated now and set to read-only. We are waiting for you on our new forums!
More modern, Discourse-based and with GitHub/Google/Twitter authentication built-in.

All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 18 posts ]  Go to page 1, 2  Next
Author Message
 Post subject: 1st Level Cache Eviction by 2nd Level Cache (Swarmcache)
PostPosted: Thu Jul 15, 2004 4:30 am 
Newbie

Joined: Wed May 19, 2004 1:25 pm
Posts: 11
I'm using swarmcache as my 2nd level cache.

I've got a whole lot of distributed processes that only ever read from a session. These use JMS for bulk processing and only ever perform writes on stuff that isn't cached.

What I want is to have swarmcache 2nd level cache invalidation cause 1st level session cache invalidation.

Any ideas how to do this?


 Post subject: ... possibly...
PostPosted: Thu Jul 15, 2004 4:53 am 
Newbie

Joined: Wed May 19, 2004 1:25 pm
Posts: 11
Possibly, if I could get a list of everything that has been invalidated, I could manually perform the eviction myself? Messy. What I wouldn't want is for objects to be invalidated and detached from the session in the middle of some process.
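A minimal sketch of what that manual, safe-point eviction could look like, assuming Hibernate 2.1's `SessionFactory.evict(Class, Serializable)` and `Session.clear()`, and assuming (this is exactly the open question) that the cache layer can report which class/id pairs were invalidated. The `staleEntries` map is hypothetical:

```java
import java.io.Serializable;
import java.util.Iterator;
import java.util.Map;

import net.sf.hibernate.HibernateException;
import net.sf.hibernate.Session;
import net.sf.hibernate.SessionFactory;

public class SafePointEvictor {

    /**
     * Called only at a safe point, i.e. between JMS messages, never in the
     * middle of a unit of work. staleEntries maps entity Class -> Serializable
     * id (hypothetical; something would have to collect these from the cache).
     */
    public static void evictStale(SessionFactory factory, Session session,
                                  Map staleEntries) throws HibernateException {
        for (Iterator it = staleEntries.entrySet().iterator(); it.hasNext();) {
            Map.Entry e = (Map.Entry) it.next();
            // Drop the stale copy from the 2nd-level cache...
            factory.evict((Class) e.getKey(), (Serializable) e.getValue());
        }
        // ...and take the blunt-but-safe route for the 1st level: clear the
        // whole session so the next load() goes back to the database.
        session.clear();
    }
}
```

Clearing the whole session is coarser than evicting individual instances, but it avoids tracking which stale objects this particular session actually holds.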


 Post subject:
PostPosted: Thu Jul 15, 2004 6:14 am 
Hibernate Team

Joined: Mon Aug 25, 2003 9:11 pm
Posts: 4592
Location: Switzerland
It's not a good idea to evict something from your currently running unit of work from "below".

_________________
JAVA PERSISTENCE WITH HIBERNATE
http://jpwh.org
Get the book, training, and consulting for your Hibernate team.


 Post subject:
PostPosted: Thu Jul 15, 2004 8:02 am 
Newbie

Joined: Wed May 19, 2004 1:25 pm
Posts: 11
The problem is that I'm trying to keep a session open as long as possible.

Since this is a message-based (JMS) process, the unit of 'work' is hard to define... but at some point the required data may have changed due to some writer process somewhere else. Ideally I would use a 2nd-level invalidating cache that suits my needs, BUT this only invalidates the 2nd-level cache. Ideally I would, at some point between processing JMS messages, evict any objects that have been invalidated by the 2nd-level cache.


 Post subject:
PostPosted: Thu Jul 15, 2004 8:04 am 
Hibernate Team

Joined: Mon Aug 25, 2003 9:11 pm
Posts: 4592
Location: Switzerland
Still, your unit of work expects a consistent view on the data. I would rethink this. It's probably not something we can help you with on the forum.



 Post subject:
PostPosted: Thu Jul 15, 2004 8:04 am 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
Quote:
The problem is that I'm trying to keep a session open as long as possible.


why?

_________________
Anthony,
Get value thanks to your skills: http://www.redhat.com/certification


 Post subject: Options..
PostPosted: Thu Jul 15, 2004 8:50 am 
Newbie

Joined: Wed May 19, 2004 1:25 pm
Posts: 11
(a) I could open and close a session for each JMS message.

I think this would be inefficient if data changes infrequently and there are 200k-400k messages to process... possibly counterproductive to open and close 200k sessions. (I'm not certain of the cost of creating a session per connection, but I'm sure there must be better ways to do things.)

(b) I could open and close a session every minute (configurable; this is what I'm about to do).

(c) I could (hopefully) listen for 2nd-level invalidation events and open and close a session on data change. (Acceptable.)

(d) I could listen for 2nd-level invalidation events and evict from the 1st-level cache just the information that has changed. (Efficient and effective.)

Now... unlike a typical update process, where the transaction is a database transaction and it makes sense to open/close a session per scope... with a JMS system you're typically handling many more thousands of updates/movements within the system.

I would only need to open/close a session when data has actually changed, and even then it's preferable to evict just the particular piece of information (if possible). Obviously you want a consistent view of the DB, so there is no point in Hibernate evicting behind the scenes (bad voodoo), but it would be nice to be able to check - at a safe point, say, between JMS message processes - whether something has changed (in the 2nd-level cache).
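Option (b) can be sketched roughly as follows against the Hibernate 2.x API. The `RecyclingSessionHolder` name and its wiring are made up for illustration, and the JMS plumbing is elided:

```java
import net.sf.hibernate.HibernateException;
import net.sf.hibernate.Session;
import net.sf.hibernate.SessionFactory;

/** Sketch of option (b): reuse one session across many JMS messages, but
    recycle it after a configurable interval so stale 1st-level entries
    eventually age out. */
public class RecyclingSessionHolder {

    private final SessionFactory factory;
    private final long maxAgeMillis;
    private Session session;
    private long openedAt;

    public RecyclingSessionHolder(SessionFactory factory, long maxAgeMillis) {
        this.factory = factory;
        this.maxAgeMillis = maxAgeMillis;
    }

    /** Call once per JMS message, before touching any entities. */
    public Session currentSession() throws HibernateException {
        long now = System.currentTimeMillis();
        if (session == null || now - openedAt > maxAgeMillis) {
            if (session != null) {
                session.close();   // discard the stale 1st-level cache
            }
            session = factory.openSession();
            openedAt = now;
        }
        return session;
    }
}
```

Because the recycle happens in `currentSession()`, the session can only be swapped between messages, never in the middle of processing one.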


 Post subject:
PostPosted: Thu Jul 15, 2004 8:52 am 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
Creating a session is not expensive; keeping one alive a long time is maybe less expensive, but it is also unsafe...



 Post subject: ...
PostPosted: Thu Jul 15, 2004 10:25 am 
Newbie

Joined: Wed May 19, 2004 1:25 pm
Posts: 11
Quote:
Creating a session is not expensive; keeping one alive a long time is maybe less expensive, but it is also unsafe...


I understand, but in my use case I am not convinced.

In terms of DB access, the 2nd-level cache will absorb any shared operations. In terms of needlessly opening and closing 200k-400k sessions... I am certain that there must be a better alternative to open/close x 200k, when typically most data will never be modified (I say most), and evicting the 1st-level cache is only needed when the underlying data has changed.

In a way, the issue runs into how the 2nd-level cache works. I expect that the 2nd-level cache will hide updates made by applications that don't use Hibernate to interact with the database. The same issue exists between long-lived sessions and the 2nd-level cache.


 Post subject:
PostPosted: Thu Jul 15, 2004 10:27 am 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
Quote:
I expect that the 2nd-level cache will hide updates made by applications that don't use Hibernate to interact with the database


What do you mean? I don't understand this.



 Post subject: ...
PostPosted: Thu Jul 15, 2004 10:48 am 
Newbie

Joined: Wed May 19, 2004 1:25 pm
Posts: 11
Quote:
I expect that the 2nd-level cache will hide updates made by applications that don't use Hibernate to interact with the database



Read from DB (X = 1)
Put into Cache2 (X = 1)
Read from Session (read from Cache2) (X = 1)

meanwhile... some other JDBC app updates X=2

The 2nd level cache will contain X=1 until the cache decides that the value should be removed from the cache. (timeout, etc). This is correct semantics.

Other methods of indicating that an entry should be removed from the cache are SwarmCache invalidation, whatever TreeCache uses... and various plays on the timeout parameters of other non-clustered caches.

I am sure that other JDBC apps could update the DB and the 2nd-level cache would _not_ 'see' this change until its cached copy is timed out or removed for some reason. In SwarmCache's case, the copy will _never_ be removed... (there are probably timeout options I'm missing, so never is probably too strong a word).

Everything must use the same 2nd level cache or there are issues...

The same issue exists between 1st level cache and 2nd level cache.

If it were possible to know that something has changed in the DB then you could update the 2nd level cache. Likewise - If you could know that something changed in the 2nd level cache then you could make information available to the session (1st level cache).

It's obviously _bad_ to change stuff in the session cache without the application knowing. Transaction consistency, etc, etc. Which is why sessions should be short-lived.

... then I'd be able to make a long-lived session work.

gtg
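The scenario above can be modelled with a toy read-through cache, just to make the staleness concrete (plain Java, no Hibernate; all names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

/** Stand-in for the database; any JDBC client can write to it directly. */
class Database {
    private final Map<String, Integer> rows = new HashMap<>();
    void put(String key, int value) { rows.put(key, value); }
    int get(String key) { return rows.get(key); }
}

/** Read-through cache with no invalidation, like a 2nd-level cache that
    never hears about external JDBC writes. */
class ReadThroughCache {
    private final Database db;
    private final Map<String, Integer> entries = new HashMap<>();
    ReadThroughCache(Database db) { this.db = db; }

    int get(String key) {
        Integer cached = entries.get(key);
        if (cached == null) {
            cached = db.get(key);   // miss: load from the database
            entries.put(key, cached);
        }
        return cached;
    }

    void evict(String key) { entries.remove(key); }
}

public class StaleReadDemo {
    public static void main(String[] args) {
        Database db = new Database();
        db.put("X", 1);
        ReadThroughCache cache = new ReadThroughCache(db);

        System.out.println(cache.get("X"));  // prints 1: loaded into the cache

        db.put("X", 2);                      // some other JDBC app updates X
        System.out.println(cache.get("X"));  // prints 1: the stale copy wins

        cache.evict("X");                    // an invalidation arrives
        System.out.println(cache.get("X"));  // prints 2: reloaded from the DB
    }
}
```

The `evict` call is the missing link being discussed: without some external signal to trigger it, the stale copy is served indefinitely.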


 Post subject:
PostPosted: Thu Jul 15, 2004 10:56 am 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
Quote:
Read from DB (X = 1)
Put into Cache2 (X = 1)
Read from Session (read from Cache2) (X = 1)

meanwhile... some other JDBC app updates X=2

The 2nd level cache will contain X=1 until the cache decides that the value should be removed from the cache. (timeout, etc). This is correct semantics.


Generally you don't use a cache on systems updated by more than one app (i.e. webApp1 + batch + trigger + webApp2).
You have 3 choices:
1. Force the other apps to update the cache --> I think this is impossible or extremely difficult.
2. Know which objects can be updated by other apps and don't cache them (this is what I do if I really need to cache the other objects).
3. Forget the cache, to avoid stale data.



 Post subject: Too much overhead per JMS message...
PostPosted: Fri Jul 16, 2004 12:38 pm 
Newbie

Joined: Wed May 19, 2004 1:25 pm
Posts: 11
Too much overhead doing a session per JMS message...

If I keep the session alive for a minute's worth of processing, things are a lot quicker compared to keeping a session alive per JMS message transaction.

The problem is obviously now a balance against the cost of rebuilding the 1st-level cache. If a minute, why not three?

I'm going to try to create a SwarmCache for the particular entities in the same app process, then catch invalidations and evict from the 1st level appropriately... (messy, bad voodoo, just a test).


 Post subject:
PostPosted: Fri Jul 16, 2004 4:09 pm 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
Quote:
Read from DB (X = 1)
Put into Cache2 (X = 1)
Read from Session (read from Cache2) (X = 1)

meanwhile... some other JDBC app updates X=2

The 2nd level cache will contain X=1 until the cache decides that the value should be removed from the cache. (timeout, etc). This is correct semantics.


You mean you never update your objects using the Hibernate session?
You're only using it for reads?
Try select new()... --> it will avoid all the overhead needed for the persistence features.

Set flushmode=never...

But your design and use of cache/JDBC/Hibernate seems very strange, sorry to say it.
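Those two suggestions, sketched together against the Hibernate 2.x API (the `ItemSummary`/`Item` names are made up; `select new` returns plain value objects rather than managed entities, and `FlushMode.NEVER` disables automatic flushing for a read-only session):

```java
import java.util.List;

import net.sf.hibernate.FlushMode;
import net.sf.hibernate.HibernateException;
import net.sf.hibernate.Session;

public class ReadOnlyQueries {

    /** Read-only usage: no dirty checking, no managed instances. */
    public static List loadSummaries(Session session) throws HibernateException {
        session.setFlushMode(FlushMode.NEVER);   // skip automatic flushing
        // 'select new' constructs plain value objects; ItemSummary is a
        // hypothetical class with a matching (String, int) constructor.
        return session.find(
            "select new ItemSummary(item.name, item.quantity) from Item item");
    }
}
```

Since the returned objects are never attached to the session, the 1st-level cache stays empty and the staleness problem discussed above largely disappears for these reads.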



 Post subject:
PostPosted: Tue Sep 07, 2004 3:45 pm 
Newbie

Joined: Tue Sep 07, 2004 3:37 pm
Posts: 1
This doesn't seem like an odd request at all. I'm writing an app that has legacy code updating the database via JDBC. I can't afford not to cache this data, as it is used extensively throughout the Hibernate-based application. Whenever the database has been updated, I get a JMS message that tells me so. I'd like an easy way to refresh the 2nd-level cache when this happens. I'm not really concerned about the session cache in my case, but the principle is the same.
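That refresh hook could be sketched like this with Hibernate 2.x, assuming the JMS notification carries the entity class name and id (the message property names here are hypothetical; use whatever the legacy publisher actually sends):

```java
import java.io.Serializable;

import javax.jms.Message;
import javax.jms.MessageListener;

import net.sf.hibernate.SessionFactory;

/** On each change notification, drop the affected entry from the
    2nd-level cache so the next read reloads it from the database. */
public class CacheInvalidationListener implements MessageListener {

    private final SessionFactory factory;

    public CacheInvalidationListener(SessionFactory factory) {
        this.factory = factory;
    }

    public void onMessage(Message message) {
        try {
            // "entityClass" and "entityId" are hypothetical property names.
            String className = message.getStringProperty("entityClass");
            Serializable id = new Long(message.getLongProperty("entityId"));
            factory.evict(Class.forName(className), id);
        } catch (Exception e) {
            // MessageListener.onMessage cannot throw checked exceptions
            throw new RuntimeException("cache invalidation failed", e);
        }
    }
}
```

Note that `SessionFactory.evict` only removes the cached copy; the fresh value is loaded lazily on the next read rather than pushed into the cache.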





© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.