All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 8 posts ] 
 Post subject: Second level cache - Out of Memory
PostPosted: Wed Jun 09, 2004 11:14 am 
Beginner

Joined: Mon Jun 07, 2004 4:21 pm
Posts: 44
Location: Boston
I have a contrived use case to understand second level cache behaviour.

I'm running a query against a table with ~1 million rows ("select * from Service"). I have enabled the second-level cache (EHCache) for the class mapped to this table. Per the EHCache configuration, I expect that once 1000 objects are in memory, the rest overflow to disk. So if a million rows are read from the table, 1000 objects should stay in memory and 999,000 should be written to disk. That isn't happening, and I get an out-of-memory exception.

My expectation was that the second-level cache would prevent out-of-memory exceptions - at least in this specific use case. Is my expectation wrong?

The code is as follows:

public void testHibLoadService() throws Exception {
    Transaction tx = session.beginTransaction();

    String queryString = "from Service";
    Query query = session.createQuery(queryString);
    //query.setMaxResults(1000);

    List list = query.list();
    Iterator iter = list.iterator();
    int count = 0;
    while (iter.hasNext()) {
        count++;
        Service serv = (Service) iter.next();
        String patientid = serv.getPatientid();
        //System.out.println(patientid);
    }

    tx.commit();
    log.info("<<< testHibLoadService()");
}


The relevant section in EHCache.xml is as follows:

<cache name="com.mycompany.persistence.Service"
maxElementsInMemory="1000"
eternal="false"
timeToIdleSeconds="300"
timeToLiveSeconds="600"
overflowToDisk="true"
/>



The second-level cache itself works when the fetch size is small. For example, if I set:

query.setMaxResults(10000);

I can see data being written to disk for 9000 objects, i.e. 1000 remain in memory.

Thanks,
Neel


 Post subject:
PostPosted: Wed Jun 09, 2004 2:56 pm 
Hibernate Team

Joined: Tue Aug 26, 2003 12:50 pm
Posts: 5130
Location: Melbourne, Australia
It is the first-level cache that is not releasing memory.
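In other words, every entity the query materialises stays pinned in the Session (first-level) cache for the life of the unit of work. The usual workaround is to process in batches and clear the Session periodically (session.flush(); session.clear() inside the loop, in current Hibernate versions). The sketch below models that pattern with a plain list standing in for the Session cache so it runs standalone - the class and method names are illustrative, not Hibernate API:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchClearDemo {
    /**
     * Processes totalRows items, dropping references every batchSize
     * items (the stand-in for session.flush(); session.clear()).
     * Returns the largest number of items held at once.
     */
    public static int maxHeld(int totalRows, int batchSize) {
        List<Object> sessionCache = new ArrayList<>(); // stand-in for the first-level cache
        int max = 0;
        for (int i = 1; i <= totalRows; i++) {
            sessionCache.add(new Object());   // entity loaded by the iterator
            max = Math.max(max, sessionCache.size());
            if (i % batchSize == 0) {
                sessionCache.clear();         // real code: session.flush(); session.clear();
            }
        }
        return max;
    }

    public static void main(String[] args) {
        // A million rows processed with a batch of 1000 never holds more
        // than 1000 objects at once - bounded by the batch size.
        System.out.println("peak held: " + maxHeld(1_000_000, 1000));
    }
}
```

Without the periodic clear, the peak equals the full row count, which is exactly the out-of-memory scenario in the original post.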


 Post subject:
PostPosted: Wed Jun 09, 2004 3:25 pm 
Beginner

Joined: Mon Jun 07, 2004 4:21 pm
Posts: 44
Location: Boston
Yes - that's the conclusion I arrived at, but I wasn't sure. Thanks for confirming it, Gavin.

It would be nice if the first-level cache had the ability to write to disk - just like the second-level cache...

Neel


 Post subject:
PostPosted: Wed Jun 09, 2004 5:38 pm 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
The first-level cache should not write to disk; it would be a performance killer, in my opinion.

Now think about what would happen if you did the same thing in plain JDBC: exactly the same.

You can use paging on your query... that's what it's for.
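In Hibernate, the paging Anthony suggests is expressed with query.setFirstResult(offset) and query.setMaxResults(pageSize). The sketch below covers only the offset arithmetic behind such a loop, so it runs without a database; feeding each offset into a real query is left out:

```java
import java.util.ArrayList;
import java.util.List;

public class PagingDemo {
    /**
     * Returns the firstResult offset of each page needed to cover
     * totalRows rows when fetching pageSize rows at a time.
     * Each offset would be passed to query.setFirstResult(offset),
     * paired with query.setMaxResults(pageSize).
     */
    public static List<Integer> pageOffsets(int totalRows, int pageSize) {
        List<Integer> offsets = new ArrayList<>();
        for (int first = 0; first < totalRows; first += pageSize) {
            offsets.add(first);
        }
        return offsets;
    }

    public static void main(String[] args) {
        // The million-row table from the original post, 1000 rows per page.
        List<Integer> offsets = pageOffsets(1_000_000, 1000);
        System.out.println(offsets.size() + " pages, last offset "
                + offsets.get(offsets.size() - 1));
    }
}
```

Because each page is loaded, processed, and then released (or the Session cleared) before the next page is fetched, memory stays bounded by the page size instead of the table size.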

_________________
Anthony,
Get value thanks to your skills: http://www.redhat.com/certification


 Post subject:
PostPosted: Wed Jun 09, 2004 8:07 pm 
Beginner

Joined: Mon Jun 07, 2004 4:21 pm
Posts: 44
Location: Boston
I don't see why a configurable option to write to disk - just as in the second-level cache - would be a performance issue.

You would write to disk only when memory/object usage reaches high levels. That would be better than getting an OOME :)

Neel


 Post subject:
PostPosted: Thu Jun 10, 2004 2:55 am 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
In my opinion (and it's just my opinion), Hibernate doesn't have to guard against "unsafe" user code.
The Hibernate Session is a unit-of-work cache; if the Session started to cover or prevent every kind of failure (OutOfMemoryError, for example), it would be harder to configure and, I think, less performant.

Take a look at the sources; you'll see it's not as easy as you say.
But why don't you modify it and submit a patch? If it's clean and performant, I'm sure the team will accept it ;)



 Post subject:
PostPosted: Thu Jun 10, 2004 8:43 am 
Beginner

Joined: Mon Jun 07, 2004 4:21 pm
Posts: 44
Location: Boston
It just seemed odd to me that the second-level cache provides some nice features for handling huge data sets, but we can't even get to use them because the first-level cache breaks down first.

In our case - a warehouse application (does anybody use Hibernate for a warehouse application?) - this is a genuine problem, because I have to retrieve a huge chunk of data pertaining to a single individual, do some processing on it, and write it back to the db. Even with straight JDBC there are the usual risks of running out of memory, but there are caches one can use with JDBC that will write to disk if the result set is huge - I believe Borland has one such cache (JDataStore?). Some of the lightweight Java databases also offer such a facility - at least their documentation says so.

So it would be nice if the first-level cache could do that too.

Neel


 Post subject:
PostPosted: Thu Jun 10, 2004 8:50 am 
Hibernate Team

Joined: Thu Dec 18, 2003 9:55 am
Posts: 1977
Location: France
The second-level cache is different from the first-level cache - otherwise it would have the same name ;)))
The second-level cache provides such a feature because it handles large amounts of data; it is "connected" to many first-level caches (Hibernate sessions).

That's not exactly what you are describing.



© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.