
All times are UTC - 5 hours [ DST ]



Forum locked This topic is locked, you cannot edit posts or make further replies.  [ 4 posts ] 
 Post subject: Redirecting write operations to another database.
PostPosted: Mon Nov 20, 2006 5:47 pm 
Newbie

Joined: Mon Nov 20, 2006 5:16 pm
Posts: 7
Location: PARIS
Hi guys,

I have this project on hand that has multiple databases and uses Hibernate to access the contents of the DB.

To reduce concurrent-modification issues, I've changed the DB configuration to use Oracle Streams in a master/slave mode, and made the application put a functional lock on objects before changing them (not an Oracle lock, because an object can be "locked" for hours).

So basically, for performance, the application reads from a slave, but writes should go to the master; the master then replicates to several slaves around the world.

We've already separated reads and writes by using two distinct SessionFactories, which allows for different Hibernate strategies if need be.

Now the remaining issue to deal with is reading from the slave while writing to the master.

The initial idea we implemented was, before writing to the master, to reload the object from the master, apply the changes and save it; users read a lot and don't necessarily do many updates.

However, we're hitting a performance issue there, so I'm looking for a way to read from the slave and write to the master, relying on the object version to know that the slave is in sync with the master.
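The version-based decision itself can be sketched in plain Java (this helper and its names are hypothetical, not Hibernate API — with Hibernate the numbers would come from the entity's <version> property):

```java
/** Hypothetical helper: decide the write path from the two version numbers. */
public class SyncCheck {

    public enum Action { WRITE_DIRECTLY, RELOAD_FROM_MASTER }

    /**
     * If the slave's copy carries the same version the master has,
     * the object read on the slave can safely be written to the master;
     * otherwise it must be reloaded from the master first.
     */
    public static Action decide(long slaveVersion, long masterVersion) {
        return slaveVersion == masterVersion
                ? Action.WRITE_DIRECTLY
                : Action.RELOAD_FROM_MASTER;
    }
}
```

The point being that checking only the version on the master (a one-column SELECT) is much cheaper than reloading the whole object graph from it.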

We have identified several solutions to do this:

1/ Read with s1, change the object, evict it and save with s2. => The issue encountered here is lazy-init exceptions when committing s2, for collections opened in s1 that we haven't changed and therefore don't really want to save. I guess we could avoid this by not cascading the evicts?

2/ Write an interceptor that takes the SQL statements generated by Hibernate and transfers/redirects them to the master DB. I'm not too favourable towards this one, because we'd be very low inside Hibernate, and that behaviour might change in the future.

3/ Write a custom JDBC driver that gives one connection for reading and one for writing. We'd have to write the implementations of PreparedStatement and so on, of course.

4/ Read with s1, change the object, and save all modifications with a stateless session onto the master; roll back and close s1, wait for Oracle Streams to sync the DBs, then reload the changed data. => Advantage: stateless sessions don't care much about collections, so no lazy-init issue.
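For what it's worth, option 4 could look roughly like this — a sketch only, assuming readFactory/writeFactory are the two SessionFactories and MyEntity is some mapped class (all three names are mine, not from the project). StatelessSession bypasses the first-level cache, cascades and collection handling, which is why it sidesteps the lazy-init problem:

```java
import org.hibernate.Session;
import org.hibernate.StatelessSession;
import org.hibernate.Transaction;

// Read on the slave with an ordinary session.
Session s1 = readFactory.openSession();
MyEntity e = (MyEntity) s1.get(MyEntity.class, id);
e.setName("changed");

// Write on the master with a stateless session: plain UPDATE,
// no cascades, no collection fetching, no lazy-init exceptions.
StatelessSession s2 = writeFactory.openStatelessSession();
Transaction tx = s2.beginTransaction();
s2.update(e);
tx.commit();
s2.close();

s1.close();   // nothing to flush on the read side
// then wait for Oracle Streams to propagate and reload from the slave
```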

What we already have done is:

1/ Distinct SessionFactories for reading & writing
2/ Lock objects functionally to avoid concurrent-access issues
3/ Track objects that have been changed so we know what to insert, update & delete
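Point 3 could be sketched as a small helper along these lines (the class and its API are hypothetical, not something Hibernate provides):

```java
import java.util.LinkedHashSet;
import java.util.Set;

/** Hypothetical tracker for the insert/update/delete bookkeeping in point 3. */
public class ChangeTracker {
    private final Set<Object> inserts = new LinkedHashSet<>();
    private final Set<Object> updates = new LinkedHashSet<>();
    private final Set<Object> deletes = new LinkedHashSet<>();

    public void recordInsert(Object entity) { inserts.add(entity); }

    public void recordUpdate(Object entity) {
        if (!inserts.contains(entity)) {   // a brand-new object only needs the INSERT
            updates.add(entity);
        }
    }

    public void recordDelete(Object entity) {
        if (inserts.remove(entity)) {      // never persisted: nothing to delete
            return;
        }
        updates.remove(entity);
        deletes.add(entity);
    }

    public Set<Object> toInsert() { return inserts; }
    public Set<Object> toUpdate() { return updates; }
    public Set<Object> toDelete() { return deletes; }
}
```

The cancellation rules (insert-then-delete is a no-op, insert-then-update stays a single insert) keep the batch sent to the master minimal.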

Any other ideas or suggestions, recommendations ?

Thanks in advance,

Cédric

Additional info: in fact the master is in Paris and the slaves are around the world.
Second point: in an ideal situation I would have put a J2EE server or something in front and done some caching to handle the situation, but we have a 2-tier architecture...
However, we have to provide a fix fast because of all the bugs, and only then can we review the architecture. So right now, I'm not in a position to change much of the architecture... :o(


 Post subject: And the answer is
PostPosted: Thu Nov 23, 2006 8:30 pm 
Newbie

Joined: Mon Nov 20, 2006 5:16 pm
Posts: 7
Location: PARIS
Hi guys,

After giving it some rest, I now realize that the most appropriate solution is probably none of the above.
In fact, I think the most appropriate solution is to use the disconnect()/reconnect() capabilities of the Hibernate session.

Something like this (in Hibernate 3 the method that takes a user-supplied connection is reconnect(Connection); there is no connect()):

Session session = HF.openSession(myReadConn);

// ... do some stuff in read-only mode ...

session.disconnect();                  // give the read connection back
session.reconnect(myWriteConn);        // attach the write connection to the master
Transaction tx = session.beginTransaction();
tx.commit();                           // flushes the pending changes to the master
session.close();

// The application then waits for data replication to complete and continues.

What do you think?

regards,

Cédric


 Post subject:
PostPosted: Tue Nov 28, 2006 8:30 am 
Senior
Senior

Joined: Mon Oct 23, 2006 5:12 am
Posts: 141
Location: Galicia, Spain
What do you think of using a second-level cache for read-only purposes, for example?
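Since the slave data only changes when replication catches up, a read-only cache region could indeed cut round-trips. A sketch of a Hibernate 3-style programmatic setup — the config file name, entity name and choice of EhCache are all assumptions on my part:

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

Configuration cfg = new Configuration().configure("read.cfg.xml");
cfg.setProperty("hibernate.cache.use_second_level_cache", "true");
cfg.setProperty("hibernate.cache.provider_class",
                "org.hibernate.cache.EhCacheProvider");
// Equivalent to <cache usage="read-only"/> in the mapping file:
cfg.setCacheConcurrencyStrategy("com.example.MyEntity", "read-only");
SessionFactory readFactory = cfg.buildSessionFactory();
```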

_________________
andresgr (--don't forget to rate)


 Post subject:
PostPosted: Wed Nov 29, 2006 4:19 am 
Regular
Regular

Joined: Tue May 16, 2006 3:32 am
Posts: 117
You could also have a look at this:

http://www.hibernate.org/hib_docs/v3/re ... eplicating
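That section covers Session.replicate(), which is meant for pushing a detached object's state into a second database. A minimal sketch of how it might apply here (writeFactory and detachedEntity are assumed names):

```java
import org.hibernate.ReplicationMode;
import org.hibernate.Session;
import org.hibernate.Transaction;

// Entity was loaded from the slave and modified; replicate it to the master.
Session write = writeFactory.openSession();
Transaction tx = write.beginTransaction();
write.replicate(detachedEntity, ReplicationMode.LATEST_VERSION); // keep the newer version
tx.commit();
write.close();
```

ReplicationMode also offers IGNORE, OVERWRITE and EXCEPTION for other conflict policies.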




© Copyright 2014, Red Hat Inc. All rights reserved. JBoss and Hibernate are registered trademarks and servicemarks of Red Hat, Inc.