I am posting this at the risk of sounding anti-Hibernate. But this is the result of my study of different persistence strategies, and I'd be happy to be proved wrong.
I would reiterate that I am playing devil's advocate only so that I can understand Hibernate better.
These are the issues I see with Hibernate:
1. Stateful nature of the Session object:
Let's take the example of the Session object in Hibernate.
We have two business classes MyClass and SomeOtherClass:
public class MyClass {
    public static void myMethod(String[] args) throws java.text.ParseException {
        Session s1 = HibernateUtil.currentSession();
        Transaction t1 = s1.beginTransaction();

        // Load the user in this Session and change it.
        UserMaster user1 = (UserMaster) s1.load(UserMaster.class, "UserName22458");
        user1.setFname("FName s1");

        // Hand the loaded object over to another class.
        SomeOtherClass someOtherClass = new SomeOtherClass();
        someOtherClass.dosomething(user1);

        t1.commit();
    }
}
public class SomeOtherClass {
    public void dosomething(UserMaster user) {
        // currentSession() returns the same thread-bound Session as in MyClass.
        Session s = HibernateUtil.currentSession();
        UserMaster user2 = (UserMaster) s.load(UserMaster.class, "UserName22458");

        // Same Session, same identifier -> same instance, so this prints true.
        System.out.println("compare the objects: -> " + (user == user2));
    }
}
In a single JVM this prints true, because currentSession() returns the same thread-bound Session and a Session guarantees one in-memory instance per identifier. If MyClass and SomeOtherClass are deployed on different nodes of a cluster, the same code gives a different result, because each node holds its own Session. In other words, the code is not cluster-ready, because the Session is stateful.
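To make the contrast concrete, here is a minimal sketch (assuming the same UserMaster mapping and a hibernate.cfg.xml on the classpath, and going through the SessionFactory directly instead of HibernateUtil) of the two lookups going through two independent Sessions, which is effectively what happens once the call crosses a node boundary:

// Sketch only: two independent Sessions, as two cluster nodes would have.
SessionFactory factory = new Configuration().configure().buildSessionFactory();
Session sA = factory.openSession();
Session sB = factory.openSession();

UserMaster u1 = (UserMaster) sA.load(UserMaster.class, "UserName22458");
UserMaster u2 = (UserMaster) sB.load(UserMaster.class, "UserName22458");

// One Session guarantees one instance per identifier, so inside a single
// Session this comparison would be true; across two Sessions it is false.
System.out.println("compare the objects: -> " + (u1 == u2));

sB.close();
sA.close();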
2. The use of a cache is forced on us:
ORM tools heavily depend on object caches. In Hibernate caches are used to:
a. Ensure that objects are unique within memory and
b. Improve application performance
The primary reason caches are universal in ORM tools is to guarantee object identity in memory, and that has nothing to do with the reason a cache should be used in the first place. A data cache pays off only when data is read frequently but rarely changed; that is the only case in which an application benefits from caching, and it is not true of all applications.
The problem becomes more apparent when we try to use ORM tools in clustered environments, or when another application accesses the database without going through the same cache.
In clustered environments we have to live with the cost of keeping the data caches in sync across nodes. That overhead makes no sense if the application gets no benefit from the cache in the first place.
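To show what I mean by the cache being forced on us, here is a rough sketch (the out-of-band update is imaginary, and a getFname() getter matching the setFname() above is assumed): even with no second-level cache configured at all, a repeated load() in the same Session is answered from the Session's own cache, so a change made directly against the database by another application is simply not seen until we refresh() or evict() the object.

Session s = HibernateUtil.currentSession();

UserMaster u = (UserMaster) s.load(UserMaster.class, "UserName22458");
System.out.println(u.getFname());       // first read, fetched from the database

// ... meanwhile another application updates this row with plain SQL,
// bypassing Hibernate and its caches entirely ...

UserMaster again = (UserMaster) s.load(UserMaster.class, "UserName22458");
System.out.println(again.getFname());   // served from the Session (first-level) cache: stale value

s.refresh(u);                            // force a re-read from the database
System.out.println(u.getFname());        // current again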
Am I right in saying that ORM tools should only be used for:
a. Applications that will not use clustering for scaling, and
b. Applications whose data is accessed frequently but rarely changed, so that the overhead of cache synchronization is justified?
Other disadvantages:
3. Less control over the SQL queries
4. Data and behavior are not separated
5. A façade needs to be built if the data model is to be made available in a distributed architecture
6. Each ORM technology/product has a different set of APIs, and porting code between them is not easy
Points 2, 3 and 4 can be overcome by using a DAO layer to hide the ORM tool. But that would also mean that the DAO layer does not expose the same set of persistent objects that the ORM layer works on. This has two disadvantages:
a. Persistent objects have to be translated back and forth into the objects exposed by the DAO layer (see the sketch after this list).
b. The ORM layer no longer works on the domain objects directly, which seems to take some of the edge off the ORM solution. After all, one of the benefits of an ORM solution is that a complicated domain model, with all its relationships, can be made persistent.
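As a rough illustration of disadvantage (a), here is what the translation looks like; the UserData and UserDao names are made up for this sketch:

// Hypothetical DTO exposed by the DAO layer instead of the Hibernate-mapped UserMaster.
public class UserData {
    private String userName;
    private String fname;

    public void setUserName(String userName) { this.userName = userName; }
    public void setFname(String fname) { this.fname = fname; }
    public String getFname() { return fname; }
}

public class UserDao {
    // Loads a UserMaster through Hibernate but hands back a plain UserData,
    // so the caller never touches the persistent object.
    public UserData findUser(String id) {
        Session s = HibernateUtil.currentSession();
        UserMaster persistent = (UserMaster) s.load(UserMaster.class, id);

        UserData dto = new UserData();
        dto.setUserName(id);
        dto.setFname(persistent.getFname());
        return dto;
    }
}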
Nikhil Bajpai (http://www.geocities.com/nikhilb020875/)