Hello all, I have a question... :)
I use Hibernate 3.2.
Example:
I would like to copy data from table 1 ("InterfaceTable") to table 2 ("BaseTable").
For both tables I have defined business objects (with Hibernate Tools in Eclipse: InterfaceObj and BaseObj).
BaseObj has plausibility (validate) methods that check its attributes for correctness.
The copying process works like this:
Code:
//open session
conf = new Configuration().configure();
factory = conf.buildSessionFactory();
session = factory.openSession();
//read interface data
Criteria criteriaObjectsInInterface = session.createCriteria(InterfaceObj.class);
dataObjectsInInterface = criteriaObjectsInInterface.list();
//read base data (note: the BaseObj criteria, not the interface criteria again)
Criteria criteriaObjectsInBasis = session.createCriteria(BaseObj.class);
dataObjectsInBasis = criteriaObjectsInBasis.list();
int nSizeBasisTable = dataObjectsInBasis.size();
int nSize = dataObjectsInInterface.size();
//transfer data: iterate over the interface data
Iterator iteratorDPInt = dataObjectsInInterface.iterator();
trans = session.beginTransaction();
while(iteratorDPInt.hasNext())
{
    dpInt = (InterfaceObj) iteratorDPInt.next();
    if(nSizeBasisTable > 0) //base table has rows: possible updates
    {
        if(locateInBasisData()) //locate the BaseObj with the primary keys from dpInt; update if one was found
        {
            dataObjectBasis.setValuesFromInterface(dpInt);
        }
    }
    else //base table empty: inserts
    {
        dataObjectBasis = new BaseObj();
        dataObjectBasis.setValuesFromInterface(dpInt);
        session.save(dataObjectBasis); //make the new object persistent
    }
    //validate data
    try
    {
        dataObjectBasis.validate();
    }
    catch(Exception e)
    {
        //do something with the validation failures
    }
}
//and now commit the changes
trans.commit();
session.close();
//end
In the //read base data section I load all data from "BaseTable" up front so that locateInBasisData is fast (otherwise I would need a SELECT query on "BaseTable" for every object from "InterfaceTable").
With this procedure I can process 30,000 records (in both tables) without problems in under 10 seconds and without large memory consumption.
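To illustrate the lookup idea only (a simplified sketch, not my real code: InterfaceObj and BaseObj are stand-ins with a String key, and inserts are just collected in a list instead of saved via the session): keying the base data by primary key once makes locateInBasisData an O(1) map lookup instead of a linear search or a per-row SELECT.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BaseLookupSketch {
    // simplified stand-ins for the mapped business objects
    static class InterfaceObj {
        String key; String value;
        InterfaceObj(String key, String value) { this.key = key; this.value = value; }
    }
    static class BaseObj {
        String key; String value;
        BaseObj(String key) { this.key = key; }
        void setValuesFromInterface(InterfaceObj dpInt) { this.value = dpInt.value; }
    }

    // index the base data once by primary key -> O(1) lookups afterwards
    static Map<String, BaseObj> indexByKey(List<BaseObj> baseData) {
        Map<String, BaseObj> index = new HashMap<String, BaseObj>();
        for (BaseObj b : baseData) {
            index.put(b.key, b);
        }
        return index;
    }

    // transfer: update existing base objects, collect new ones for insertion
    static List<BaseObj> transfer(List<InterfaceObj> interfaceData,
                                  Map<String, BaseObj> baseIndex) {
        List<BaseObj> inserts = new ArrayList<BaseObj>();
        for (InterfaceObj dpInt : interfaceData) {
            BaseObj existing = baseIndex.get(dpInt.key); // "locateInBasisData" as map lookup
            if (existing != null) {
                existing.setValuesFromInterface(dpInt);  // update
            } else {
                BaseObj fresh = new BaseObj(dpInt.key);  // insert
                fresh.setValuesFromInterface(dpInt);
                inserts.add(fresh);
            }
        }
        return inserts;
    }
}
```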
but...
In extreme cases there can be far more records: 500,000, 1,000,000 and so on.
Of course, I would like to use my procedure for small data sets as well as for large ones. So that the memory does not run out with large data sets, I need a way to cache the data locally in files.
Is that possible with Hibernate? I mean, is it possible to configure Hibernate (via a property?) so that its object store is swapped out to files and it works from there?
I have read the tips in the docs (see chapter 13, "Batch processing"), but that is not a solution for very large datasets (the memory can still run out), and setting the property "hibernate.cache.use_second_level_cache"="true" does not help...
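For reference, the chapter 13 pattern as I understand it looks roughly like this (a sketch only, reusing the session/trans setup from my code above; scrolling the interface table and flushing/clearing the session every 50 rows keeps the session cache small, but my up-front base-data list would still grow with "BaseTable"):

```java
//sketch of the chapter 13 "Batch processing" pattern (Hibernate 3.2 API);
//assumes session and trans are set up as in the code above
ScrollableResults interfaceRows = session.createCriteria(InterfaceObj.class)
        .scroll(ScrollMode.FORWARD_ONLY);
int count = 0;
while (interfaceRows.next()) {
    InterfaceObj dpInt = (InterfaceObj) interfaceRows.get(0);
    //... locate/update or insert the matching BaseObj as above ...
    if (++count % 50 == 0) {
        session.flush();  //write pending changes to the database
        session.clear();  //evict processed objects from the session cache
    }
}
trans.commit();
```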
I hope my question is clear; if not, please mail me ;)
Thanks to all,
Nikolai
P.S. I use a standalone environment, no application or web server.