Hello!
I have this problem:
1. Traverse 147 million rows in one table, joining them with another, significantly smaller table
2. Apply some business logic to each row, possibly accessing the db several times in the process
3. Generate response objects for a few (meaning thousands) of these rows, again accessing the db several times
This will be done by a cluster of WebSphere servers, each with several CPUs. Several streams will work in parallel, so the 147 million rows might be processed by, say, 20 threads, each handling 7-8 million rows.
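To make the split concrete, here is the kind of range partitioning I have in mind for dividing the table among the worker threads (plain Java sketch; the class and method names are mine, not from any framework):

```java
public class RangePartitioner {

    // Split [0, totalRows) into nWorkers contiguous half-open ranges,
    // returned as {start, end} pairs. Any remainder is spread over the
    // first few workers so range sizes differ by at most one row.
    public static long[][] partition(long totalRows, int nWorkers) {
        long[][] ranges = new long[nWorkers][2];
        long base = totalRows / nWorkers;
        long remainder = totalRows % nWorkers;
        long start = 0;
        for (int i = 0; i < nWorkers; i++) {
            long size = base + (i < remainder ? 1 : 0);
            ranges[i][0] = start;
            ranges[i][1] = start + size;
            start += size;
        }
        return ranges;
    }

    public static void main(String[] args) {
        long[][] ranges = partition(147_000_000L, 20);
        // 147,000,000 / 20 = 7,350,000 rows per worker
        System.out.println(ranges[0][1] - ranges[0][0]);
    }
}
```

Each worker would then query only its own key range, so no two threads touch the same rows.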
Is it desirable, or even possible, to use Hibernate for this? One thing I'm thinking of is the automatic caching. The big table couldn't (and shouldn't) be cached at all. Is it possible to disable caching, or must I call evict() for each object?
Could I use the default caches or should I build my own?
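To illustrate what worries me about the caching: as far as I understand, every object loaded in a session stays attached until it is evicted or the session is cleared, so a loop over millions of rows would grow without bound unless I clear per batch. Here is a stdlib-only sketch of that batching shape (the HashMap is just a hypothetical stand-in for the session-level cache, not Hibernate's API):

```java
import java.util.HashMap;
import java.util.Map;

public class BatchSketch {

    // Simulate processing totalRows rows, clearing the stand-in cache
    // every batchSize rows. Returns the largest size the cache reached,
    // to show that memory use stays bounded by one batch.
    public static int run(long totalRows, int batchSize) {
        Map<Long, Object> sessionCache = new HashMap<>();
        int maxCacheSize = 0;
        for (long id = 0; id < totalRows; id++) {
            // Each loaded row lingers in the cache...
            sessionCache.put(id, Boolean.TRUE);
            maxCacheSize = Math.max(maxCacheSize, sessionCache.size());
            // ...unless the cache is cleared after every batch.
            if ((id + 1) % batchSize == 0) {
                sessionCache.clear();
            }
        }
        return maxCacheSize;
    }

    public static void main(String[] args) {
        // Cache never holds more than one batch of 1,000 rows.
        System.out.println(run(10_000, 1000));
    }
}
```

Is clearing the session per batch like this the intended way to keep memory flat, or is there a setting that turns the caching off entirely for a query?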
Any ideas, anyone?
Best regards,
P Poluha