Hello,
I am looking for suggestions on loading huge tables from the DB. It's a CRM application, and once every 24 hours it must generate some reports. The problem is that at some point a table with 2,000,000+ rows, 60 columns, and around 10 associations (which are also huge tables) must be loaded into a Map&lt;Integer, Object&gt;. Another problem is that I need every piece of data from it, including the associated objects. Setting default-lazy to false is pure suicide: fetching 100 rows takes 2 minutes.

I also tried splitting the data into pieces and fetching it from the DB with multiple threads, since the DB is accessed only by the updater robot (users read their data from that map); see the simplified sketch below. However, the number of simultaneous connections to the DB is limited, either by PostgreSQL or by the Windows kernel, to around 120-130 threads; if I set more, everything crashes. This gave a significant time saving (about 10 times faster), but it is still not fast enough.

Is this a good way to do it? If the updater runs only once every 24 hours on a well-equipped PC, would this approach be safe? Also, when a large number of simultaneous threads is involved, does Linux handle it better than Windows?
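For reference, this is roughly what the updater does right now, heavily simplified. Customer, the chunk size, and the thread count are placeholders (the real table has ~60 columns and ~10 associations), and I'm assuming a mapped Customer entity with an Integer id:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Simplified sketch of the current multi-threaded updater.
public class Updater {

    private static final int CHUNK_SIZE = 10000;
    private static final int THREADS = 100; // kept below the ~120-130 connection limit

    private final SessionFactory sessionFactory;
    private final Map<Integer, Object> cache = new ConcurrentHashMap<Integer, Object>();

    public Updater(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void reload(int totalRows) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int first = 0; first < totalRows; first += CHUNK_SIZE) {
            final int offset = first;
            pool.submit(new Runnable() {
                public void run() {
                    loadChunk(offset);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    @SuppressWarnings("unchecked")
    private void loadChunk(int first) {
        Session session = sessionFactory.openSession(); // one DB connection per worker
        try {
            List<Customer> rows = session
                .createQuery("from Customer c order by c.id")
                .setFirstResult(first)
                .setMaxResults(CHUNK_SIZE)
                .list();
            for (Customer c : rows) {
                cache.put(c.getId(), c);
            }
        } finally {
            session.close();
        }
    }
}
```

With ~100 workers this is where the roughly 10x speedup I mentioned comes from, but the total run time is still too long.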
Is there any way to do this with default-lazy TRUE? The problem is that the DB contains around 35 tables, and each one has at least one bidirectional relationship with another table. If I'm not mistaken, setting default-lazy to FALSE practically loads the entire DB into memory. Something like the sketch below is what I have in mind, but I don't know if it's feasible.
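The idea would be to keep every mapping lazy and let each report query fetch explicitly only the associations it actually needs, via join fetch. A rough sketch; Customer, orders, and updatedAt are invented names, and I'm again assuming a mapped Customer entity:

```java
import java.util.Date;
import java.util.List;

import org.hibernate.Session;

// Idea for keeping default-lazy="true": each report query names the
// associations it actually needs, so nothing else gets pulled in.
public class ReportQueries {

    @SuppressWarnings("unchecked")
    public static List<Customer> customersForReport(Session session, Date since) {
        return session
            .createQuery(
                "select distinct c from Customer c "
              + "join fetch c.orders "           // only this report's associations
              + "where c.updatedAt >= :since")
            .setParameter("since", since)
            .list();
    }
}
```

Would something like this scale to the full 2,000,000+ rows, or does the join fetch just move the problem somewhere else?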