Hi All,
I am responsible for maintaining a small application (a collection of jobs) that publishes data from our internal database to an external web application. Next month, my company is planning to upgrade this database from Oracle 9i to 10g, and has asked me to do some benchmarking to verify that the upgrade doesn't introduce any performance problems.
It looks to me like the simplest way to answer this question would be to run each job normally and use SessionFactory's getStatistics() method to compare the average execution time of each query before and after the upgrade.
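For context, this is roughly what I have in mind. It's a simplified sketch; sessionFactory just stands for whatever factory the jobs already use, and the getters are the ones from the Hibernate 3 stat API:

import org.hibernate.SessionFactory;
import org.hibernate.stat.QueryStatistics;
import org.hibernate.stat.Statistics;

public class QueryTimingReport {

    // Dump the per-query counters collected by Hibernate's statistics
    // service, so the same job can be compared before and after the upgrade.
    public static void report(SessionFactory sessionFactory) {
        Statistics stats = sessionFactory.getStatistics();
        for (String query : stats.getQueries()) {
            QueryStatistics qs = stats.getQueryStatistics(query);
            System.out.println(query
                    + " | executions=" + qs.getExecutionCount()
                    + " | avg(ms)=" + qs.getExecutionAvgTime()
                    + " | max(ms)=" + qs.getExecutionMaxTime());
        }
    }
}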
However, in my preliminary testing I've noticed that enabling statistics in Hibernate causes a significant performance hit, especially in jobs that execute a large number of small queries (i.e., queries that return 0 or 1 record). This led to a few concerns that I'm hoping somebody can clarify for me (I've pasted how I enable and reset the collector below my questions, in case it matters):
1) Where is this overhead coming from? Is Hibernate computing these statistics "in memory" from its own point of view? Or is it requesting additional information from the DBMS?
2) Is the additional overhead I'm observing reflected in the statistics themselves? In other words, by choosing to enable statistics, am I inflating the very numbers I'm trying to measure?
3) Is the amount of overhead generated by enabling statistics liable to change if the DBMS changes?
4) Most importantly, can I expect that if - according to Hibernate statistics - the average execution time for a query is faster on DBMS-X than on DBMS-Y, the same will hold true in production with statistics disabled?
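In case it's relevant to the answers, this is more or less how I toggle and reset the collector between runs. Again a simplified sketch; setStatisticsEnabled() and clear() come from the same Statistics interface:

import org.hibernate.SessionFactory;
import org.hibernate.stat.Statistics;

public class StatsToggle {

    // Turn the collector on and wipe any previously gathered numbers so
    // each benchmark run starts from a clean slate. This is equivalent to
    // setting hibernate.generate_statistics=true in the configuration,
    // except it can be switched per job.
    public static void resetAndEnable(SessionFactory sessionFactory) {
        Statistics stats = sessionFactory.getStatistics();
        stats.clear();
        stats.setStatisticsEnabled(true);
    }
}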
Thanks very much for your help.