Quote:
concept is typically 4th-NF
Trying to respond too fast... Of course, 1st-NF
Quote:
Get 10 rows of 493 columns will take longer than 10 rows of 10 columns
You'll notice I said "From a database performance perspective"... In pure database performance terms, it actually has more to do with the number of joins that have to be performed. Very rarely in SQL are you returning all rows in a table, so the total number of columns defined for a table is largely irrelevant. What is far more relevant is how you partition the data, if you decide to break it up, so that you avoid those joins as often as possible.
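A minimal sketch of that partitioning idea, using a hypothetical `users` schema in SQLite (the table and column names are my own illustration, not from the thread): keep the columns read on every request in one narrow table, and push the rarely-read ones into an extension table, so the hot path never joins.

```python
import sqlite3

# Hypothetical schema: split a wide "users" table into a hot-path table
# and a rarely-read extension table, so common queries need no join.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
cur.execute("CREATE TABLE user_profile_ext (user_id INTEGER PRIMARY KEY, bio TEXT, preferences TEXT)")
cur.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
cur.execute("INSERT INTO user_profile_ext VALUES (1, 'long bio...', '{}')")

# Hot path: no join needed for the columns read on every request.
row = cur.execute("SELECT name, email FROM users WHERE id = 1").fetchone()
print(row)  # ('alice', 'alice@example.com')

# Cold path: pay for the join only when the rare columns are actually needed.
full = cur.execute(
    "SELECT u.name, e.bio FROM users u "
    "JOIN user_profile_ext e ON e.user_id = u.id WHERE u.id = 1"
).fetchone()
print(full)  # ('alice', 'long bio...')
```

The point is where you draw the line: the split should follow access patterns, not just normal-form rules.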
Quote:
Get 10 rows of 493 columns will take longer than 10 rows of 10 columns
Not if those 10 columns are spread across 10 different tables. The quickest access to data is going to be sans joins. Period. Past about 5-6 joined tables on most RDBMSs, performance starts to degrade dramatically.
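A toy illustration of the structural cost (not a benchmark; the table names are invented for the example): the same row fetched from one flat table versus the same columns spread across three tables joined on a shared key. Both queries return identical data, but the second plan has to walk every table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized: one table, zero joins.
cur.execute("CREATE TABLE flat (id INTEGER PRIMARY KEY, a TEXT, b TEXT, c TEXT)")
cur.execute("INSERT INTO flat VALUES (1, 'a1', 'b1', 'c1')")

# Over-split: each column in its own table -> every read pays N-1 joins.
for t in ("ta", "tb", "tc"):
    cur.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("INSERT INTO ta VALUES (1, 'a1')")
cur.execute("INSERT INTO tb VALUES (1, 'b1')")
cur.execute("INSERT INTO tc VALUES (1, 'c1')")

no_join = cur.execute("SELECT a, b, c FROM flat WHERE id = 1").fetchone()
joined = cur.execute(
    "SELECT ta.val, tb.val, tc.val FROM ta "
    "JOIN tb ON tb.id = ta.id "
    "JOIN tc ON tc.id = ta.id WHERE ta.id = 1"
).fetchone()
print(no_join == joined)  # True -- same data, but the join plan touches 3 tables
```

Scale that to 10 tables and 9 joins per read and the optimizer's work grows quickly, which is the degradation being described.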
Quote:
I would put $5 down that, with lazy initialization and Hibernate, the normalized set would perform better, but I cannot back that with hard data.
Of course, again "From a database performance perspective"... You'll notice my two suggestions included either breaking the data apart for Hibernate, or at the very least mapping lightweight views of the data so that not all the columns are returned.
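The lightweight-view suggestion can be sketched like this (a hypothetical `orders` table of my own invention; in practice you would map a Hibernate entity to the view instead of the base table): the view exposes only the handful of columns the application needs, so the heavy columns never leave the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical wide table with a couple of heavy, rarely-needed columns.
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, "
    "total REAL, notes TEXT, raw_payload TEXT)"
)
cur.execute("INSERT INTO orders VALUES (1, 'acme', 99.5, 'rush', '...big blob...')")

# Lightweight view: only the columns the application actually maps.
cur.execute("CREATE VIEW orders_light AS SELECT id, customer, total FROM orders")

row = cur.execute("SELECT * FROM orders_light WHERE id = 1").fetchone()
print(row)  # (1, 'acme', 99.5) -- notes and raw_payload are never fetched
```

An ORM entity bound to `orders_light` rather than `orders` gets the column-pruning for free on every query, without restructuring the underlying table.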