I've recently come across an interesting performance problem using SQL Server and want to see whether anyone out there can help.
We have a large table (> 1,000,000 rows) which is mapped to an NHibernate entity and we need to locate the entity by a property which maps to a varchar database column.
We have an index on the column, so we expected that this index would be used by the query optimizer...and it is; however, rather than doing an index seek it's doing an index scan. This had me stumped for some time until I came across the following post:
http://sqljunkies.com/WebLog/sqlbi/arch ... 24249.aspx
This points out that if a varchar column has an index on it (also varchar, obviously) and a query against that column passes an *nvarchar* value, then an index *scan* is performed rather than a seek, because every value in the index has to be converted to nvarchar before the comparison can be made. That causes a pretty serious performance degradation.
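To illustrate with a stripped-down example (the table and column names here are made up, not our real schema):

CREATE TABLE dbo.Customer (
    Id int IDENTITY(1,1) PRIMARY KEY,
    Code varchar(20) NOT NULL
);
CREATE INDEX IX_Customer_Code ON dbo.Customer (Code);

-- varchar predicate: the optimizer can seek on IX_Customer_Code
SELECT Id FROM dbo.Customer WHERE Code = 'ABC123';

-- nvarchar predicate: the varchar column has to be converted to nvarchar
-- before the comparison, and on our setup that shows up as an index scan
SELECT Id FROM dbo.Customer WHERE Code = N'ABC123';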
The real issue here is that NHibernate executes its queries using sp_executesql and all string parameters are passed as nvarchar, which triggers the above performance problem.
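For what it's worth, this is roughly the shape of what we see in Profiler (the statement and parameter names are just illustrative, not the exact SQL NHibernate generates for us):

exec sp_executesql
    N'SELECT customer0_.Id, customer0_.Code FROM dbo.Customer customer0_ WHERE customer0_.Code = @p0',
    N'@p0 nvarchar(4000)',
    @p0 = N'ABC123'

Because @p0 is declared as nvarchar, the varchar index can't be seeked directly.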
Obviously we *could* change the column's data type to nvarchar, but this really isn't what we want in our DB schema.
What I really want to know is whether there's some way that the parameter type passed to sp_executesql can be explicitly set to varchar.
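In other words, what I'd like it to send is something along these lines (again, the names are illustrative):

exec sp_executesql
    N'SELECT customer0_.Id, customer0_.Code FROM dbo.Customer customer0_ WHERE customer0_.Code = @p0',
    N'@p0 varchar(20)',
    @p0 = 'ABC123'

so that the parameter matches the column's type and a seek is possible.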
Hope that's clear and (more importantly) that someone can help!
Cheers,
Symon.