SQL Server datatypes are not always what they seem to be. Martin Schmidt recently had an interesting blog post (in Danish) regarding implicit decimal conversion that sparked my interest.
Let me sketch out the scenario. We have a simple table with a decimal column like so:
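A minimal sketch of such a table; I'm assuming nothing beyond the single decimal column:

```sql
CREATE TABLE tblDecimalTest
(
	DecimalColumn dec(5, 2) NOT NULL
);
```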
Note that the DecimalColumn has a precision of five and a scale of two. That basically boils down to 999.99 being the largest number we can store and -999.99 the smallest. The precision defines the maximum total number of digits in the number, while the scale defines how many of those digits fall to the right of the decimal point. If we insert an integer value of 999, it'll implicitly be stored as 999.00; thus we can insert neither 1000 nor 10000, even though they have no decimal digits. Knowing the configured precision and scale is important, as we'll see in just a moment.
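A quick demonstration of those limits (the exact error text may vary slightly between SQL Server versions):

```sql
INSERT INTO tblDecimalTest (DecimalColumn) VALUES (999);  -- OK, stored as 999.00
INSERT INTO tblDecimalTest (DecimalColumn) VALUES (1000); -- Fails: arithmetic overflow

DELETE FROM tblDecimalTest; -- Clean up so the table is empty for the next step
```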
Let us insert a single row into the tblDecimalTest table.
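Something along these lines, using two dec(5,2) variables holding 20.5 and 27.52:

```sql
DECLARE @Decimal1 dec(5, 2);
DECLARE @Decimal2 dec(5, 2);
SET @Decimal1 = 20.5;
SET @Decimal2 = 27.52;

INSERT INTO tblDecimalTest (DecimalColumn) VALUES (@Decimal1 / @Decimal2);
```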
This is the result if we perform a select on the table:
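```sql
SELECT * FROM tblDecimalTest;
```

```
DecimalColumn
-------------
0.74
```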
Both decimal variables were declared as dec(5,2), so they match the column in the table. Calculating 20.5 / 27.52 on a standard calculator gives a result of 0.7449127..., but as we're storing this with a scale of two, the value is rounded to 0.74.
We just inserted the value of 20.5 / 27.52 into a dec(5,2) column. Let's run a select using those same variables:
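Note that this assumes we're still in the same batch, so @Decimal1 and @Decimal2 remain in scope:

```sql
SELECT * FROM tblDecimalTest WHERE DecimalColumn = @Decimal1 / @Decimal2;
```

```
DecimalColumn
-------------

(0 row(s) affected)
```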
What is that? No results! Why does this happen? After all, we just inserted @Decimal1 / @Decimal2, so surely we should be able to select that row again? The key lies in how SQL Server converts decimal datatypes during math operations. What we're looking for is the division operator, which defines the following precision and scale calculations:
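Here e1 has precision p1 and scale s1, while e2 has precision p2 and scale s2, as documented in the Books Online topic on precision, scale, and length:

```
e1 / e2

Result precision = p1 - s1 + s2 + max(6, s1 + p2 + 1)
Result scale     = max(6, s1 + p2 + 1)
```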
Let’s input our values into that formula.
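With both variables declared as dec(5,2), we have p1 = p2 = 5 and s1 = s2 = 2:

```
Result precision = 5 - 2 + 2 + max(6, 2 + 5 + 1) = 5 + 8 = 13
Result scale     = max(6, 2 + 5 + 1)             = 8
```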
Thus, the result of dividing two dec(5,2) variables is implicitly typed as dec(13,8)! Similar conversions are made for addition, subtraction and multiplication.
Let's check the division result directly:
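```sql
SELECT @Decimal1 / @Decimal2;
```

```
0.74491279
```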
When evaluating the WHERE clause, we're in fact comparing a dec(5,2) column with a dec(13,8) value. Behind the scenes, SQL Server will implicitly convert both sides to a common datatype that fits both - which is dec(13,8). With a precision of 13 and a scale of 8, 0.74 and 0.74491279 are not equal, and thus we don't get any results back. If we cast the division to dec(5,2) explicitly, however, we find the row:
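```sql
SELECT * FROM tblDecimalTest
WHERE DecimalColumn = CAST(@Decimal1 / @Decimal2 AS dec(5, 2));
```

```
DecimalColumn
-------------
0.74
```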
While testing in SQL Server Management Studio, this is an obvious problem - the missing row is right there in front of you. When encountering it from code, it's much more difficult to notice, especially if you don't know the precise schema you're working against. Observe the following code, running against an empty tblDecimalTest table.
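The original scenario plays out in application code, but the essence can be sketched in plain T-SQL; the PRINT diagnostic is my own addition for illustration:

```sql
DECLARE @Decimal1 dec(5, 2);
DECLARE @Decimal2 dec(5, 2);
DECLARE @RowCount int;
SET @Decimal1 = 20.5;
SET @Decimal2 = 27.52;

-- Store the computed value...
INSERT INTO tblDecimalTest (DecimalColumn) VALUES (@Decimal1 / @Decimal2);

-- ...and immediately try to read it back using the very same expression
SELECT @RowCount = COUNT(*)
FROM tblDecimalTest
WHERE DecimalColumn = @Decimal1 / @Decimal2;

PRINT 'Rows found: ' + CAST(@RowCount AS varchar(10));
```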
The result:
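```
Rows found: 0
```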
Without knowledge of the schema, and of how SQL Server treats decimal math operations, this could've been a tough bug to track down.