That's what I'm trying to get a grasp on: what Postgres is doing with the calculation as it truncates or rounds the number when committing the records to the physical table.

I just started digging into this because we are having problems where some fields in the database have a precision of 2 and others go all the way to 10 decimal places. The table layout we have is not consistent, and the results are hundredths to thousandths of a penny off, but those pennies start to become dollars every 100 to 1000 transactions. It seems the pg rounding is favoring the lower side of the number when being committed to the table. I've been going over transactions in WIP and compared them to values in the General Ledger: I'm off 6 cents, and that's only on the 36 transactions I have hand-checked. The GL has a lower value compared to the records in the WIP tables, which have 4- and 6-decimal precision versus the GL's 2-decimal precision.

I'm going through the tables and making all the numeric fields the same. I have run into problems, as some of the columns are referenced by views and other constraints and it's not letting me change them. :'(

We have several columns in tables defined as numeric(20,10), which is just insanity unless you're doing scientific calculations (which, admittedly, we do). Having that many decimal places for an accounting package is just nonsense, and then it's rounded to 4 or 6 in the Inventory and WIP tables, then to 2 when the numbers finally hit the GL tables. Whoever laid these tables out has never had to try to get numbers to balance and agree across tables :-( . Every time I dig a little deeper I keep finding things like this.

Some people may think I'm crazy trying to track this down, but when you only consume 0.003186 lbs of a metal per part, the metal costs 22.7868 per lb, and the work order calls for 1148 parts, how the machine rounds becomes a big problem: (0.00318611 * 1148) = 3.65765 lbs consumed * 22.7868 = $83.3462. Since the GL tables are 2-decimal, the value is stored as $83.35. But the problem is far worse than that: the BOM allows for a greater precision of 8, WIP Inventory Movements shows only 6, and the WIP tables have 6 and 4.
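The drift described above can be reproduced with Python's decimal module (a sketch, not the application's actual code; the column scales and truncation points below are assumptions based on the figures in this thread):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

# Figures from the example above; the column scales are assumptions.
usage_bom = Decimal("0.00318611")  # BOM column: 8 decimal places
usage_wip = Decimal("0.003186")    # same value truncated to 6 places
price     = Decimal("22.7868")     # cost per lb
parts     = 1148

def to_gl(amount):
    # GL stores 2 decimals; PostgreSQL numeric rounds halves away from zero
    return amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Full-precision path: the consumed quantity is never truncated
full = to_gl(usage_bom * parts * price)

# WIP path: the consumed quantity is truncated to 4 decimal places first
qty_wip = (usage_wip * parts).quantize(Decimal("0.0001"), rounding=ROUND_DOWN)
wip = to_gl(qty_wip * price)

print(full, wip, full - wip)  # 83.35 83.34 0.01
```

One penny per transaction, which matches the 6-cents-over-36-transactions discrepancy in scale.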
The question quickly becomes: which number is the correct number? WIP truncates the material consumed, so 0.003186 * 1148 = 3.6575 * 22.7868 = 83.3427, which rounds to 83.34. Multiply this by 1000 transactions a day and we start having major problems.

Sam Mason wrote:
> On Mon, May 12, 2008 at 06:37:02PM -0400, Justin wrote:
>> I guess I have not been very clear. Let's take this:
>>
>> select (9/10), (9/10)::numeric, (9::numeric/10::numeric), (9./10), (9*.1)
>>
>> With the given select statement I expected the results all to be the same, especially since it casts 4 of the 5 to numeric, either with an explicit cast or by containing a decimal point. Instead PostgreSQL casts the first 2 calculations to integer; it then uses integer math, so the result is 0. To add further confusion to my small brain, there is an explicit type cast on the second calculation, but it still returned 0 -- not what I would have expected. After thinking about it for, say, 10 seconds, I see that PostgreSQL is following the order of operations in the 2nd calculation, where it does integer math and then casts the result to numeric.
>
> PG does very similar things to what C does. '9' is an integer literal, and so is '10'; there is a '/' operator that takes two integers and returns an integer, and this gets used, resulting in an integer. If you happen to cast the thing to a value of numeric type, this happens after the division (i.e. it follows the syntax, like C does). Casting the integers to values of numeric type is similar; just the numeric version of the division operator gets used. The last example exercises a different code path, in that '9.' is a value of numeric type and '10' is still of integer type. There's some magic somewhere in PG that says that values of numeric type are more expressive than values of integer type, causing the parser (I'm guessing here) to insert a cast to numeric type. The types now unify and one value can be divided by the other.
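The operator selection Sam describes can be mirrored outside SQL. As a hedged sketch in Python, using // as the analogue of the integer '/' operator and Decimal as the analogue of numeric:

```python
from decimal import Decimal

int_div   = 9 // 10                   # SQL (9/10): integer operator chosen
cast_late = Decimal(9 // 10)          # SQL (9/10)::numeric: cast applied AFTER division
num_div   = Decimal(9) / Decimal(10)  # SQL (9::numeric/10::numeric): numeric throughout
promoted  = Decimal("9") / 10         # SQL (9./10): the integer is promoted

print(int_div, cast_late, num_div, promoted)  # 0 0 0.9 0.9
```

The cast on the second expression comes too late to help, exactly as in the SQL case: the integer division has already discarded the fraction by the time the cast runs.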
> The magic seems somewhat arbitrary; what if I wanted to go to the less precise type, or generally be told when things didn't unify?
>
>> I made the incorrect assumption that PostgreSQL would have cast all the arguments to numeric and then done the math. After thinking this through for a short bit, I see why PostgreSQL is casting the arguments to integer type, as numeric/floating point math can be a pretty heavy hit performance-wise.
>
> I don't think it's accurate to say the behaviour is there for performance reasons; it's just evaluating your code as you've written it. The behaviour you describe is closer to an untyped (i.e. dynamically checked, or as they seem to be popularly known, "weakly typed") scripting language. Either that or something like Haskell, which treats types much more rigorously than PG, where the expression (9.0 / (10::Int)) would fail to type check, and 9.0/10 (or even 9/10) would do what you wanted and parse 10 as any value that implements the Fractional type class (probably a floating point number).
>
> The easiest way to understand what's going on is generally playing with a single expression, then changing the literals to represent values of different types and seeing how the result changes. You may get some mileage out of using EXPLAIN VERBOSE (you can see the cast being inserted in the 9./10 case, when compared to 9/10 -- function OID 1740 takes an int4 and returns a numeric), but it's somewhat difficult to read.
>
> Sam
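One possible contributor to the "favoring the lower side" impression earlier in the thread (an aside, not something established here): PostgreSQL's numeric type rounds halves away from zero, while double precision typically rounds half to even on IEEE platforms, so values computed in float and values computed in numeric can land on different sides of a half-cent. Python's decimal module can show the two modes side by side:

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

x = Decimal("83.345")
as_numeric = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)    # half away from zero
as_float   = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)  # half to even
print(as_numeric, as_float)  # 83.35 83.34
```

If any step in the WIP-to-GL pipeline passes through double precision, half-cent values will sometimes round down where numeric would have rounded up.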