Kyotaro HORIGUCHI <horiguchi.kyotaro@xxxxxxxxxxxxx> writes:
> At Tue, 21 Mar 2017 07:52:25 -0700, Adrian Klaver
> <adrian.klaver@xxxxxxxxxxx> wrote in
> <375c9e5a-960f-942c-913f-55632a1f0a90@xxxxxxxxxxx>
>> On 03/21/2017 07:42 AM, Tom Lane wrote:
>>> It looks like the problem is overflow of the result of
>>> interval_cmp_value, because it's trying to compute
>>> =# select '32618665'::int8 * 30 * 86400 * 1000000;
>>> ERROR:  bigint out of range
>>> It's not immediately obvious how to avoid that while preserving the
>>> same comparison semantics :-(

> Detecting the overflow during the conversion can fix it while
> preserving the semantics (except for the value range).  The current
> code tells a lie anyway in these cases, but I'm not sure whether
> limiting the range of values is acceptable.

I don't think it is.  It'd cause failures when attempting to enter very
large interval values into btree indexes, for instance.

A possible solution is to work manually in wider-than-64-bit arithmetic,
that is, compute the comparison values div and mod some pretty large
number and then compare the two halves.  I seem to recall that we did
something similar in a few cases years ago, before we were willing to
assume that every machine had 64-bit integer support.

Of course, on machines having int128 you could just use that type
directly.  I'm not sure how widespread that support is nowadays.  Maybe
a 95%-good-enough solution is to use int128 where available, and
otherwise throw errors for intervals exceeding 64 bits.

			regards, tom lane
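
[For concreteness, here is a minimal self-contained sketch in C of the
div-and-mod scheme described above.  The Interval struct and constants
are simplified stand-ins for the backend's real definitions, and
interval_cmp_wide is a hypothetical name, not an existing backend
function.  Each interval is reduced to a (days, microseconds-within-day)
pair, and the pairs are compared lexicographically; this agrees with the
old single-64-bit-value comparison everywhere the latter didn't
overflow.]

#include <stdint.h>
#include <stdio.h>

/*
 * Simplified stand-in for the backend's Interval struct; the field
 * names match the real ones for readability.
 */
typedef struct
{
	int64_t		time;			/* microseconds, sub-day part */
	int32_t		day;			/* days */
	int32_t		month;			/* months */
} Interval;

#define USECS_PER_DAY	INT64_C(86400000000)
#define DAYS_PER_MONTH	30

/*
 * Compare two intervals with the same semantics as collapsing each one
 * to a single microsecond count, but without ever forming that
 * (overflowable) 64-bit sum.  Each interval is split div/mod
 * USECS_PER_DAY into a "days" half and a "microseconds within a day"
 * half, and the halves are compared lexicographically.
 */
static int
interval_cmp_wide(const Interval *a, const Interval *b)
{
	int64_t		a_days, a_rem;
	int64_t		b_days, b_rem;

	/* div/mod the sub-day field, normalizing the remainder to be >= 0 */
	a_days = a->time / USECS_PER_DAY;
	a_rem = a->time % USECS_PER_DAY;
	if (a_rem < 0)
	{
		a_rem += USECS_PER_DAY;
		a_days--;
	}
	b_days = b->time / USECS_PER_DAY;
	b_rem = b->time % USECS_PER_DAY;
	if (b_rem < 0)
	{
		b_rem += USECS_PER_DAY;
		b_days--;
	}

	/*
	 * Fold months and days into the "days" half.  This cannot overflow:
	 * |month| <= 2^31, and 2^31 * 30 is far below 2^63.
	 */
	a_days += (int64_t) a->month * DAYS_PER_MONTH + a->day;
	b_days += (int64_t) b->month * DAYS_PER_MONTH + b->day;

	if (a_days != b_days)
		return (a_days < b_days) ? -1 : 1;
	if (a_rem != b_rem)
		return (a_rem < b_rem) ? -1 : 1;
	return 0;
}

int
main(void)
{
	/* the value from the original report: 32618665 months */
	Interval	big = {0, 0, 32618665};
	Interval	bigger = {1, 0, 32618665};	/* one microsecond more */

	printf("%d\n", interval_cmp_wide(&big, &bigger));	/* prints -1 */
	return 0;
}

[With int128 available, the same effect falls out of simply widening the
accumulator in interval_cmp_value; the two-halves version above is what
the portable fallback could look like.]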