At Thu, 30 Mar 2017 10:57:19 -0400, Tom Lane <tgl@xxxxxxxxxxxxx> wrote in <2087.1490885839@xxxxxxxxxxxxx>
> Kyotaro HORIGUCHI <horiguchi.kyotaro@xxxxxxxxxxxxx> writes:
> > At Tue, 21 Mar 2017 07:52:25 -0700, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote in <375c9e5a-960f-942c-913f-55632a1f0a90@xxxxxxxxxxx>
> >> On 03/21/2017 07:42 AM, Tom Lane wrote:
> >>> It looks like the problem is overflow of the result of
> >>> interval_cmp_value, because it's trying to compute
> >>> =# select '32618665'::int8 * 30 * 86400 * 1000000;
> >>> ERROR: bigint out of range
> >>> It's not immediately obvious how to avoid that while preserving the
> >>> same comparison semantics :-(
>
> > Detecting the overflow during the conversion can fix it while
> > preserving the semantics (except for the value range). The current
> > code tells a lie anyway in such cases, but I'm not sure whether
> > limiting the range of values is acceptable or not.
>
> I don't think it is.  It'd cause failures in attempting to enter
> very large interval values into btree indexes, for instance.

As for btree on intervals: it uses the same conversion function for
its comparisons, so the overflow check covers btree as well. The
following correctly fails with the patch.

| =# insert into ti values ('32618665 years'::interval);
| ERROR: interval out of range during comparison

But strange behavior shows up around index creation: on a table
without an index the same insert succeeds, and the error is raised
only later, when CREATE INDEX has to compare the stored values.

| =# insert into ti values ('32618665 years'::interval);
| INSERT 0 1
| =# create index on ti using btree (i);
| ERROR: interval out of range during comparison

So restricting the domain at input time (in interval_in or the like)
might be better. Since we don't have anything wider than bigint,
extract(usec) will overflow for a certain range of interval values
anyway. Or should we allow returning such values as numeric? If we
don't mind that inconsistency, just using a wider integer internally
would be useful.

> A possible solution is to manually work in wider-than-64-bit
> arithmetic, that is compute the comparison values div and mod
> some pretty-large number and then compare the two halves.
> I seem to recall that we did something similar in a few cases
> years ago, before we were willing to assume that every machine
> had 64-bit integer support.
>
> Of course, for machines having int128, you could just use that
> type directly.  I'm not sure how widespread that support is
> nowadays.  Maybe a 95%-good-enough solution is to use int128
> if available and otherwise throw errors for intervals exceeding
> 64 bits.

int128 is already used in numeric.c, so this is doable in the same
manner. In that case it will be a bit slower on platforms that lack
int128.

By the way, is it right that we treat this not as a bug fix to be
done in the PG 10 dev cycle, but as an improvement for 11?

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
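P.S. To make the two alternatives concrete, here is a rough
stand-alone sketch of an overflow-proof comparison (untested, and not
the actual backend code: the MyInterval struct, the HAVE_INT128 guard
as used here, and all function names are invented for illustration).
It uses __int128, a gcc/clang extension, where the compiler provides
it, and falls back to the div/mod idea otherwise:

/* Sketch only; types and names are invented, not backend source. */
#include <stdint.h>

typedef struct MyInterval
{
	int64_t		time;			/* microseconds */
	int32_t		day;
	int32_t		month;
} MyInterval;

#define USECS_PER_DAY	INT64_C(86400000000)
#define DAYS_PER_MONTH	30

#ifdef HAVE_INT128
/*
 * 128-bit path: |month| <= 2^31 and 30 * USECS_PER_DAY < 2^42, so the
 * widened sum needs at most ~74 bits and can never overflow an int128.
 */
static __int128
my_interval_cmp_value(const MyInterval *iv)
{
	__int128	span = iv->time;

	span += (__int128) iv->month * DAYS_PER_MONTH * USECS_PER_DAY;
	span += (__int128) iv->day * USECS_PER_DAY;
	return span;
}

static int
my_interval_cmp(const MyInterval *a, const MyInterval *b)
{
	__int128	va = my_interval_cmp_value(a);
	__int128	vb = my_interval_cmp_value(b);

	return (va < vb) ? -1 : (va > vb) ? 1 : 0;
}
#else
/*
 * Fallback: split the value div/mod USECS_PER_DAY.  Both halves fit
 * comfortably in int64 (days is at most ~2^37), and comparing the
 * (days, remainder) pairs lexicographically orders the same way as
 * the full 128-bit value would.
 */
static void
my_interval_cmp_halves(const MyInterval *iv, int64_t *days, int64_t *rem)
{
	*days = iv->time / USECS_PER_DAY
		+ (int64_t) iv->day
		+ (int64_t) iv->month * DAYS_PER_MONTH;
	*rem = iv->time % USECS_PER_DAY;

	/* normalize so that 0 <= rem < USECS_PER_DAY */
	if (*rem < 0)
	{
		*rem += USECS_PER_DAY;
		(*days)--;
	}
}

static int
my_interval_cmp(const MyInterval *a, const MyInterval *b)
{
	int64_t		da, ra, db, rb;

	my_interval_cmp_halves(a, &da, &ra);
	my_interval_cmp_halves(b, &db, &rb);

	if (da != db)
		return (da < db) ? -1 : 1;
	return (ra < rb) ? -1 : (ra > rb) ? 1 : 0;
}
#endif

With either path the CREATE INDEX case above would raise no error at
all: the comparison itself can no longer overflow, so out-of-range
values would only need to be rejected (if at all) at input time.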