Sam Wong wrote:
>>>>> I am investigating a performance issue involving LIKE 'xxxx%'
>>>>> on an index in a complex query with joins.
>>>>>
>>>>> Q1.
>>>>> SELECT * FROM shipments WHERE shipment_id LIKE '12345678%'
>>>>>
>>>>> Q2.
>>>>> SELECT * FROM shipments WHERE shipment_id >= '12345678' AND
>>>>> shipment_id < '12345679'
>>>>>
>>>>> [Q1 and Q2 have different row estimates]

Merlin wrote:
>> Right -- I didn't visualize it properly. Still, you're asking
>> the server to infer that, since you're looking between two adjacent
>> textual characters with a range bounded [), it should convert the
>> 'between' into a partial string search. That holds up logically but
>> probably isn't worth spending cycles on, particularly in cases of
>> non-ASCII-mappable Unicode characters.

> PostgreSQL does that already. Refer to the analyze results of Q1 and Q2,
> which give
> "Index Cond: ((shipment_id >= '12345678'::text) AND (shipment_id <
> '12345679'::text))"
> (I also only just realized that it does this.)
>
> Yet, with the additional Filter (see the Q1 analyze output), it is
> surprising that it estimates Q1 will return more rows than Q2.
>
> FYI, I made a self-contained test case and submitted it as bug #7610.

Did you try increasing the statistics for the column "shipment_id"?

This will probably not make the difference go away, but if the estimate
gets better, it might be good enough for the planner to pick the correct
plan.

Yours,
Laurenz Albe

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
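
[For reference, a minimal sketch of the suggestion above, using the table and column names from the thread; the target value 1000 is only an illustrative choice, not a recommendation from the thread:]

```sql
-- Raise the per-column statistics target for shipment_id.
-- (The server-wide default is default_statistics_target, normally 100;
-- 1000 here is an arbitrary larger sample size for illustration.)
ALTER TABLE shipments ALTER COLUMN shipment_id SET STATISTICS 1000;

-- Re-gather statistics so the planner sees the larger sample.
ANALYZE shipments;

-- Then re-check the row estimates for both forms of the query:
EXPLAIN ANALYZE
SELECT * FROM shipments WHERE shipment_id LIKE '12345678%';
```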