Jon Nelson <jnelson+pgsql@xxxxxxxxxxx> writes:
> On Fri, Nov 19, 2010 at 12:14 PM, Tom Lane <tgl@xxxxxxxxxxxxx> wrote:
>> Hard to comment about this with such an incomplete view of the
>> situation --- in particular, data types would be a critical factor,
>> and I also wonder if you're admitting to all the columns involved.

> Here is an example that, while super ugly, does show the problem:

Hm.  In the UNION case, we have a measured average width for the inet
column from ANALYZE for each table, and we just use that.  In the
sub-select case, it seems to be falling back to a default estimate for
the datatype, which surprises me a bit --- at least in HEAD it seems
like it should be smarter.  I'll go look at that.

As for the rowcount estimates, these aren't the same query, so there's
no reason for them to be the same.  In the UNION case, the planner is
basically taking the pessimistic assumption that there are no duplicate
rows; given the lack of cross-column stats, there's no way to be much
smarter.  In the GROUP BY case, the question is how many distinct
values of 'a' you suppose there are.  That is also pretty hard to be
rigorous about --- we have an idea of it for each table, but no idea
how many cross-table duplications there are.  The 200 is just a default
estimate used when the planner has no idea.  We could probably come up
with a better, though still not rigorous, estimate, but nobody's worked
on that.

			regards, tom lane
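
For anyone who wants to see the width-estimate difference firsthand,
here is a minimal sketch.  The table names, column name, and data are
hypothetical stand-ins (one plausible shape of the two queries being
compared), not taken from Jon's report:

    -- Hypothetical setup: two tables with an inet column, analyzed
    -- so that a per-column average width has been recorded
    -- (visible as pg_stats.avg_width).
    CREATE TABLE t1 (a inet);
    CREATE TABLE t2 (a inet);
    INSERT INTO t1 SELECT ('10.0.' || g || '.1')::inet
      FROM generate_series(1, 254) AS g;
    INSERT INTO t2 SELECT ('10.1.' || g || '.1')::inet
      FROM generate_series(1, 254) AS g;
    ANALYZE t1;
    ANALYZE t2;

    -- UNION form: the planner can use each table's measured average
    -- width for "a" in its cost/width estimates.
    EXPLAIN SELECT a FROM t1 UNION SELECT a FROM t2;

    -- Sub-select + GROUP BY form: here the width can fall back to
    -- the datatype's default estimate instead.
    EXPLAIN SELECT a FROM
      (SELECT a FROM t1 UNION ALL SELECT a FROM t2) AS s
    GROUP BY a;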
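
The "no duplicate rows" assumption for the UNION case can be checked
against the inputs' estimated sizes.  Under the hypothetical setup
above, the expected arithmetic looks like this (exact plan node names
vary by version):

    -- Each input is estimated at ~254 rows, so with the pessimistic
    -- no-duplicates assumption the UNION's output estimate should be
    -- close to 254 + 254 = 508 rows; compare reltuples against the
    -- row estimate on the plan's top Unique/HashAggregate node.
    SELECT relname, reltuples FROM pg_class
     WHERE relname IN ('t1', 't2');
    EXPLAIN SELECT a FROM t1 UNION SELECT a FROM t2;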
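
The 200 mentioned above is PostgreSQL's stock guess for the number of
distinct values when the planner has no statistics to go on
(DEFAULT_NUM_DISTINCT in the source).  A quick way to see it surface,
using a hypothetical table that has never been analyzed:

    -- Hypothetical table with no statistics at all:
    CREATE TABLE t3 (a inet);

    -- With no ndistinct information available, the aggregate node's
    -- row estimate in the plan falls back to the default of 200.
    EXPLAIN SELECT a FROM t3 GROUP BY a;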