
Re: Limit Heap Fetches / Rows Removed by Filter in Index Scans

Hi Victor:
On Fri, Aug 19, 2016 at 7:02 PM, Victor Blomqvist <vb@xxxxxxxx> wrote:
> What I want to avoid is my query visiting the whole 1m rows to get a result,
> because in my real table that can take 100sec. At the same time I want the
> queries that only need to visit 1k rows finish quickly, and the queries that
> visit 100k rows at least get some result back.

You are going to have problems with that. If you just want to cap the
scan at 100k rows and return at most 10 results, my solution works,
probably better as nested selects than as CTEs, but someone more
knowledgeable about the optimizer will need to confirm that (or it
will need testing). The problem is that "the queries that visit 100k
rows at least get some result back" may be false: if you are unlucky,
you may need to visit the whole 1M rows to get the first result. Just
set ap=999 where id=1M and ask for ap>=999 and you have that
degenerate case, which can only be saved by an index on ap (even with
statistics, you would need a full table scan to find the row).
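
For reference, a minimal sketch of the nested-select capping I mean,
reusing the test/id/ap names from earlier in the thread (the 5000
cutoff is just an illustrative value, not from your schema):

    -- Cap the rows the inner scan may visit, then filter outside.
    -- Because the subquery carries a LIMIT, the planner cannot push
    -- the ap condition down into it, so at most 100k rows are read.
    SELECT *
    FROM (SELECT * FROM test ORDER BY id LIMIT 100000) AS capped
    WHERE ap >= 5000
    LIMIT 10;

    -- The degenerate case described above: a single match at the end.
    UPDATE test SET ap = 999 WHERE id = 1000000;
    -- Now a query for ap >= 999 finds nothing in the first 100k rows;
    -- only an index on ap lets that lone row be found cheaply.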

If you are positive some results are in the first 100k rows, then my
method works fine; how fast, you will need to test with the real
data. You can even retry with 10x, 100x, 1000x the real limit until
you have enough results, if you want to time-limit your queries.
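
A rough sketch of that escalation, wrapped in a function (the name
limited_search, the max_cap parameter and the ap >= 5000 condition
are all illustrative assumptions, not tested against your schema):

    -- Try 10x the requested rows, then 100x, 1000x, ... until enough
    -- matches are found or the cap ceiling is reached.
    CREATE FUNCTION limited_search(want int, max_cap int)
    RETURNS SETOF test AS $$
    DECLARE
        cap int := want * 10;
        n   int;
    BEGIN
        LOOP
            -- Count matches within the first "cap" rows by id.
            SELECT count(*) INTO n
            FROM (SELECT ap FROM test ORDER BY id LIMIT cap) t
            WHERE t.ap >= 5000;
            EXIT WHEN n >= want OR cap >= max_cap;
            cap := cap * 10;
        END LOOP;
        -- One final pass at the cap that satisfied (or exhausted) us.
        RETURN QUERY
            SELECT * FROM (SELECT * FROM test ORDER BY id LIMIT cap) t
            WHERE t.ap >= 5000
            LIMIT want;
    END;
    $$ LANGUAGE plpgsql;

Each retry rescans from the start, so the total work stays within a
small constant factor of the final cap; whether that beats one big
scan is something only tests with the real data can tell.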


Francisco Olarte.

