Richard Huxton wrote:
Since it's expecting 7914 rows for "kouber", it thinks it will find the
20 rows you want fairly quickly by just looking backward through the
login_attempt_pkey index.
Try increasing the stats on the username column.
ALTER TABLE login_attempt ALTER COLUMN username SET STATISTICS 100;
ANALYZE login_attempt;
You can try different statistics values up to 1000, but there's no
point in setting it higher than necessary.
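If you want to see what the planner currently has to work with, the per-column
target and the gathered stats are visible in the catalogs; a minimal sketch
(standard catalog/view names, nothing schema-specific beyond your table and
column):

-- Per-column statistics target (-1 means "use default_statistics_target")
SELECT attstattarget
FROM pg_attribute
WHERE attrelid = 'login_attempt'::regclass
  AND attname = 'username';

-- What the last ANALYZE gathered for the column
SELECT n_distinct, null_frac, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'login_attempt'
  AND attname = 'username';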
Hmmm, that did the trick, thank you. I set the statistics target of the
column to 300, and now the query plan has changed to:
Limit  (cost=127.65..127.70 rows=20 width=38) (actual time=0.085..0.086 rows=3 loops=1)
  ->  Sort  (cost=127.65..129.93 rows=910 width=38) (actual time=0.084..0.085 rows=3 loops=1)
        Sort Key: login_attempt_sid
        Sort Method:  quicksort  Memory: 25kB
        ->  Bitmap Heap Scan on login_attempt  (cost=7.74..103.44 rows=910 width=38) (actual time=0.075..0.078 rows=3 loops=1)
              Recheck Cond: ((username)::text = 'kouber'::text)
              ->  Bitmap Index Scan on login_attempt_username_idx  (cost=0.00..7.51 rows=910 width=0) (actual time=0.069..0.069 rows=3 loops=1)
                    Index Cond: ((username)::text = 'kouber'::text)
Total runtime: 0.114 ms
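For reference, the plan above corresponds to a query of roughly this shape
(reconstructed from the plan nodes, not quoted from the earlier message; the
real column list may differ):

SELECT *                           -- actual column list unknown
FROM login_attempt
WHERE username = 'kouber'          -- matches the Bitmap Index Scan condition
ORDER BY login_attempt_sid         -- matches the Sort key
LIMIT 20;                          -- matches the Limit node (rows=20)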
Now the planner believes there are 910 rows, which is a bit closer to the
real data:
swing=# select avg(length)
        from (select username, count(*) as length
              from login_attempt group by username) as freq;
         avg
----------------------
 491.6087310427555479
(1 row)
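To sanity-check a single value rather than the average, the estimate and the
exact count can also be compared directly; a quick sketch using the same
username as in the plan above:

-- Planner's row estimate for this value (based on the new statistics)
EXPLAIN SELECT 1 FROM login_attempt WHERE username = 'kouber';

-- Actual number of matching rows
SELECT count(*) FROM login_attempt WHERE username = 'kouber';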
--
Kouber Saparev
http://kouber.saparev.com