cluster wrote:
>> The indexes don't contain visibility information, so Postgres has to
>> look up the row on disk to verify it isn't dead.
>
> I guess this fact drastically decreases the performance. :-( The number
> of rows with a random_number will just grow over time, while the number
> of questions with status = 1 will always be somewhat constant at about
> 10.000, or most likely much less.
>
> I could really use any kind of suggestion on how to improve the query
> in order to make it scale better for large data sets. The 6-7000 ms for
> a clean run is really a showstopper. Need to get it below 70 ms somehow.

Here is a suggestion that I have not tried. It might not make sense,
depending on how often you run this query.

Make two tables whose DDL is almost identical. In one, put all the rows
with status = 1, and in the other put all the rows with status != 1.
Most of your other queries would then probably need to combine both
tables, so you might make a hash index on the right fields to keep those
fast. For the status = 1 queries, you would look only at the smaller
table, which would obviously be faster. For the other queries, you would
be stuck with the extra work of the join. You would have to weigh the
overall performance against the performance of this one special query.

--
  .~.  Jean-David Beyer          Registered Linux User 85642.
  /V\  PGP-Key: 9A2FC99A         Registered Machine  241939.
 /( )\ Shrewsbury, New Jersey    http://counter.li.org
 ^^-^^ 16:55:01 up 2 days, 22:43, 0 users, load average: 4.31, 4.32, 4.20
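
P.S. A rough sketch of what I mean, in case it helps. Table and column
names here are made up (your real schema surely differs), so treat this
as an illustration of the idea, not a tested recipe:

```sql
-- Hypothetical split: same columns in both tables, a CHECK constraint
-- documenting which statuses each one holds.
CREATE TABLE questions_active (          -- rows with status = 1
    id            integer PRIMARY KEY,
    random_number integer NOT NULL,
    status        integer NOT NULL CHECK (status = 1)
);

CREATE TABLE questions_inactive (        -- rows with status <> 1
    id            integer PRIMARY KEY,
    random_number integer NOT NULL,
    status        integer NOT NULL CHECK (status <> 1)
);

-- The hot query touches only the small table:
SELECT *
  FROM questions_active
 WHERE random_number >= 500000
 ORDER BY random_number
 LIMIT 1;

-- Queries that need all rows have to combine the two tables,
-- e.g. through a view:
CREATE VIEW questions AS
    SELECT * FROM questions_active
    UNION ALL
    SELECT * FROM questions_inactive;
```

Rows that change status would have to be moved from one table to the
other, which is the price you pay for the fast status = 1 lookups.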