Hi,

I have a table with some 50 million rows in PG 8.2. The table has indexes on the relevant columns. My problem is that almost everything I do with this table (even very basic selects) is unbearably slow. For example:

    select max(payment_id) from transactions

This takes 161 seconds. The plan looks like this:

    Result  (cost=0.37..0.38 rows=1 width=0) (actual time=184231.636..184231.638 rows=1 loops=1)
      InitPlan
        ->  Limit  (cost=0.00..0.37 rows=1 width=8) (actual time=184231.620..184231.622 rows=1 loops=1)
              ->  Index Scan Backward using trans_payment_id_index on transactions  (cost=0.00..19144690.58 rows=51122691 width=8) (actual time=184231.613..184231.613 rows=1 loops=1)
                    Filter: (payment_id IS NOT NULL)
    Total runtime: 184231.755 ms

As the plan shows, the index on the requested column, payment_id, is being used, but the query still takes quite a lot of time. If I add a where clause to a similar query, it seemingly runs forever, e.g.:

    select min(time) from transactions where payment_id = 67

There are indexes on both the time column (a timestamp with time zone) and the payment_id column (a bigint). About 1 million rows satisfy the condition payment_id = 67. This query takes a totally unrealistic amount of time to execute (it has now been running for more than 30 minutes on a machine with 8GB and 4 cores@xxxxxxx, and it still isn't finished). With mpstat it becomes clear that the query is totally IO-bound (as expected, of course). The machine I'm running this on has a fast RAID that can do about 320 MB/s.

Are these normal execution times for this number of rows and this hardware? Is there anything I can do to speed up these kinds of simple queries on huge tables?

Thanks in advance for all suggestions
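P.S. One idea I have been wondering about, in case it is relevant: a combined index on both columns, so that the where clause and the min() could in principle be answered by a single index scan instead of filtering a million heap rows. A sketch of what I mean (the index name is just illustrative, and I don't know whether the 8.2 planner will actually use it this way):

    -- hypothetical composite index: payment_id first (equality filter),
    -- time second (the column being min'ed)
    create index trans_payment_time_idx on transactions (payment_id, "time");

    -- the slow query again, which such an index might serve directly:
    select min("time") from transactions where payment_id = 67;

Would that be expected to help here, or is the IO bottleneck going to dominate regardless?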