On Wed, Aug 10, 2005 at 03:31:27PM -0400, Sven Willenberger wrote:
> Right off the bat (if I am interpreting the results of your explain
> analyze correctly) it looks like the planner is basing its decision to
> seqscan as it thinks that it needs to filter over 1 million rows (versus
> the 29,000 rows that actually are pulled). Perhaps increasing stats on
> msgtime and then analyzing the table may help. Depending on your
> hardware, decreasing random_page_cost in your postgresql.conf just a
> touch may help too.

Thanks for the pointers. I tried increasing the stats from the default of
10 to 25 with no change. How high would you bring it?

Also, I've never played with the various cost variables. The database sits
on a RAID5 partition composed of four 15k U320 SCSI drives, with dual Xeon
2.8 CPUs (HT enabled) and 2 GB RAM. I suppose this might actually increase
the cost of fetching a random disk page, as it may well be on another
physical disk and wouldn't be in the read-ahead cache. Any idea as to what
it should be on this sort of system?

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org
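For anyone following along, the two knobs under discussion can be applied roughly like this. This is a sketch, not the poster's actual commands: `mytable` is a stand-in for the real table name, and the specific values (100 for the statistics target, 2.5 for random_page_cost) are illustrative starting points, not recommendations from the thread.

```sql
-- Raise the per-column statistics target well above the 10..25 tried so far,
-- then re-ANALYZE so the planner rebuilds the histogram for msgtime.
ALTER TABLE mytable ALTER COLUMN msgtime SET STATISTICS 100;
ANALYZE mytable;

-- Lower random_page_cost for the current session only (the 8.0-era default
-- was 4), so the change can be tested before editing postgresql.conf.
SET random_page_cost = 2.5;
EXPLAIN ANALYZE SELECT ... ;  -- re-run the problem query and compare plans
```

Testing with a session-level `SET` first avoids committing a cost change that might pessimize other queries on the same box.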