On Wed, 2005-08-10 at 12:58 -0700, Shane wrote:
> On Wed, Aug 10, 2005 at 03:31:27PM -0400, Sven Willenberger wrote:
> > Right off the bat (if I am interpreting the results of your explain
> > analyze correctly), it looks like the planner is basing its decision to
> > seqscan on the belief that it needs to filter over 1 million rows (versus
> > the 29,000 rows that are actually pulled). Perhaps increasing stats on
> > msgtime and then analyzing the table may help. Depending on your
> > hardware, decreasing random_page_cost in your postgresql.conf just a
> > touch may help too.
>
> Thanks for the pointers.
>
> I tried increasing the stats from the default of 10 to 25
> with no change. How high would you bring it? Also, I've
> never played with the various cost variables. The database
> sits on a RAID5 partition composed of 4 15k U320 SCSI
> drives, dual Xeon 2.8 (HT enabled), 2 GB RAM. I suppose this
> might actually increase the cost of fetching a random disk
> page, as it may well be on another physical disk and
> wouldn't be in the readahead cache. Any idea as to what it
> should be on this sort of system?

Try increasing stats to 100 on just the msgtime column rather than raising the default (changing the default only affects newly created columns -- if you did change it, you may want to set it back to 10):

ALTER TABLE seen ALTER msgtime SET STATISTICS 100;

After running that command, analyze the table again and see if that helps. I am assuming the culprit is this particular column, as your index and search criteria are based on it.

The default random_page_cost is, I believe, 4.0; on your system you could probably drop it to 3, possibly lower, and see how that performs.

Sven

---------------------------(end of broadcast)---------------------------
TIP 3: Have you checked our extensive FAQ?

               http://www.postgresql.org/docs/faq
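To put those suggestions together, a rough psql session might look like the following. The table and column names come from the thread; the statistics target of 100 and random_page_cost of 3.0 are just starting points to experiment with, and the final EXPLAIN query is a hypothetical shape (the actual query was not shown in this message):

```sql
-- Raise the statistics target on the column the index and WHERE clause use
ALTER TABLE seen ALTER msgtime SET STATISTICS 100;

-- Refresh the planner's statistics for that table
ANALYZE seen;

-- Optionally inspect what the planner now knows about the column
SELECT attname, n_distinct, null_frac
FROM pg_stats
WHERE tablename = 'seen' AND attname = 'msgtime';

-- Try a lower random_page_cost for this session only, before
-- committing the change to postgresql.conf
SET random_page_cost = 3.0;

-- Re-check the plan; a hypothetical range query on msgtime
EXPLAIN ANALYZE
SELECT count(*) FROM seen WHERE msgtime > now() - interval '1 day';
```

Using SET first keeps the cost change scoped to one session, so it can be compared against the default plan before editing postgresql.conf.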