On Mon, Nov 17, 2008 at 9:36 AM, Howard Cole <howardnews@xxxxxxxxxxxxx> wrote:
> Scott Marlowe wrote:
>>
>> The problem is most likely that you're I/O bound. If one query is
>> hitting a table, it can pull in data (sequentially) at 40 to 80
>> megabytes per second. Since most of your queries are small, they don't
>> run into each other much, so to speak. As soon as your big reporting
>> query hits, it's likely hitting the drives for much longer and getting
>> in the way of all the other queries.
>>
>
> Thanks for the input, Scott. You are correct - I am I/O bound, but only
> for the query described. 99% of the time my I/O runs at 3% or less, even
> during peak times; only this one query, which happens approximately 10
> times a day, grinds the system to a halt.
>
> I am considering running this query in the background every couple of
> minutes or so to force the tables/indexes into cache. Once the tables
> are cached the query runs very quickly and there is no significant I/O
> impact. This is a bodge but hopefully it should work.

It's probably simpler to just do:

select count(*) from sometable;

to force it into cache. Buy some more memory and it'll likely stay in
memory without such tricks.

Best of luck on this.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
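
As a concrete sketch of the warm-up idea discussed above (the table name
"report_data", the database "mydb", and the two-minute schedule are
placeholders, not details from the thread), the background query could
simply be the full sequential scan Scott suggests:

    -- A full sequential scan reads every heap page, pulling the table
    -- into the buffer/OS cache. "report_data" is a placeholder name.
    SELECT count(*) FROM report_data;

and to re-run it every couple of minutes in the background, as Howard
describes, a cron entry along these lines would do:

    # Placeholder database and table names; output is discarded since
    # only the I/O side effect matters.
    */2 * * * * psql -d mydb -c 'SELECT count(*) FROM report_data;' > /dev/null
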