Sergei Shelukhin wrote:

> * What other non-default configuration settings do you have?
> I played w/ shared_buffers, setting them between 16k and 32k; ~24k
> seems to be the best, but the difference is minimal.  The work_mem
> setting is 256kb, and I increased effective_cache_size to ~700MB
> (~35% of RAM) based on a tip from some article.
> max_fsm_* is increased too, to accommodate vacuum analyze runs.

work_mem = 256 MB is probably too much; you might be filling your RAM
with it, causing the system to swap.  Try decreasing it.  This would be
particularly noticeable under load.  Keep in mind that work_mem is per
sort/hash and per backend, i.e. if you have a query which does 3 sorts
and is being executed by 5 processes in parallel, you will have
3 * 5 * 256 MB = 3840 MB of RAM in active use.

> * My "system has to go down" remark was probably a bit exaggerated;
> the system is noticeably slower when the vacuum is running, and that
> is basically without workload; it will probably become unusable
> under load.

You can set the vacuum_cost_* settings in order to reduce the impact of
vacuum on the available I/O.

> * Specific queries that were slow: there are generally no specific
> queries; everything runs slowly, mostly due to estimates being
> grossly inaccurate (like an 800k cost for a complex query according
> to EXPLAIN, and then an actual cost of 5 according to EXPLAIN
> ANALYZE).

You might want to try increasing the statistics targets for the
problematic columns.  It would be useful to choose one of these queries
and send it along for dissection here on the lists, to investigate the
actual problems.  Misestimations are a problem at times, but most of
the time there are workarounds.

-- 
Alvaro Herrera                  http://www.advogato.org/person/alvherre
"Oh, great altar of passive entertainment, bestow upon me thy
discordant images at such speed as to render linear thought impossible"
(Calvin a la TV)
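
P.S. To put some flesh on the work_mem point: this is how you could
change it, either in the config file or just for one session while
experimenting.  The 4MB figure is only an illustrative placeholder
(pick a value that matches whichever of the 256kb / 256 MB numbers is
the real one), and the unit syntax assumes 8.2 or later:

    # postgresql.conf
    work_mem = 4MB

    -- or per session, without touching the config file:
    SET work_mem = '4MB';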
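
For the vacuum I/O impact, the cost-based delay settings live in
postgresql.conf and look roughly like this; the numbers below are just
a common starting point, not something tuned for your machine:

    vacuum_cost_delay = 20    # ms to sleep once the cost limit is hit;
                              # 0 (the default) disables the throttling
    vacuum_cost_limit = 200   # accumulated page cost that triggers the sleep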
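
And for the statistics targets, it is an ALTER TABLE per column
followed by a fresh ANALYZE; the table and column names here are of
course made up:

    -- raise the sample size for a column the planner misestimates
    ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 100;
    ANALYZE orders;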