Re: VACUUM ANALYZE extremely slow

Hi. Sorry for being a bit emotional; I was pretty constructive in my
earlier posts (the earlier, the more constructive, if you care to
search), but I am progressively getting pissed off :(

Thanks for the initial tip; running ANALYZE without VACUUM is faster. Are
frequent vacuums even necessary if there are virtually no deletes in
the system and updates are much less frequent than inserts?
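
For the record, what I mean is roughly this (the table name is made up):

  ANALYZE;                 -- refresh planner statistics for all tables
  ANALYZE orders;          -- or just for one frequently-inserted table
  VACUUM ANALYZE orders;   -- the slow variant, which also reclaims dead rows

i.e. the question is whether I can get away with plain ANALYZE most of
the time and only run VACUUM occasionally.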

Now for the answers.
* What other non-default configuration settings do you have?
I played with shared_buffers, setting it between 16k and 32k; ~24k
seems to be the best, but the difference is minimal. work_mem is
256kB, and I increased effective_cache_size to ~700MB (~35% of RAM)
based on a tip from some article.
The max_fsm_* settings are increased too, to accommodate VACUUM
ANALYZE runs.
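
In postgresql.conf terms it looks roughly like this (8.1 takes
shared_buffers and effective_cache_size in 8kB pages and work_mem in
kB; the max_fsm_* numbers below are placeholders, not my exact values):

  shared_buffers = 24576          # ~192MB; tried values between 16k and 32k
  work_mem = 256                  # 256kB
  effective_cache_size = 89600    # ~700MB, about 35% of RAM
  max_fsm_pages = 1000000         # placeholder, raised from the default
  max_fsm_relations = 10000       # placeholder, raised from the default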

* What version of PostgreSQL are you using and on what OS?
8.1 and Debian

* What kind of disks and controllers do you have?
Umm... Unfortunately I do not know, and do not have access to check :)
The development server probably has a fast consumer-grade HDD, not a
RAID array, I suppose.

* My "system has to go down" remark was probably a bit exaggerated;
the system is noticeably slower when the vacuum is running and that is
basically without workload; it will probably become unusable under
load.

* Specific queries that were slow: there are generally no specific
queries; everything runs slowly, mostly due to estimates being grossly
inaccurate (e.g. EXPLAIN estimating a cost of ~800k for a complex
query, while the actual figures from EXPLAIN ANALYZE come out around 5).
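
What I mean by that is comparing the two, roughly (tables and columns
here are made up, not my real query):

  -- planner estimates only
  EXPLAIN
  SELECT c.name, count(*)
  FROM orders o JOIN customers c ON c.id = o.customer_id
  GROUP BY c.name;

  -- actually runs the query and prints real row counts and timings
  -- next to the estimates
  EXPLAIN ANALYZE
  SELECT c.name, count(*)
  FROM orders o JOIN customers c ON c.id = o.customer_id
  GROUP BY c.name;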

I was not able to find any specific bottlenecks in the queries. I
come from an MSSQL (and a bit of MySQL) background, and the only thing
I noticed is that there are a lot of seqscans in places where indexes
should be used (IMO). When I turn seqscans off, the index scans are
ridiculously slow (I have already posted about it), presumably due to
the random access needed to build a big result set; however, the
result set is not in fact big in the real queries, and PostgreSQL
apparently doesn't do some magic that MSSQL does (join order or
something) that makes the join that should use the index narrower and
thus faster.
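
By "turning seqscans off" I mean the usual session-level switch, e.g.
(the query is just an illustration):

  SET enable_seqscan = off;   -- discourage sequential scans for this session
  EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
  SET enable_seqscan = on;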

There are also some weird development-stage-specific bottlenecks,
like adding a column to a 34M-row table and updating it to a default
value (I wasn't able to wait for the end of that query; it probably
runs for days); I think I understand why it is slow based on forum
posts, but again, MSSQL and MySQL do it better.
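
Concretely, the pattern was roughly this (names made up):

  ALTER TABLE orders ADD COLUMN status integer;   -- quick, no table rewrite
  UPDATE orders SET status = 0;                   -- writes a new version of all ~34M rows

and as far as I understand it is the UPDATE, rewriting every row in
one transaction, that takes forever.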

Also, there's a general impression of everything being slow compared
to MySQL on the same server, from seqscans to aggregate operations; I
won't be able to supply concrete evidence though (not yet anyway),
it's just a gut feeling.




