
Re: Vacuums taking forever :(

Phoenix Kiula <phoenix.kiula@xxxxxxxxx> writes:

> autovacuum_vacuum_cost_delay = 20
> vacuum_cost_delay            = 20

These say to sleep 20ms every few pages.
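
(To spell out the arithmetic: vacuum accumulates a cost for each page it
touches, and once the running total passes vacuum_cost_limit it sleeps for
vacuum_cost_delay milliseconds. With the stock defaults -- these are the
postgresql.conf defaults, not your settings:

    vacuum_cost_limit     = 200   # sleep once this much cost has accumulated
    vacuum_cost_page_miss = 10    # cost charged per page read from disk

that works out to 200/10 = 20 uncached pages, about 160kB, between each 20ms
sleep.)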

> These cron jobs are taking over 35 minutes for a vacuum! What's the
> use of a vacuum if it takes that long, and the DB performance is
> tragic in the meantime?

The fact that vacuum takes 35 minutes really shouldn't be a concern in itself.
As long as it doesn't stop you from vacuuming frequently enough, why do you
care when vacuum finishes? You're not waiting on any results from it. In fact
the point of the above parameters is to make vacuum go *slowly* enough to
avoid causing i/o slowdowns in the rest of the system.

The real question is why your performance is tragic while vacuum is running.
Sleeping 20ms periodically really should be enough to avoid any noticeable
performance impact. Unless your system is already running extremely close to
its maximum throughput and the small additional i/o from vacuum is enough to
tip it over the edge?

Another option is to set the delay to 0, which *will* cause performance to be
tragic, but for as short a time as possible. I don't recommend this approach.
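
(For the record, that's just a session-level setting -- something like the
following, with a placeholder table name, in whatever session the cron job
opens:

    SET vacuum_cost_delay = 0;   -- turn off the throttling for this session
    VACUUM ANALYZE mytable;
)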

You could try raising the delay parameters or decreasing the vacuum_cost_limit
parameter, which would make the "few pages" fewer. That would lessen the i/o
impact at the expense of lengthening vacuum's run time. But if vacuum already
takes 35% of the interval before the next vacuum is needed, that seems like it
might not be an option.
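
That direction would look something like this in postgresql.conf -- the
numbers are purely illustrative:

    vacuum_cost_delay = 50    # sleep longer each time...
    vacuum_cost_limit = 100   # ...after accumulating half the default cost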

Upgrading to 8.3.x would greatly reduce the need for vacuum if your updates
qualify for HOT updates. And 8.4 will lessen the impact of vacuum further.
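
Incidentally, once you're on 8.3 you can check whether updates are actually
taking the HOT path -- pg_stat_user_tables grew an n_tup_hot_upd column for
exactly this:

    SELECT relname, n_tup_upd, n_tup_hot_upd
      FROM pg_stat_user_tables
     ORDER BY n_tup_upd DESC;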

But if you're already running that close to the red-line then you're going to
have problems soon even with less i/o from vacuum. It sounds like you need to
quantify just how much i/o your system can actually handle and how close to
that limit you already are.

Keep in mind that random i/o is a *lot* more expensive than sequential i/o.
Typical consumer drives can handle 60MB/s+ of sequential i/o but only about
1-2MB/s of random i/o! It's easy to misjudge your capacity by basing it on
purely sequential i/o.
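
(The back-of-the-envelope version: a 7200rpm drive manages maybe 100-150
random seeks per second, and at 8kB per Postgres page 150 x 8kB is about
1.2MB/s -- which is where that random-i/o figure comes from.)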


(Hmmmm. That makes me think that raising the vacuum_cost_limit parameter
dramatically and raising the vacuum_cost_delay parameter proportionally might
actually make it run faster with less i/o impact. The defaults process only a
couple dozen uncached pages before sleeping, which probably causes a lot of
random seeks. If you multiply both by 10 then you'll process over a megabyte
of data and then sleep for a long while. Just a thought -- I haven't tried
this on a test box.)
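
(Concretely, the experiment would be something like this -- illustrative
numbers, and as said above untested:

    vacuum_cost_limit = 2000   # 10x the stock default of 200
    vacuum_cost_delay = 200    # 10x; check your version's allowed range first

At the default vacuum_cost_page_miss of 10, that's 2000/10 = 200 pages, or
roughly 1.6MB, read between each sleep.)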

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!
