Re: autovacuum disk IO

On Wed, Mar 2, 2016 at 8:45 AM, Scott Marlowe <scott.marlowe@xxxxxxxxx> wrote:
> On Wed, Mar 2, 2016 at 8:25 AM, Artem Tomyuk <admin@xxxxxxxxxxxxxx> wrote:
>> Hi.
>>
>> I've noticed that an autovacuum process ran for more than 10 minutes; during
>> this time Zabbix logged more than 90% disk IO utilization on the db volume.
>>
>> ===========>29237   2016-03-02 15:17:23 EET 00000 [24-1]LOG:  automatic
>> vacuum of table "lb_upr.public._reference32": index scans: 1
>> pages: 0 removed, 263307 remain
>> tuples: 298 removed, 1944753 remain, 0 are dead but not yet removable
>> buffer usage: 67814 hits, 265465 misses, 15647 dirtied
>> avg read rate: 3.183 MB/s, avg write rate: 0.188 MB/s
>> system usage: CPU 5.34s/6.27u sec elapsed 651.57 sec
>>
>> Is it possible to log autovacuum's IO impact during its execution?
>> Is there any way to limit or "nice" the autovacuum process?
>
> I'll assume you're running a fairly recent version of postgresql.
>
> There are a few settings that adjust how hard autovacuum works when
> it's working.
>
> autovacuum_max_workers tells autovacuum how many worker processes to
> vacuum with. Lowering this will limit the impact of autovacuum, but
> generally the default setting of 3 is reasonable on most machines.
>
> autovacuum_vacuum_cost_delay sets how long to wait between internal
> rounds. Raising this makes autovacuum take bigger pauses internally.
> The default of 20ms is usually large enough to keep you out of trouble,
> but feel free to raise it and see if your IO utilization drops.
>
> autovacuum_vacuum_cost_limit sets a limit to how much work to do
> between the pauses set by the cost delay above. Lowering this will
> cause autovac to do less work between pauses.
>
> Most of the time when I'm adjusting these I'm making vacuum more
> aggressive, not less aggressive, because vacuum falling behind is a
> problem on the large, fast production systems I work on. In your case
> you want to watch for when autovacuum IS running and, using a tool
> like vmstat, iostat, or iotop, watch its % utilization. You can
> then adjust cost delay and cost limit to make it less aggressive and
> see if your IO utilization goes down.
>
> Note though that 90% utilization isn't 100%, so it's not likely
> flooding the IO. But if you, say, raise cost delay from 20 to 40ms, it
> might drop to 75% or so. The primary goal here is to arrive at numbers
> that let autovacuum keep up with reclaiming the discarded tuples in
> the database without getting in the way of the workload.
>
> If your workload isn't slowing down, or isn't slowing down very much,
> during autovacuum, then you're OK.

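To make the knobs above concrete, here is a minimal sketch of how they
could be adjusted, assuming PostgreSQL 9.4+ so ALTER SYSTEM is available
(on older versions the same names go straight into postgresql.conf). The
values are only illustrative starting points, not recommendations:

    -- Bigger pause between cost-limited rounds (default 20ms) = gentler IO.
    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '40ms';
    -- Less work per round (default -1, i.e. fall back to vacuum_cost_limit = 200).
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 100;
    -- Log every automatic vacuum, like the entry quoted above (0 = log all runs).
    ALTER SYSTEM SET log_autovacuum_min_duration = 0;
    -- These three only need a reload; autovacuum_max_workers would need a restart.
    SELECT pg_reload_conf();

If only one hot table is the problem, the same cost settings can also be
applied per table, e.g. ALTER TABLE public._reference32 SET
(autovacuum_vacuum_cost_delay = 40), so the rest of the cluster keeps
the defaults.
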
Just to add a point here. If your machine can't keep up with the
production load AND the job of vacuuming, then your IO subsystem is
too slow and needs upgrading. The difference between a pair of
spinning 7200RPM drives and a pair of enterprise-class SSDs (ones
with power-loss protection; consumer SSDs can eat your data on power
loss) can be truly huge. I've seen improvements from a few hundred
transactions per second to thousands of transactions per second from
a simple upgrade like that.
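As a companion to watching iostat/vmstat/iotop from the OS side,
something like this (a rough sketch, assuming 9.2+ where
pg_stat_activity has a query column) shows autovacuum workers from
inside the database, so an IO spike can be matched to a specific table:

    -- Autovacuum workers show up in pg_stat_activity with a query
    -- string that starts with 'autovacuum:'.
    SELECT pid, datname, now() - xact_start AS running_for, query
    FROM pg_stat_activity
    WHERE query LIKE 'autovacuum:%';

Running that in a loop (or from the monitoring system) while iostat -x
reports high utilization makes it easy to tell whether autovacuum, and
which table, is behind the spike.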

