
Re: autovac hung/blocked


 



On Nov 14, 2006, at 1:02 PM, Ed L. wrote:
On Tuesday November 14 2006 12:56 pm, Jim C. Nasby wrote:
You don't have the vacuum cost delay settings set unreasonably
high, do you?

I'm not sure.  Here's what we're running:

#vacuum_cost_delay = 0                  # 0-1000 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 0-10000 credits
autovacuum = on                         # enable autovacuum subprocess?
#autovacuum_naptime = 60                # time between autovacuum runs, in secs
autovacuum_naptime = 600                # changed by CW 9/11/06 to minimize
                                        # interference with application
autovacuum_vacuum_threshold = 1000      # min # of tuple updates before
                                        # vacuum
autovacuum_analyze_threshold = 500      # min # of tuple updates before
                                        # analyze
autovacuum_vacuum_scale_factor = 0.4    # fraction of rel size before
                                        # vacuum
autovacuum_analyze_scale_factor = 0.2   # fraction of rel size before
                                        # analyze
autovacuum_vacuum_cost_delay = 500      # default vacuum cost delay for
                                        # vacuum_cost_delay
autovacuum_vacuum_cost_limit = 200      # default vacuum cost limit for
                                        # vacuum_cost_limit

The table on which it seems to be struggling is updated
many times per second; it has 7.2M rows and is over 15GB in total.

I'm trying to figure out how often it would be napping
on that...?  Maybe we're just seeing a very conservative
vacuum on a large table...?

Well, it's going to take some time to vacuum 15G, but it appears there's also confusion about what some of these settings do.

_naptime simply controls how often autovacuum checks to see if there's work to do. There's really no reason at all to set it to 10 minutes.

If you're worried about the impact vacuum has on your application, you want to be using the *_cost_* settings. Those work by incrementing a cost counter for every page that vacuum touches (vacuum_cost_page*). Once that counter hits vacuum_cost_limit, vacuum will stop what it's doing and sleep for vacuum_cost_delay milliseconds.
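To make the accounting concrete, here is a minimal Python sketch of that mechanism (the constant names mirror the GUC settings; the function and its page-outcome inputs are illustrative, not the actual server internals):

```python
import time

# Per-page costs and limits, named after the corresponding GUCs.
VACUUM_COST_PAGE_HIT = 1     # page found in shared buffers
VACUUM_COST_PAGE_MISS = 10   # page had to be read from disk
VACUUM_COST_PAGE_DIRTY = 20  # page dirtied by vacuum
VACUUM_COST_LIMIT = 200      # sleep once accumulated cost reaches this
VACUUM_COST_DELAY_MS = 20    # how long to sleep, in milliseconds

def vacuum_pages(pages):
    """pages: iterable of 'hit' | 'miss' | 'dirty' outcomes.
    Returns how many times the (hypothetical) vacuum slept."""
    cost = {'hit': VACUUM_COST_PAGE_HIT,
            'miss': VACUUM_COST_PAGE_MISS,
            'dirty': VACUUM_COST_PAGE_DIRTY}
    balance = 0
    sleeps = 0
    for outcome in pages:
        balance += cost[outcome]
        if balance >= VACUUM_COST_LIMIT:
            # Cost limit hit: pause to throttle I/O, then reset the counter.
            time.sleep(VACUUM_COST_DELAY_MS / 1000.0)
            sleeps += 1
            balance = 0
    return sleeps

# 100 dirtied pages at cost 20 each = 2000 credits -> 10 sleeps of 20 ms
```

With a cost_delay of 500 ms (as in the config above), each of those sleeps is 25x longer, which is why a large table can take so long to finish.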

So in broad terms, if you want to reduce the impact of vacuum/autovacuum, you should increase vacuum_cost_delay/autovacuum_vacuum_cost_delay (20 ms seems to be a decent starting point). If you monitor disk activity, you can adjust things so that you're at 90% or so of your drive capability with vacuum and a normal workload running. Depending on how much of your data is normally in cache, you might want to increase both vacuum_cost_page_dirty and the cost_limit.
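As a concrete starting point for the config shown earlier (values are illustrative, not prescriptive; tune against your own disk monitoring):

```
autovacuum_vacuum_cost_delay = 20       # ms; down from the 500 set above
autovacuum_vacuum_cost_limit = 200     # leave at the default to start
#vacuum_cost_page_dirty = 20            # consider raising if most data is cached
```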
--
Jim Nasby                                            jim@xxxxxxxxx
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)



