Re: VACUUMs take twice as long across all nodes

On Thu, 26 Oct 2006 18:09:37 -0400
Andrew Sullivan <ajs@xxxxxxxxxxxxxxx> wrote:

> On Thu, Oct 26, 2006 at 09:35:56PM +0100, Gavin Hamill wrote:
> > 
> > I'm absolutely certain. The backups run from only one slave, given
> > that it is a full copy of node 1. Our overnight traffic has not
> > increased any, and the nightly backups show that the overall size
> > of the DB has not increased more than usual growth.
> 
> A couple things from your posts:
> 
> 1.	Don't do VACUUM FULL, please.  It takes longer, and blocks
> other things while it's going on, which might mean you're having
> table bloat in various slony-related tables.

I know it takes longer, and I know it blocks. It's never been a problem before.

> 2.	Are your slony logs showing increased time too?  Are your
> targets getting further behind?

Nope, the slaves are keeping up just great - once the vacuums are
finished, all machines are running at about 50%-75% of full load.
 
> 3.	Your backups "from the slave" aren't done with pg_dump,
> right?

Em, they are indeed. I assumed that MVCC would ensure I got a
consistent snapshot from the instant when pg_dump began. Am I wrong?
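
(For reference, the nightly job is essentially a plain pg_dump run
against the slave - something along these lines, with the host, format
and file names purely illustrative:

    pg_dump -h slave1 -U postgres -Fc -f /backups/nightly.dump ourdb

)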

> But I suspect Slony has a role here, too.  I'd look carefully at the
> slony tables -- especially the sl_log and pg_listen things, which
> both are implicated.
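
(For reference, the kind of check being suggested is easy enough to
run - something along these lines, using the relation names as they
appear in pg_class:

    SELECT relname, relpages
      FROM pg_class
     WHERE relname IN ('sl_log_1', 'pg_listener');

)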

Slony is an easy target to point the finger at, so I tried a
little test. I took one of the 'light' slaves (only 10 tables),
stopped its slon daemon, removed it from the load-balancer, and
restarted postgres so there were no active connections.
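
(Roughly speaking, the isolation amounted to this - exact init scripts
and paths vary by install, so treat these as illustrative:

    # stop the slon daemon for this node, then restart postgres so no
    # pooled client connections survive
    /etc/init.d/slon stop
    /etc/init.d/postgresql restart

after having pulled the node out of the load-balancer configuration.)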

With both replication overhead and normal client queries removed, the
machine should be completely free to run at full tilt.

Then I launched a 'vacuum verbose' and saw exactly the same poor
speeds as before, even with vacuum_cost_delay = 0, as it was
previously...
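
(In a psql session that amounts to roughly the following - with the
cost delay shown explicitly, and VACUUM FULL VERBOSE instead if you
want to match the nightly job exactly:

    SET vacuum_cost_delay = 0;
    VACUUM VERBOSE;

)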

2006-10-27 08:37:12 UTC INFO:  vacuuming "public.Allocation"
2006-10-27 08:37:21 UTC INFO:  "Allocation": found 56449 removable, 4989360 nonremovable row versions in 47158 pages
2006-10-27 08:37:21 UTC DETAIL:  0 dead row versions cannot be removed yet.
        Nonremovable row versions range from 64 to 72 bytes long.
        There were 1 unused item pointers.
        Total free space (including removable row versions) is 5960056 bytes.
        13 pages are or will become empty, including 0 at the end of the table.
        5258 pages containing 4282736 free bytes are potential move destinations.
        CPU 0.16s/0.07u sec elapsed 9.55 sec.
2006-10-27 08:44:25 UTC INFO:  index "allocation_pkey" now contains 4989360 row versions in 102198 pages
2006-10-27 08:44:25 UTC DETAIL:  56449 index row versions were removed.
        1371 index pages have been deleted, 1371 are currently reusable.
        CPU 1.02s/0.38u sec elapsed 423.22 sec.

If I've read this correctly, then on an otherwise idle system, it has taken seven minutes to perform 1.4 seconds-worth of actual work. Surely that's nonsense? 
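
To spell that out from the log above:

    heap pass:  CPU 0.16s + 0.07u = 0.23 s of CPU time vs   9.55 s elapsed
    index pass: CPU 1.02s + 0.38u = 1.40 s of CPU time vs 423.22 s elapsed
    wall clock: 08:37:12 -> 08:44:25 = 433 s (about 7 minutes) for this one table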

That would suggest the issue is poor IO; "vmstat 5" output during this run showed nothing like full throughput - maybe averaging 3MB/sec in and out.

I know the peak IO on this machine is much better than that:

joltpg2:/root# dd if=/dev/zero of=/tmp/t bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 8.02106 seconds, 131 MB/s
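
(That's sequential writes; the read side can be checked the same way,
dropping the page cache first so the re-read actually hits the disk -
needs root and a 2.6.16+ kernel:

    echo 3 > /proc/sys/vm/drop_caches
    dd if=/tmp/t of=/dev/null bs=1024k

)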

The test "system" is one CPU's worth (two cores) of a 4 x Opteron 880 machine split up by Xen, and I can confirm that IO on the other Xen partitions was minimal.

I appreciate the time, help and advice people are offering; however, I really don't think Slony is the culprit here.

Cheers,
Gavin.

