Re: Catching up Production from Warm Standby after maintenance - Please help


 



We are using v. 8.3.1 at present.  We anticipate a terabyte of data each year starting in November, and I am concerned about what happens maintenance-wise a couple of years down the line.  I think that we won't be able to do vacuuming/reindexing with the machine online and serving users if the database is over a certain size.  Am I wrong?

Our setup allows the users to create and delete ad-hoc tables in their own namespaces (each user has his own schema in addition to some overall schemas for the project).  Since Slony does not automatically handle "create table" and "drop table", I would have to incorporate that into the infrastructure APIs that create and drop tables and sequences.  I think it would be non-trivial to implement a full-on, total replication scenario.  We are using Slony now for a select group of tables, but it takes a separate API call to add a table to replication or remove it.
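
For context, adding one of these ad-hoc tables to replication means our API would have to run a slonik script along these lines (the cluster name, node/set/table ids, and schema names below are placeholders, not our real configuration):

    # sketch only -- a temporary set is created, subscribed, then merged,
    # because tables can't be added directly to an already-subscribed set
    cluster name = ourcluster;
    node 1 admin conninfo = 'dbname=proj host=primary user=slony';
    node 2 admin conninfo = 'dbname=proj host=standby user=slony';

    create set (id = 99, origin = 1, comment = 'temp set for new table');
    set add table (set id = 99, origin = 1, id = 1001,
        fully qualified name = 'someuser.adhoc_results',
        comment = 'ad-hoc user table');
    # sequences need their own "set add sequence" commands as well
    subscribe set (id = 99, provider = 1, receiver = 2, forward = no);
    # in practice we'd wait for the subscription to finish before merging
    merge set (id = 1, add id = 99, origin = 1);

Dropping a table is the mirror image ("set drop table", then the actual drop on each node), so every create/drop path in our APIs would need this wrapped around it.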

What do you do when you have to do maintenance?  Don't you take your primary offline and clean it?  Or is this old-school thinking?  I am coming from a Sybase environment, and previously I was able to use transaction logs to catch up post-maintenance. 

It seems odd to me to have a fast, powerful machine left solely in warm standby recovery mode that cannot be used to alleviate the pressures of DB maintenance.  My systems admin has done nothing but complain about a "wasted" machine - he does not see the value of having the standby.  Of course, if it were Slony'd, we could use it, I suppose.

Thanks for your help,
Jennifer


Date: Tue, 7 Jul 2009 07:33:16 -0400
Subject: Re: Catching up Production from Warm Standby after maintenance - Please help
From: scott.lists@xxxxxxxxxxxxxxxx
To: jenniferm411@xxxxxxxxxxx
CC: scott.marlowe@xxxxxxxxx; pgsql-admin@xxxxxxxxxxxxxx

On Tue, Jul 7, 2009 at 5:12 AM, Jennifer Spencer <jenniferm411@xxxxxxxxxxx> wrote:

>
> If you've moved on, so to speak, with the new primary, you restart the
> old primary, now warm standby, the same way you initially created the
> warm standby.  Issue the start hot backup command to the primary, copy
> over the data dir, and start shipping WAL files to it before you
> start continuous recovery.

If I do that, the primary will not be clean anymore.  It will be as unvacuumed and index-bloated as the warm standby.  Or am I missing something?

   I think Scott's point was that once you have brought the standby 'alive', you have no option but to start over.  Warm standby isn't meant for reindex-type operations: it's a failover mechanism, not a switchover mechanism that lets you move back and forth easily.  Once you cut over to the standby, you have to do a full re-sync of the old primary system.  What you're looking for is a replication system like Slony.
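
   On 8.3 that re-sync is just a fresh base backup taken from whichever box is currently live, roughly like this (paths, host names, and the archive location are only examples):

       -- on the current live server (the old standby you failed over to)
       SELECT pg_start_backup('resync');
       -- from the shell, copy the data directory across, e.g.:
       --   rsync -a --exclude pg_xlog $PGDATA/ oldprimary:/var/lib/pgsql/data/
       SELECT pg_stop_backup();

       # recovery.conf on the machine being rebuilt as the new standby
       restore_command = 'pg_standby /mnt/walarchive %f %p %r'

   Then you keep shipping WAL from the live server's archive_command into that archive directory, and the old primary sits in continuous recovery again until the next failover.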


    Are indexing and vacuuming hurting so much that you can't do them online?   Why not use 'create index concurrently' and set vacuum_cost_delay to help keep these operations from impacting your production system?  What version of PG are you using?
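
    For example, something along these lines keeps the heavy work from locking out users (the table, the index, and the 20ms figure are made up; tune to taste):

        -- rebuild a bloated index without blocking writers, then swap it in
        CREATE INDEX CONCURRENTLY events_created_idx_new ON events (created_at);
        DROP INDEX events_created_idx;
        ALTER INDEX events_created_idx_new RENAME TO events_created_idx;

        -- throttle a manual vacuum so it yields I/O to user queries
        SET vacuum_cost_delay = 20;   -- milliseconds; the default 0 means no throttling
        VACUUM ANALYZE events;

    Autovacuum has its own autovacuum_vacuum_cost_delay setting if you'd rather let it handle the routine work.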

-- Another Scott :-)
   



