Re: PG_DUMP backup


 



Josh,

Thank you again for the links you sent to me, really appreciated.
I was actually thinking along those lines for backing up the server. Here is the idea; perhaps one of you can tell me whether it would work.

Backup idea, pick holes please?
Create a script which does the following, starting with:
        SELECT pg_start_backup('label');
                Then tar -cf backup.tar /usr/local/pgsql/data
                Restore the tar file to a secondary DB server
                rsync -av /usr/local/pgsql/data to the remote machine
        SELECT pg_stop_backup();

I think rsync will synchronise the files, and since it copies only deltas it should keep the files in sync. Will this work?
I must bring the backup window to below 24 hours so I can back up daily.
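For what it's worth, the steps above might look something like the script below. This is only a dry-run sketch: the standby host name and the 'nightly' label are placeholders, and by default the function just prints each command so the plan can be reviewed before anything runs.

```shell
#!/bin/sh
# Sketch of the proposed nightly base backup (the standby host name and
# backup label are assumptions, not a tested setup). By default the
# function only prints each command; pass "" as $1 to execute for real.
nightly_base_backup() {
    run="${1-echo}"                     # "echo" = dry run
    pgdata=/usr/local/pgsql/data
    standby=standby.example.com         # hypothetical secondary server

    $run psql -U postgres -c "SELECT pg_start_backup('nightly')"
    # rsync transfers only changed data, so after the first full copy
    # the nightly transfer should be far smaller than a full tar
    $run rsync -av "$pgdata/" "$standby:$pgdata/"
    $run psql -U postgres -c "SELECT pg_stop_backup()"
}

nightly_base_backup    # dry run: prints the three commands
```

One caveat: an online base backup like this is only restorable together with the WAL segments generated between pg_start_backup() and pg_stop_backup(), so archive_command needs to be set up as well.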
The server hardware is old and we will not change it right now.

Do you guys think I have any hope to achieve this? ;-)

Thank you very much

I would welcome ideas and any help.

Really appreciated.

Renato




Renato Oliveira
Systems Administrator
e-mail: renato.oliveira@xxxxxxxxxxx

Tel: +44 (0)1763 260811
Fax: +44 (0)1763 262410
http://www.grant.co.uk/

Grant Instruments (Cambridge) Ltd

Company registered in England, registration number 658133

Registered office address:
29 Station Road,
Shepreth,
CAMBS SG8 6GB
UK

-----Original Message-----


From: Josh Kupershmidt [mailto:schmiddy@xxxxxxxxx]
Sent: 12 February 2010 15:30
To: Renato Oliveira
Cc: pgsql-admin@xxxxxxxxxxxxxx
Subject: Re:  PG_DUMP backup
Importance: High


On Feb 12, 2010, at 4:58 AM, Renato Oliveira wrote:

> Dear all,
>
> I have a server running 8.2.4 and has a database 170GB in size.
> Currently I am backing it up using pg_dump and it takes around 28 hours, sadly.

That's suspiciously slow for a pg_dump alone. I have a ~168 GB database which gets pg_dumped nightly, taking about 2.5 hours, all on 2+ year-old commodity hardware.

> I was asked to check and compare the newly created DUMP file to the live database and compare records.
>

If you really must run this comparison, maybe you can check out "pg_comparator" (I think you would restore first, then use pg_comparator to run the diffs). However, it sounds like your assignment really is more about making sure that your backup server is functional and ready to take over if the master dies. There are easier, and better, ways to establish this than doing a row-by-row comparison of your backup and live server.

> I personally cannot see an easy or quick way of doing this, and even the point in doing so.
> I am already restoring the full database to a separate server and no errors were reported.
>

There's probably a much easier way of ensuring the validity of your backup server without running this diff, but that'll of course depend on your environment and your boss' wishes.

> My question is:
> 1 - Is there a more efficient way of backing up such large database, using pg_dump or any other tool?

The only other ways, aside from PITR (which you rule out), are documented here, but I doubt you'll like them:
http://developer.postgresql.org/pgdocs/postgres/backup-file.html

> 2 - Is there an easy way to compare the live database with the DUMP file just created?

Take another dump, and compare the two dumps? This borders on absurdity, of course.

> Idea:
> Pg_dump to split the file into smaller usable chunks, which could be restored one at a time, is that possible?

You can dump a table at a time, or a few at a time, using pg_dump --table=... I doubt this will speed the restore up, though. If you can upgrade to 8.4, or upgrade the backup server to 8.4, your pg_restore should be faster with parallel restores.
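To illustrate (the database and table names here are hypothetical, and the commands are printed rather than executed so they can be reviewed first):

```shell
#!/bin/sh
# Sketch of a per-table dump plus an 8.4 parallel restore (database and
# table names are hypothetical). Each command is echoed for review;
# remove the echoes to run the plan for real.
per_table_dump_plan() {
    db=mydb           # hypothetical database name
    big=big_table     # hypothetical large table

    echo "pg_dump -Fc --table=$big --file=$big.dump $db"
    echo "pg_dump -Fc --exclude-table=$big --file=rest.dump $db"
    # pg_restore's -j (parallel jobs) option needs 8.4+ and a
    # custom-format (-Fc) archive
    echo "pg_restore -j 4 -d $db $big.dump"
}

per_table_dump_plan
```

The -Fc (custom format) dumps are what make the parallel restore possible, since plain SQL dumps can't be restored with -j.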

Also, I would look into tuning your backup server to make pg_restore as fast as possible. See e.g.
http://wiki.postgresql.org/wiki/Bulk_Loading_and_Restores


Josh




-- 
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

