Re: Suggestions needed about how to dump/restore a database

On 12/19/06, Arnau <arnaulist@xxxxxxxxxxxxxxxxxx> wrote:
Hi Jeff,

> On Tue, 19 Dec 2006, Arnau wrote:
>
>> I've got a DB in production that is bigger than 2 GB, and dumping it
>> takes more than 12 hours.

That's strange; we dump 20+ GB of data in 2 hours or so.

>> I have a new server to replace this old one
>> where I have to restore the DB's dump. The problem is I can't afford to
>> have the server out of business for so long,

If the two biggest tables are *not critical* for application
availability, you can dump their data out separately
into two SQL files and restore them later.
Once you have dumped out that data, you can drop the two tables
from the production DB before dumping the rest, and see
how long that takes. A rough sketch is below.
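For example, assuming the two statistics tables are called
stats_daily and stats_hourly (made-up names; substitute your own,
along with your host/user/dbname):

  # dump the two big tables separately, one SQL file each
  pg_dump -t public.stats_daily  -h oldhost -U dbuser mydb > stats_daily.sql
  pg_dump -t public.stats_hourly -h oldhost -U dbuser mydb > stats_hourly.sql

  # later, once the new server is up, load them back in
  psql -h newhost -U dbuser mydb < stats_daily.sql
  psql -h newhost -U dbuser mydb < stats_hourly.sql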

pg_dump -t schema.table -h hostname -U user <dbname>

can dump out a specific schema.table.
(The exact options are version dependent; which version are you using, btw?)
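You can check with something like:

  # check the client and server versions
  pg_dump --version
  psql -h hostname -U user -d dbname -c 'SELECT version();'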

It would always be better to find the root cause of why pg_dump is
taking so long on your machine, but in the worst case you could take the
approach you suggested.
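If your pg_dump is new enough to support table exclusion (I believe
-T/--exclude-table only appears in recent versions, hence the version
question above), that approach might look like this, again with
placeholder table names:

  # dump everything except the two big statistics tables
  # (-T/--exclude-table is version dependent; check your pg_dump first)
  pg_dump -T public.stats_daily -T public.stats_hourly \
      -h hostname -U user mydb > everything_else.sql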


>> so I need your advice about
>> how you'd do this dump/restore. The bulk of the data is in two
>> tables (statistics data), so I was thinking of dumping/restoring
>> everything except these two tables, and once the server is running
>> again I'd dump/restore this data. The problem is I don't know
>> exactly how to do this.
