
Re: [pgadmin-support] Best backup strategy for production systems

Another question, please.
I've had to modify pg_hba.conf and postgresql.conf to make pg_basebackup work. max_wal_senders had the value 0, and I changed it to 1 so that pg_basebackup would work. Continuous archiving was working with the value at 0, so I understand that max_wal_senders only needs to be > 0 for base backups, and that continuous archiving doesn't need it? Do I understand correctly?
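For reference, a minimal sketch of the settings involved (the archive path and the pg_hba.conf line are examples, not the real values):

# postgresql.conf
wal_level = archive                     # enough for continuous archiving and base backups
archive_mode = on
archive_command = 'cp %p /archive/%f'   # example path
max_wal_senders = 1                     # lets pg_basebackup open a replication connection

# pg_hba.conf -- pg_basebackup connects as a replication client
local   replication   postgres   peer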
Thanks beforehand.

Cheers...


2014-06-27 13:55 GMT+01:00 Oliver <ofabelo@xxxxxxxxx>:
Thank you very much for your reply.
I've spoken with my boss; the databases aren't that important, so a little data loss is not a problem. So I'm configuring this with continuous archiving plus base backups. Since you are an expert, I would like to ask you some questions.
I only have one server (there is no slave server). I'm doing continuous archiving to another filesystem (on other physical disks), and I think it is working well. One question: how long are the original WAL files kept? 2 days? I only have control over the archived WAL files, that is, over how long I want to keep them, right?
When I do a base backup, is it normal that it doesn't copy the pg_xlog data? Is that because continuous archiving is running?
The command I'm using is this (I would like it to copy all tablespaces, but I'm not sure whether it does):

pg_basebackup -D 27062014 -Ft -z -Z 5 -l "base_backup_27062014" -P

Output:

20290/20290 kB (100%), 1/1 tablespace
NOTICE:  pg_stop_backup complete, all required WAL segments have been archived

I have only the default databases right now (postgres and the templates); is it normal that it reports only 1/1 tablespace? And why does it say that all required WAL segments have been archived, if continuous archiving is enabled? I can see now that it has copied some files, and there is a file named .backup in the archived WAL filesystem.
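That .backup history file marks the point from which a restore of that base backup would need archived WAL; anything older can be removed. A sketch of doing that with the contrib tool pg_archivecleanup (the archive path and file name are examples):

# remove archived WAL segments older than those needed by the given base backup
pg_archivecleanup /archive 000000010000000000000010.00000020.backup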
Thanks beforehand.

Cheers...


2014-06-25 14:49 GMT+01:00 Andreas <maps.on@xxxxxxx>:

Hi
you might also set up a slave server that replicates all write commands as a hot standby.
It's pretty easy to do, and you won't lose any data if your production server goes up in flames.
It won't slow down your system.
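A minimal sketch of such a 9.x-era streaming standby (hostnames, the replication user and the paths are examples):

# primary: postgresql.conf
wal_level = hot_standby
max_wal_senders = 3

# primary: pg_hba.conf -- let the standby connect
host  replication  replicator  192.168.0.2/32  md5

# standby: seed it from the primary
pg_basebackup -h primary-host -U replicator -D /var/lib/pgsql/data

# standby: recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=primary-host user=replicator'

# standby: postgresql.conf -- allow read-only queries while replicating
hot_standby = on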

In case you don't want to run another PC as the slave server, you can run it as a virtual machine on your workstation. Obviously you shouldn't run it as a guest on the db machine itself.   ;)
I had a VBox doing this for some time. It had only 1GB of virtual RAM.

I do hourly full db-dumps as a cron job just to be sure.   :-}
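A cron entry for that could look like this (the backup path is an example; note that % has to be escaped as \% inside crontab):

# hourly pg_dumpall, keeping a rotating set of 24 compressed dumps
0 * * * *  pg_dumpall -U postgres | gzip > /backup/hourly/dump_$(date +\%H).sql.gz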

Every other strategy, like regular dumps or automatic log shipping, has a certain window of time in which all modifications to the db would be lost in case something happens to your server.

PG supports automatic log shipping, but the log segments are 16MB big. Depending on how much write access your db gets, it can take a while before a segment is full and gets shipped out.
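That delay can be capped with archive_timeout, which forces a segment switch after the given interval even if the segment isn't full (the value here is just an example):

# postgresql.conf -- ship a (possibly partial) segment at least every 5 minutes
archive_timeout = 300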

You might also consider using a RAID 1 to store the data directory on.
In my case there are 3 drives: one for the OS and 2 as a software RAID. Again, that is easy to set up and saves the db in case a hard drive dies.
It even speeds up read access to the db a bit.
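On Linux, such a software RAID 1 can be set up roughly like this (device names and the mount point are examples):

# mirror two disks, put a filesystem on the array, and mount it as the data directory
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.ext4 /dev/md0
mount /dev/md0 /var/lib/pgsql/data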



On 17.06.2014 11:51, Oliver wrote:

Hi,
I'm a newbie with PostgreSQL. I've set up my first PostgreSQL instance; it is empty for now, with only the default postgres DB.
It runs on Linux, with 2 filesystems, one for data and another for archiving (I've enabled archiving, as it will be for production).
Could someone recommend a strategy for backups, scripts and so on?
A base backup can be done with the system up (postgres running), can't it?
Would it be ok if I do a base backup each week and an archive backup each day?
As I haven't configured backups (or archive deletion) yet, I've had my first problem: my archiving filesystem (FS) is full, and the archiver process shows "failed" on the last WAL file copy (normal, as the archiving FS is full).
Please recommend what I should do now. Should I create another network FS for base backups and archive backups? Once I have my first base backup, I could then delete the archived files, couldn't I?
My archiving FS has 20GB. I don't understand how, on a system without load (it will be for production, but it has no databases yet, only postgres), it filled the FS in a few days. Is that normal?
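Yes, a base backup can be taken with the server running. The classic way (besides pg_basebackup) looks roughly like this; the paths and the label are examples:

# tell the server a base backup is starting (true = take a fast checkpoint)
psql -c "SELECT pg_start_backup('weekly_base', true);"
# copy the data directory, skipping the WAL (it is covered by archiving)
rsync -a --exclude pg_xlog /var/lib/pgsql/data/ /backup/base/
# finish; the server then archives the remaining required WAL segments
psql -c "SELECT pg_stop_backup();"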
Thanks beforehand.

Cheers...



