On 15 February 2012, 10:38, Venkat Balaji wrote:
>
> Data loss would be an issue when there is a server crash or pg_xlog crash
> etc. That many pg_xlog files (1000) would contribute to huge data loss
> (data changes not synced to the base are not guaranteed). Of course, this
> is not related to the current situation. Normally we calculate the
> checkpoint completion time, IO pressure, CPU load and the threat of data
> loss when we configure checkpoint_segments.

So you're saying that by using a small number of checkpoint segments you
limit the data loss when the WAL gets corrupted/lost? That's a bit like
buying a Maserati and then not going faster than 10 mph because you might
crash at higher speeds ...

The problem here is that the WAL is usually placed on more reliable drives
(compared to the data files) or a RAID1 array, and as it's just writing
data sequentially, the usage pattern is much less likely to cause
data/drive corruption (compared to data files that need to handle a lot of
random I/O, etc.). So while it is possible the WAL might get corrupted, the
probability of data file corruption is much higher. And the corruption
might easily happen silently during a checkpoint, so there won't be any WAL
segments to replay no matter how many of them you keep ...

And by using a low number of checkpoint segments it actually gets worse,
because it means more frequent checkpoints -> more I/O on the drives ->
more wear on the drives, etc.

If you need to protect yourself against this, you need to keep a WAL
archive (preferably on a separate machine) and/or a hot standby for
failover.

kind regards
Tomas
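
PS: just to illustrate that last point, a rough postgresql.conf sketch of
the direction I mean (the values, the archive host and the path are only
placeholders, not recommendations for any particular workload):

    # fewer, longer checkpoints instead of many small ones
    checkpoint_segments = 32
    checkpoint_timeout = 15min
    checkpoint_completion_target = 0.9

    # ship WAL to a separate machine instead of relying on local pg_xlog
    wal_level = hot_standby
    archive_mode = on
    archive_command = 'rsync -a %p postgres@archivehost:/wal_archive/%f'

The archive_command is deliberately simplistic - in practice you'd want
something that fails loudly when the copy does not succeed, and the same
archive can then feed a hot standby for failover.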