* Maxim Boguk (maxim.boguk@xxxxxxxxx) wrote:
> On Sun, May 10, 2015 at 12:30 PM, Yuri Budilov <yuri.budilov@xxxxxxxxxxx> wrote:
> > database and transaction log backup compression? not available?
>
> Transaction log backup compression is not available (however, it can
> easily be achieved via external utilities like bzip2).
> Both built-in backup utilities (pg_dump and pg_basebackup) support
> compression.

External utilities can also provide backup compression (eg: pgBackRest, and I believe Barman either has it or is also getting it). In 9.5, we now support compression of full-page images in WAL too.

> > - recovery from hardware or software corruption -
> >
> > Suppose I am running a mission-critical database (which is also relatively
> > large, say > 1TB) and I encounter corruption of some sort (say, due to a
> > hardware or software bug) on individual database pages or a number of
> > pages in a database.
> >
> > How do I recover quickly and without losing any transactions? MS-SQL and
> > Oracle can restore individual pages (or sets of pages) or restore
> > individual database files and then allow me to roll the transaction log
> > forward to bring back every last transaction. It can be done on-line or
> > off-line. How do I achieve the same in PostgreSQL 9.4? One solution I see
> > may be via complete synchronous replication of the database to another
> > server. I am not sure what happens to the corrupt page(s) - does it get
> > transmitted corrupt to the mirror server, so I end up with the same
> > corruption on both databases, or is there some protection against this?
>
> It depends on where the corruption happens; if pages become corrupted due
> to a problem with the physical storage (filesystem), then the replica's
> data should be fine.

Correct, it largely depends on the corruption. PostgreSQL 9.4 does have page-level checksums to help identify any corruption that happened outside of PG.
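To make the compression points above concrete, here is a minimal sketch of how WAL archive compression with bzip2 and the built-in backup compression might look. The archive directory path is an assumption for illustration; treat this as a configuration fragment, not a definitive setup (it needs a running server and archiving enabled):

```shell
# postgresql.conf: compress each WAL segment as it is archived
# (wal_level must be 'archive' or higher, archive_mode = on)
#   archive_command = 'bzip2 -c %p > /mnt/wal_archive/%f.bz2'
#
# wal_compression = on            # 9.5+: compress full-page images in WAL

# pg_dump: custom format with built-in compression (level 0-9)
pg_dump -Fc -Z 9 -f mydb.dump mydb

# pg_basebackup: tar-format base backup, gzip-compressed
pg_basebackup -D /backups/base -Ft -z -P
```

The `%p`/`%f` placeholders are expanded by the server to the segment's path and file name, so each 16MB WAL file lands in the archive already compressed.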
> There is no facility to recover individual database files and/or page
> ranges from a base backup and roll the transaction log forward (not even
> offline).

PostgreSQL certainly supports point-in-time recovery, which you could do off-line and then grab whatever data was lost, but not for an individual file or table at this point. Combined with ZFS snapshots and other technologies, you can make it happen quite quickly though.

> From my practice, using PostgreSQL for terabyte-scale and/or
> mission-critical databases is definitely possible, but it requires very
> careful design and planning (and good hardware).

I'd argue that's true for any database of this type. :)

Thanks!

Stephen
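As a sketch of the point-in-time-recovery workflow mentioned above, assuming WAL segments were archived bzip2-compressed as in the earlier fragment (paths and the target timestamp are hypothetical):

```shell
# Restore the base backup into a fresh data directory, then create
# recovery.conf (pre-9.5 style) in it to drive PITR:
#
#   restore_command = 'bunzip2 -c /mnt/wal_archive/%f.bz2 > %p'
#   recovery_target_time = '2015-05-10 11:00:00'
#
# Then start the server; it replays archived WAL up to the target:
pg_ctl -D /restore/data start
```

Once recovery stops at the target time, you can dump the lost rows from the restored copy and load them back into the production database.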