---------- Forwarded message ----------
From: Marco Bizzarri <marco.bizzarri@xxxxxxxxx>
Date: Jul 12, 2006 9:03 PM
Subject: Re: [GENERAL] Long term database archival
To: "Karl O. Pinc" <kop@xxxxxxxx>

Long term archival of electronic data is a BIG problem in the archivist community. I remember, a few years ago, a paper describing the problem of historical (20+ year old) data that was at risk of being lost simply for lack of working hardware to read it.

What I would suggest is to approach the problem by first looking at the experience and research already done on the topic. The topic itself is big, and it is not simply a matter of how you dumped the data. A little exploration in the archivist community could produce some useful results for your problem.

Regards

Marco

On 7/6/06, Karl O. Pinc <kop@xxxxxxxx> wrote:
Hi,

What is the best pg_dump format for long-term database archival? That is, what format is most likely to be restorable into a future PostgreSQL cluster?

Mostly we're interested in dumps done with --data-only, and have preferred the default (-F c) format. But this form is somewhat more opaque than a plain text SQL dump, which is bound to be supported forever "out of the box". Should we want to restore a 20-year-old backup, nobody is going to want to mess around with decoding a "custom" format dump if it does not just load all by itself.

Is the answer different if we're dumping the schema as well as the data?

Thanks.

Karl <kop@xxxxxxxx>
Free Software:  "You don't pay back, you pay forward." -- Robert A. Heinlein
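
For reference, the commands under discussion look roughly like this; the database name "mydb" and the file names are placeholders, not taken from the thread:

    # Plain-text SQL dump (-F p): the most future-proof choice, since it can
    # be replayed later with nothing more than psql ("mydb" is a placeholder).
    pg_dump -F p --data-only mydb > mydb-data.sql
    psql mydb < mydb-data.sql

    # Custom-format dump (-F c): more compact and selective on restore,
    # but it needs a working pg_restore to be readable at all.
    pg_dump -F c -f mydb.dump mydb

    # A custom-format archive can still be converted back to plain SQL,
    # provided a compatible pg_restore is available when you need it.
    pg_restore -f mydb-recovered.sql mydb.dump

The practical difference for archival is that the plain dump depends only on psql (or anything that can read SQL text), while the custom format ties the archive to the continued availability of pg_restore.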
--
Marco Bizzarri
http://notenotturne.blogspot.com/