Wouldn't you run into driver problems if you tried to restore a 20-year-old image? After all, you probably won't be using the same hardware in 20 years...

-----Original Message-----
From: pgsql-general-owner@xxxxxxxxxxxxxx [mailto:pgsql-general-owner@xxxxxxxxxxxxxx] On Behalf Of Jan Wieck
Sent: Wednesday, July 12, 2006 9:26 AM
To: Karl O. Pinc
Cc: Florian G. Pflug; pgsql-general@xxxxxxxxxxxxxx; thm@xxxxxxxx
Subject: Re: [GENERAL] Long term database archival

On 7/6/2006 8:03 PM, Karl O. Pinc wrote:
> On 07/06/2006 06:14:39 PM, Florian G. Pflug wrote:
>> Karl O. Pinc wrote:
>>> Hi,
>>>
>>> What is the best pg_dump format for long-term database
>>> archival? That is, what format is most likely to
>>> be able to be restored into a future PostgreSQL
>>> cluster.
>
>> Anyway, 20 years is a _long_, _long_ time.
>
> Yes, but our data goes back over 30 years now
> and is never deleted, only added to, and I
> recently had occasion to want to look at a
> backup from 1994-ish. So, yeah, we probably do
> really want backups for that long. They
> probably won't get used, but we'll feel better.

The best way is to back up more than just the data. With today's VM technology it should be easy enough to back up a virtual disk that contains a full OS with everything installed, for every major Postgres release. Note that you would have trouble configuring and compiling a Postgres 4.2 these days, because you'd first need to get some seriously old tools running (like bmake). And 4.2 is only what, 12 years old? That way, you can be sure that you can actually load the data into the right DB version.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@xxxxxxxxx #

---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings
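
[For the pg_dump side of Karl's original question, a minimal sketch of one common archival approach: a plain-text (SQL) dump with a checksum stored alongside it. The database name "mydb" and the file names are assumptions for illustration, and a running PostgreSQL server is presumed.]

```shell
# Sketch only: assumes a reachable server and a database named "mydb".
# Plain-text SQL is the pg_dump output format least tied to the
# internals of any one server version.
pg_dump --format=plain --no-owner mydb > mydb-archive.sql

# Store a checksum next to the dump so media decay or corruption
# can be detected years later.
sha256sum mydb-archive.sql > mydb-archive.sql.sha256

# To verify a retrieved copy later:
#   sha256sum -c mydb-archive.sql.sha256
```

The checksum does not make the dump restorable, of course; it only tells you whether the bytes you archived are the bytes you got back, which is a separate concern from the tool-availability problem Jan describes.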