> -----Original Message-----
> From: pgsql-general-owner@xxxxxxxxxxxxxx [mailto:pgsql-general-owner@xxxxxxxxxxxxxx] On Behalf Of Ron Johnson
> Sent: Thursday, July 06, 2006 5:26 PM
> To: Postgres general mailing list
> Subject: Re: [GENERAL] Long term database archival
>
> Agent M wrote:
> > Will postgresql be a viable database in 20 years? Will SQL be used
> > anywhere in 20 years? Are you sure 20 years is your ideal backup
> > duration?
>
> SQL was used 20 years ago, why not 20 years from now?
>
> I can't see needing data from 10 years ago, but you never know.
> Thank $DEITY for microfilm; otherwise, we'd not know a whole lot
> about what happened 150 years ago.

The company I work for does lots of business with OpenVMS systems running RMS, Rdb, and DBMS, and with IBM mainframes running VSAM, IMS, and many other "ancient" database systems. We have customers with Rdb version 4.x (around 15 years old, IIRC) and with RMS and VSAM formats from the 1980s.

Suppose, for instance, that you run a sawmill. The software for your sawmill was written in 1985. In 1991 you did a hardware upgrade to a VAX 4100, but did not upgrade your Rdb version, since it was debugged and performed adequately. Your software completely keeps up with the demands of the sawmill; it even runs payroll. The workers got tired of the RS232 terminals, so in 1999 you did a client/server upgrade using PCs as terminals, but kept your VAX 4100 minicomputer running Rdb with no changes. You upgraded from Xentis to Crystal Reports in 2003, but because the reporting tool goes through OLEDB drivers, you did not have to touch anything on your server.

Sound far-fetched? It's not uncommon in the least. Furthermore, a million-dollar upgrade to a shiny new system and software might not increase productivity at all. It's the data that contains all the value; the hardware becomes obsolete only when it can no longer keep up with business needs.