> Maybe it's an opportunity to introduce the users to backups.

Yes, we do backups for the user, but the problem with Apple's migration is
that it doesn't happen on a schedule that meshes with the backup schedule.
Our applications have fairly frequently changing data.

> Honestly, though, PostgreSQL doesn't seem to be designed for application
> bundling and embedding, where the user never knows there's a database
> engine present. I'm under the impression that there's no consideration
> of what happens if you move from 32 to 64 bit hosts, big endian to
> little endian, etc; it's expected that you'll dump and reload.

Agreed that PGSQL isn't designed for embedding, but it's actually very
close to being supportable in that kind of use model. The binaries and
database files are nicely contained, the server and libraries can easily
be built as a Universal (i.e. multi-architecture) binary for Macs, and the
server is quite small compared to commercial databases (26 MB for a
complete install as unstripped PPC/Intel binaries). If the data files
themselves were portable or convertible, it would be an almost perfect
solution.

> It's a pity the system cloning/migration tools don't have hooks for
> applications to do pre-migration and post-migration tasks, so you could
> just dump then initdb and reload.

Yes, that's exactly the problem. For the migration, you actually shut down
the old Mac that's the source of the data, boot it in a special FireWire
target disk mode, and connect it to the new Mac as if it were an external
hard disk. As a result, no code can run on the source computer during the
migration.

For our kind of users (often non-technical), it's almost impossible to
have them plan things out or even consider what needs to be done in
advance. I had hoped there would be a way to "rescue" the database, even
if it took a lot of processing...

Chris

-- 
Chris Saldanha
Parliant Corporation
http://www.parliant.com/
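
[Editor's note: the "dump then initdb and reload" hooks discussed above
could be sketched as a shell script along these lines. The paths, the
`myapp` user, and the hook names are hypothetical, not Parliant's actual
layout; the PostgreSQL commands (`pg_dumpall`, `initdb`, `pg_ctl`, `psql`)
are standard.]

```shell
#!/bin/sh
# Hypothetical pre-/post-migration hooks for an app-embedded PostgreSQL.
# PGBIN/PGDATA/DUMP are illustrative paths; adjust for the real bundle.
PGBIN="/Applications/MyApp.app/Contents/Resources/pgsql/bin"
PGDATA="$HOME/Library/Application Support/MyApp/pgdata"
DUMP="$HOME/Library/Application Support/MyApp/premigration.sql"

case "$1" in
  pre-migrate)
    # On the old machine: dump everything to an architecture-neutral
    # SQL file that survives endianness/word-size changes.
    "$PGBIN/pg_dumpall" -U myapp -f "$DUMP"
    ;;
  post-migrate)
    # On the new machine: rebuild the cluster for the new architecture,
    # then reload the dump.
    rm -rf "$PGDATA"
    "$PGBIN/initdb" -D "$PGDATA"
    "$PGBIN/pg_ctl" -D "$PGDATA" -w start
    "$PGBIN/psql" -U myapp -d postgres -f "$DUMP"
    ;;
  *)
    echo "usage: $0 pre-migrate|post-migrate" >&2
    exit 1
    ;;
esac
```

The point of the thread, of course, is that Apple's Migration Assistant
offers no place to run the `pre-migrate` step, since the source machine is
mounted as a passive FireWire disk.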