On Tue, 2004-11-30 at 14:55 -0500, Neal D. Becker wrote:
> Sean Middleditch wrote:
> Are you sure?  What if you:
> 1) duplicate the directory (using hardlinks to files)
> 2) atomic rename directory

I don't believe that actually works.  You can't rename one directory
onto an already existing, non-empty directory - rename() will fail with
ENOTEMPTY.  You'd have to move the original out of the way first,
creating a window for disaster to strike.

Now, you could lock the entire hierarchy, start the move, and if it
fails in the middle (power outage, whatever) have the next process that
attempts to access the DB "fix it."  Something like:

  lock db
  copy(hardlinks) db to db-work
  modify db-work
  rename db-work db-ready
  # begin danger
  rename db db-old
  rename db-ready db
  delete db-old
  # end danger
  unlock db

Now, if at any point between the begin/end danger lines the system
power shuts off or the process doing the modification crashes, a later
process can "fix" the system.  Basically, if it sees a db-ready
directory, it finishes replacing db with it.

Assuming that all works as intended and doesn't have some other race
I'm not seeing, then yes, I was wrong - you *can* atomically modify
multiple files.  Sort of.  Assuming that everything that accesses them
does so using the entire process above.  Modifying even a single file
would require locking the whole DB.  Reading a file would likewise
require it.  That could result in a lot of overhead.  The locking could
be a huge problem for some systems running over NFS.

All in all, I'm fairly sure it's not nearly robust enough - not
compared to a single rename of a single file.

If you're gonna go through all that trouble - deny users the ability to
just edit any of the files directly, and so on - why not just use an
existing, tested, stable database for the backend?  BDB, SQLite,
whatever - they do the same thing the multiple text files do, plus
they're a lot more efficient about it.

-- 
Sean Middleditch <elanthis@xxxxxxxxxxxxxxx>
AwesomePlay Productions, Inc.
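
For the curious, here is a minimal Python sketch of the protocol above,
under some invented assumptions: "db" is a flat directory of regular
files, "db.lock" is the lock file, and the filesystem honors POSIX
rename() and flock() semantics (flock() over NFS is exactly the problem
mentioned above).  This is the steps spelled out, not tested code:

  import os
  import shutil
  import fcntl

  DB, WORK, READY, OLD = "db", "db-work", "db-ready", "db-old"
  LOCK = "db.lock"  # hypothetical lock file name

  def recover():
      # A db-ready directory means a writer finished building the new
      # version but crashed mid-swap; finish the swap on its behalf.
      if os.path.isdir(READY):
          if os.path.isdir(DB):
              os.rename(DB, OLD)
          os.rename(READY, DB)
      if os.path.isdir(OLD):
          shutil.rmtree(OLD)   # leftover from a crash after the swap
      if os.path.isdir(WORK):
          shutil.rmtree(WORK)  # half-built work dir, safe to discard

  def modify_db(apply_changes):
      with open(LOCK, "w") as lockfile:
          fcntl.flock(lockfile, fcntl.LOCK_EX)  # lock db
          recover()  # clean up after any earlier crash
          os.mkdir(WORK)
          for name in os.listdir(DB):  # copy(hardlinks) db to db-work
              os.link(os.path.join(DB, name), os.path.join(WORK, name))
          # NOTE: apply_changes must replace files (write new + rename
          # inside db-work), not write through the hardlinks, since the
          # links share inodes with the live db.
          apply_changes(WORK)  # modify db-work
          os.rename(WORK, READY)
          # begin danger
          os.rename(DB, OLD)
          os.rename(READY, DB)
          shutil.rmtree(OLD)
          # end danger
          # lock released when lockfile is closed

Readers would have to take the lock (LOCK_SH would do) and run
recover() too, which is exactly the "reading a file would likewise
require it" overhead.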
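
And for comparison, a sketch of the last point using SQLite through
Python's sqlite3 module (the database file, table, and keys are
invented for the example): one transaction replaces the whole
lock-copy-rename dance, and SQLite's journal handles crash recovery.

  import sqlite3

  conn = sqlite3.connect("settings.db")  # hypothetical database file
  conn.execute(
      "CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)")

  # Both updates commit atomically or not at all - the multi-file
  # atomicity problem above, handled by the database instead.
  with conn:  # commits on success, rolls back on exception
      conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
                   ("colors", "on"))
      conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
                   ("editor", "vi"))
  conn.close()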