On Tue, 2004-09-14 at 18:08 -0400, Steve Coleman wrote:
> The remote on-again-off-again network distributed file system is exactly
> what CODA is designed for. And it handles laptops in disconnected mode,
> caches files, and merges changes back in on reconnect. Lots of good
> stuff in there, and many lessons learned. You may find slight
> differences in your requirements but it deserves a close look because it
> addresses most of the problems I have read about in this thread so far.

Right, the .pdf mentions Coda (and also, more generally, the idea of using a filesystem). The major downside of a filesystem approach is that filesystems are hard, and in the kernel, and supporting another one isn't trivial. The upside is that you can do some clever things at the filesystem level that are tricky with rsync-type userspace hacks, even rsync-type userspace hacks assisted by LVM.

One of the points of the broad term "stateless Linux", though, is to emphasize that our instantiation mode (whether Coda, NFS, AFS, rsync, live CD, or whatever) is only one tunable element of the architecture; the other elements should remain constant as you tune this one. Of course, we'll want to recommend some specific filesystem or other approach or approaches; I'm not saying we just make it configurable and forget the problem. What I'm saying is that it's important to keep the design of the overall system one level "higher" than how the bits end up on the CPU that execs them.

One thing some of us would like to avoid is two-way sync (merge), since it seems to be impossible to put a reasonable UI on it no matter how cool your underlying technology, and it seems to crank up the complexity of that technology in a big way. If instead we do one-way sync (aka backup) with a very clear and simple user model, it's about as good for the typical desktop user. Of course, nothing in the architecture keeps you from using two-way sync; this is just a matter of what to work on first.
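For concreteness, the one-way case really can be that simple at the mechanical level; a single rsync invocation pushing the local home directory to a backup host covers it (the host and paths here are just placeholders, not part of any proposed design):

  # one-way sync: make the remote copy mirror the local home directory
  # (placeholder backup host and destination path)
  rsync -a --delete ~/ backup.example.com:/backups/$USER/

Havoc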