On Jun 9, 2006, at 18:59, Jon Smirl wrote:
> Redhat is looking for a scheme to sync the disk system of their
> stateless Linux client. They were using rsync and now they are
> looking at doing it with LVM. What about using git?
The data model is fine in principle, but git as-is isn't suitable for general backup/sync-like schemes: large (multi-GB) files are not really supported yet. Still, I think the underlying data model, with some modifications to split large files on content-determined boundaries, would be really great for distributed filesystems (a rough sketch of what I mean is in the P.S. below).

Many people using laptops these days connect to different filesystems on their office networks, home networks, digital cameras, and even their PDAs, cellphones and MP3 players. What is commonly described as "synching" really is just a merge between different branches. All the arguments in favor of using a distributed SCM hold here too.

Right now I'm using a hodge-podge of different manual and semi-automated methods to keep my local filesystem, with 1.5M files totalling 90GB, somewhat in synch with various home directories on different remote systems and backup disks. IMO, git is tantalizingly close to being able to handle this; it just needs to get a bit more scalable. You'd probably want a different user interface as well, but all the underlying data structures and merge strategies may be equally valid.

-Geert
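P.S. To make "content-determined boundaries" concrete, here is a minimal sketch in Python, assuming a Rabin-Karp-style rolling hash. None of this is from git; the constants (window size, boundary mask, chunk-size bounds) are purely illustrative:

from collections import deque

WINDOW = 48               # rolling-hash window in bytes
BASE = 257                # polynomial base (illustrative)
MASK = (1 << 13) - 1      # boundary when the low 13 bits hit zero -> ~8 KiB average chunks
MIN_SIZE = 2 * 1024       # suppress pathologically small chunks
MAX_SIZE = 64 * 1024      # force a boundary eventually
POW = pow(BASE, WINDOW - 1, 1 << 32)   # coefficient of the byte leaving the window

def split(data: bytes):
    """Yield chunks of data whose boundaries depend only on nearby
    content, not on absolute file offsets."""
    window = deque()
    h = 0
    start = 0
    for i, byte in enumerate(data):
        if len(window) == WINDOW:
            # slide the window: drop the oldest byte's contribution
            h = (h - window.popleft() * POW) & 0xFFFFFFFF
        window.append(byte)
        h = (h * BASE + byte) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= MIN_SIZE and (h & MASK) == 0) or size >= MAX_SIZE:
            yield data[start:i + 1]
            start = i + 1
            window.clear()
            h = 0
    if start < len(data):
        yield data[start:]      # final partial chunk

A boundary is declared wherever the low bits of the rolling hash come up zero, so with a 13-bit mask the chunks average about 8 KiB no matter how the file is shifted around.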
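Because each boundary depends only on the 48 bytes just before it, an insertion or deletion in the middle of a file rewrites only the chunk or two around the edit; later boundaries land at the same content positions, so unchanged chunks hash to the same object and can be shared between revisions, much the way rsync resynchronizes with its rolling checksum. A quick way to see that, reusing split() from the sketch above:

import hashlib, os

v1 = os.urandom(1 << 20)                        # 1 MiB of random data
v2 = v1[:300_000] + b"edit" + v1[300_000:]      # small insertion in the middle

h1 = {hashlib.sha1(c).hexdigest() for c in split(v1)}
h2 = {hashlib.sha1(c).hexdigest() for c in split(v2)}
print(len(h1 & h2), "of", len(h2), "chunks reused after the edit")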