On Wed, Apr 13, 2011 at 1:47 PM, George Spelvin <linux@xxxxxxxxxxx> wrote:
> I think the answers are yes, but I have to make a couple of things clear:
>
> * You can *definitely* control repack behaviour.  .keep files are the
>   simplest way to prevent repacking.

Good.

> * Are you talking about hosting only a "bare" repository, or one with
>   the unpacked source tree as well?  If you try to run git commands on
>   a large network-mounted source tree, things can get more than a bit
>   sluggish; git recursively stats the whole tree fairly frequently.
>   (There are ways to prevent that, notably core.ignoreStat, but they
>   make it less friendly.)

Bare.  Developers use local disk for their local repos and working trees.

> * You can clone from a repository mounted on the file system just as
>   easily as you can from a network server.  So there's no need to set
>   up a server if you find it inconvenient.

Are there advantages to using rsync for the initial clone?  Will I get
better restartability if the network is less than 100% reliable?

I do remember trying to host a repository on a DFS file system in the
past, before I understood pack management properly, and I seem to recall
issues with network reliability.

> Indeed, you could easily do everything via DFS.  Give everyone a
> personal "public" repo to push to, which is read-only to everyone else,
> and let the integrator pull from those.

I'll probably use ssh-secured peer-to-peer transfers for publishing.
The main thing I want the DFS-hosted repo for is to provide a single,
always-up, go-to point for the shared tag set.

> Anyway, I hope this helps!

Yep, thank you.

jon.
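
P.S. A few command sketches to make the above concrete.  All paths and
host names below are invented for illustration.

Pinning the bulk of history in packs that later repacks leave alone
(the .keep trick) could look like this, run inside the bare repo:

    cd /dfs/project.git                    # hypothetical DFS mount point
    git repack -a -d                       # consolidate into one big pack
    for p in objects/pack/pack-*.pack; do
        touch "${p%.pack}.keep"            # a .keep beside a pack exempts it
    done

After that, "git repack" and "git gc" only rewrite objects that arrived
after the kept pack was made.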
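
core.ignoreStat, for the non-bare-on-a-slow-mount case, is just a
config switch, but note the trade-off George mentions: git stops
noticing edits until you stage them explicitly:

    git config core.ignoreStat true
    # ...edit files as usual...
    git add path/to/changed/file    # now required; git no longer stats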
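
Cloning from a mounted path needs no daemon at all:

    git clone /dfs/project.git project

As for rsync: a plain "git clone" over a flaky link starts over from
scratch if it dies, whereas rsync resumes partial transfers, so seeding
the initial copy with rsync can help.  A sketch, assuming the repo is
also reachable over ssh:

    rsync -avP host:/dfs/project.git/ seed.git/   # -P resumes partials
    git clone seed.git project                    # local clone is cheap
    cd project
    git remote set-url origin /dfs/project.git    # point at the real repo
    git fetch origin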
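
The personal "public" repo scheme is mostly filesystem permissions plus
one remote entry per developer.  A hypothetical layout:

    # the integrator pulls from each developer's read-only public repo
    git remote add alice /dfs/people/alice/project.git
    git fetch alice

    # a developer publishes over ssh instead of the mount
    git remote add public ssh://host/dfs/people/alice/project.git
    git push public master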
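
And for the always-up shared tag set, everyone can fetch tags from the
one DFS-hosted repo:

    git remote add shared /dfs/project.git
    git fetch --tags shared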