On Fri, 24 Aug 2007, Jon Smirl wrote:
>
> We're doing something wrong in git-daemon.

Nope. Or rather, it's mostly by design.

> I can clone the tree in five minutes using the http protocol. Using the
> git protocol would take 24hrs if I let it finish.

The http side doesn't actually do any global verification, the way
git-daemon does. So to it, everything is just temporary buffers, and you
don't need any memory at all, really.

git-daemon will create a packfile. That means that it has to generate the
*global* object reachability, and will then optimize the object packing
etc etc. That's a minimum of something like 48 bytes per object for just
the object chains, and the kernel has a *lot* of objects (over half a
million).

In addition to the object chains themselves, the native protocol will
also obviously have to actually *look* at and parse all the tree and
commit objects while it does all this, so while it doesn't necessarily
keep all of those in memory all the time, it will need to access them,
and if you don't have enough memory to cache them, that will add its own
set of IO.

So I haven't checked exactly how much memory you really want to have to
serve big projects, but with some handwavy guesstimate, if you actually
want to do a good job I'd guess that you really want to have at least as
much memory as the size of the largest project you are serving, and
probably add at least 10-20% on top of that.

So for the kernel, at a guess, you'd probably want to have at least 256MB
of RAM to do a half-way good job. 512MB is likely nicer and allows you to
actually cache the stuff over multiple accesses.

But I haven't actually tested. Maybe it might be bearable at 128M.

		Linus
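
To put rough numbers on that "48 bytes per object" figure, here's a
back-of-the-envelope sketch. The struct below is an illustrative
assumption about the per-object bookkeeping (it is not git's actual
"struct object"), sized for a typical 64-bit machine:

#include <stdio.h>

/*
 * Illustrative stand-in for the per-object state git-daemon keeps
 * while computing global reachability for a pack. The layout and
 * field names are assumptions for the arithmetic, not git's real
 * data structure.
 */
struct obj_chain {
	unsigned char sha1[20];	/* object name */
	unsigned flags;		/* reachability/packing state */
	struct obj_chain *next;	/* chain link */
	void *util;		/* per-object scratch pointer */
	unsigned long size;	/* object size hint */
};				/* 48 bytes on a 64-bit machine */

int main(void)
{
	unsigned long nr = 500000;	/* the kernel: over half a million objects */

	printf("per-object: %zu bytes\n", sizeof(struct obj_chain));
	printf("%lu objects: ~%lu MB for the chains alone\n",
	       nr, (unsigned long)(nr * sizeof(struct obj_chain)) >> 20);
	return 0;
}

So the chains alone come to roughly 22MB; the real pressure is parsing
and caching all the tree and commit data on top of that.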
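
The 256MB guess falls out of the same kind of arithmetic. Assuming a
packed kernel repo of roughly 220MB (a guess for illustration, not a
measured number), the "repo size plus 10-20%" rule gives:

#include <stdio.h>

int main(void)
{
	/* Assumed packed size of a full kernel repo, ca. 2007; a guess
	 * for illustration, not a measured value. */
	unsigned long repo_mb = 220;

	printf("repo + 10%%: ~%lu MB\n", repo_mb + repo_mb / 10);	/* ~242 MB */
	printf("repo + 20%%: ~%lu MB\n", repo_mb + repo_mb / 5);	/* ~264 MB */
	return 0;
}

which rounds to the "at least 256MB" ballpark above.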