On Sat, Mar 01, 2008 at 06:30:13PM +0100, Eyvind Bernhardsen wrote:
> Okay, as a git n00b I'm probably on completely the wrong track, but if
> you made a git repository out of a kernel tarball (cd linux-2.6.24 &&
> git init && git add .) and then did a shallow fetch from kernel.org
> into that repository, wouldn't the blobs you added get reused (assuming
> the tarball you downloaded was fairly recent), thus reducing the amount
> of data fetch has to transfer?
Git uses the commit history to determine which objects you might already have. For normal use cases this works quite well, but in this instance it doesn't help at all: you'll end up transferring everything, even the objects you already have. Git only detects that you already have an object after the transfer.

Think about it, though. To do this generally, the client would have to send the hash of every object it has. Perhaps that would be a useful thing to do when git detects that there are no common commits, but it would only really help the case of pulling from multiple repositories that track the same files with separate histories. There are some cases where this happens, such as when changes pass through another revision control system, but it's not normal usage.

David
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
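As a rough illustration of why "send the hash of every object" doesn't scale, here is a back-of-envelope sketch. The object count below is an assumption picked for illustration, not a measured figure from any real repository:

```python
# Back-of-envelope cost of a naive negotiation scheme in which the
# client advertises every object hash it already has.
SHA1_BYTES = 20  # a SHA-1 object id is 20 bytes in binary form

# Assumed object count for a large repository; real numbers vary widely.
objects_in_repo = 8_000_000

naive_upload = objects_in_repo * SHA1_BYTES  # bytes sent before any data
print(f"{naive_upload / 1e6:.0f} MB just to advertise object hashes")
```

Advertising commits instead (what git actually does) keeps the negotiation proportional to history depth rather than to the total number of blobs and trees.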