On Fri, 19 Oct 2007, Sam Ravnborg wrote:

> On Fri, Oct 19, 2007 at 12:12:41PM -0400, Nicolas Pitre wrote:
> > This is even more wrong.
> > 
> > Agreed, indexing objects might not be the best description. It probably
> > will become "receiving objects" along with a bandwidth meter.
> 
> The term 'objects' here always confuses me. What is often my first
> thing to check is the number of individual commits being added after
> a git pull. Whether a commit touches one or several files is less
> important (to my way of using git).

Let me unconfuse you.

Git storage is made of, well, objects. You might think that objects are
related to the number of files touched by a set of commits during a pull,
but this is not the case. It is quite possible for a commit touching 100
files to create far fewer new objects than that. Reverting a patch, for
example, merely restores references to older objects already in the
database. The same is true if you move an entire directory around. The
opposite also holds: a single commit can create more new objects than the
number of modified files, depending on the directory depth.

So the number of objects has no exact relationship whatsoever with the
number of commits or files. However, the number of objects has a much more
direct influence on the time needed to perform a fetch, and that is what
we're displaying here. After all, when you issue a pull and wait for it to
complete, you are waiting for X objects to be transferred, not Y commits.
The important metric is therefore measured in "objects". But you're free
to ignore it and only look at the percentage if you prefer.


Nicolas
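
[A quick way to see the files-versus-objects distinction on any repository
is to compare the two counts for a single commit. This is an illustrative
sketch, not something from the mail above; it assumes a repository with at
least two commits and uses only standard git commands:

    # number of paths touched by the most recent commit
    git diff --name-only HEAD^ HEAD | wc -l

    # rough number of objects (commit, trees, blobs) that commit introduces,
    # i.e. objects reachable from HEAD but not referenced by HEAD^'s tree
    git rev-list --objects HEAD^..HEAD | wc -l

The second count is usually higher, because every directory level on the
path to a changed file needs a new tree object in addition to the new blob
and the commit object itself; it can also be lower, for instance when a
commit merely re-points at blobs its parent already references, as when
files are moved around.]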