"Santi Béjar" <sbejar@xxxxxxxxx> writes:

> On Nov 23, 2007 5:05 PM, Junio C Hamano <gitster@xxxxxxxxx> wrote:
> ...
>> Maybe I am missing something from the discussion, but what
>> information loss are you referring to?
>
> Because you create an incremental bundle, so all the objects in the
> old bundle will not be in the new bundle. But it can be considered
> the natural behavior of bundles.

That might be natural if you are thinking within the limits of "git
bundle create", but it is not natural for "git push" at all.

I think treating a bundle as if it were a bare repository with a funny
representation wouldn't be so wrong, at least from the user interface
point of view.  IOW, I think it is natural for these:

    $ git push $remote $refspec
    $ git fetch $remote $refspec

to work as "expected" when $remote is actually a local file that is a
bundle, and in fact "git fetch" should already work that way.

What's "expected" for a push?  You push from your repository the
objects needed to complete the LHS of the given $refspec into $remote,
and then update the refs in $remote specified by the $refspec.  There
is no deletion of existing objects from $remote.  That is what's
expected for a push.

So if you want to implement "pushing into a bundle", the
implementation would be:

 * Find the required objects in the existing bundle.  If the bundle
   file does not exist, it might be natural to treat it as if you were
   pushing into an empty but initialized regular repository.  If we
   choose to do so, the set of required objects for a nonexistent
   bundle file is the empty set.

 * Find the recorded heads in the existing bundle.  Add or replace
   them with the RHS of the $refspecs being pushed to come up with the
   new set of heads for the updated bundle.  We would want to perform
   the ordinary "fast-forward" safety check here and reject the push
   as needed.
 * If there are heads in the updated bundle that the pushing
   repository does not have, fetch them (and their required objects)
   first, as they are necessary for the next step.

 * Run this pipeline in the pushing repository to generate a packdata
   stream:

       $ git rev-list --objects <heads in the updated bundle> \
             --not <required objects in the bundle> |
         git pack-objects --stdout

   This packdata stream will be the payload of the updated bundle.

 * The updated bundle will require the same set of objects as the
   bundle did before the update.

This is quite different from the way the other "transports" are
implemented internally to push into regular repositories, but that is
perfectly fine.  What the end user sees will be consistent if you
implement "push into bundle" that way, and that is what matters.

Note: I am not saying that we _should_ allow pushing into a bundle to
update it.  I am just saying that if we were to implement "git push"
into a bundle, it should behave as closely as possible to the other
push transports from the end user's point of view.  If you want the
different "object losing" semantics, "git bundle create" to create a
new bundle is already there for you.  You just shouldn't overload
those different semantics onto "git push", because that would confuse
users without much gain.
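To make the recipe above concrete, here is a rough sketch of such an
update as a shell function.  It uses only real git commands, but the
helper name "update_bundle", its calling convention, and the
header-parsing trick (prerequisites are the "-<sha1>" lines in the
text header of a v2 bundle, before the first blank line) are purely
illustrative, not an existing interface:

```shell
# Sketch: update an existing bundle the "push" way -- keep the old
# bundle's prerequisites, record the new tips, and let "git bundle
# create" regenerate the payload from "<new tips> --not <prereqs>",
# which is the same rev-list | pack-objects pipeline described above.
# Hypothetical helper; "update_bundle" is not a real git command.

update_bundle () {
	old=$1 new=$2
	shift 2				# remaining arguments: refs to record

	# Prerequisites are the "-<sha1>" lines in the text header
	# of a v2 bundle (everything before the first blank line).
	prereqs=$(sed -n '/^$/q; s/^-\([0-9a-f]\{40\}\).*/\1/p' "$old")

	# Exclude each prerequisite, then re-create the bundle with
	# the updated heads; the required object set stays the same.
	for p in $prereqs
	do
		set -- "^$p" "$@"
	done
	git bundle create "$new" "$@"
}
```

Something like "update_bundle old.bundle new.bundle master" would then
produce the updated bundle.  The fast-forward safety check against the
heads recorded in the old bundle would still have to be layered on top
of this before it behaved like a proper push.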