On Tue, May 28, 2024 at 5:54 PM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>
> Christian Couder <christian.couder@xxxxxxxxx> writes:
>
> > "it would be nice if there was a capability for the client to say
> > that it would like the server to give it information about the
> > promisor that it could use, so that the user doesn't have to pass
> > all the "remote.my_promisor.XXX" config options on the command
> > line."
> >
> > and by saying that this could be added later.
>
> Can such an update happen transparently, or does it need changes to
> the end-user experience? It is of dubious value to leave the initial
> series incomplete if the latter is the case.

I think adding the capability could happen transparently.

I think the capability should be a generic one about the server
suggesting some config options, and perhaps some command line options,
and the client agreeing (or not) to set the config options and use the
command line options that the server suggests.

Of course, the important thing when implementing this capability is to
properly limit the config and command line options (and perhaps their
values) that the client accepts to set and use. But I think it could
be a generic capability that could be extended to other options in the
future. That's why I think it could be a separate patch series.

> >> Without knowing that, it cannot safely decide that it does not
> >> have to send objects that can be obtained from X to C.
> >
> > In the above command C is asking for a partial clone, as it uses a
> > --filter option. This means that C knows very well that it might
> > not get from S all the objects needed for a complete object graph.
>
> Hmph. If C asks for a partial clone and S is willing to be the
> promisor for C, S is essentially saying that it will serve C, on
> demand, any objects that are reachable from any object it served C
> in the past, forever, no? C might not initially get all the objects
> from S, but if it wants them later, S promises to let C have them.
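For context, here is a minimal, self-contained sketch of what a client
has to spell out by hand today, and what the capability above would
let the server suggest instead. The repository names ("server",
"client") and the promisor remote name ("my_promisor") are made up for
this demo:

```shell
#!/bin/sh
# Sketch only: shows the remote.<name>.* options a client currently
# has to pass on the clone command line itself.
set -e
cd "$(mktemp -d)"

# Create a small server repository that allows partial clones.
git init -q server
git -C server -c user.name=demo -c user.email=demo@example.com \
	commit -q --allow-empty -m init
git -C server config uploadpack.allowFilter true

# The client passes every remote.my_promisor.* option by hand; the
# capability discussed above would let the server suggest these.
git clone -q --no-local --filter=blob:none \
	-c remote.my_promisor.promisor=true \
	-c remote.my_promisor.fetch='+refs/heads/*:refs/remotes/my_promisor/*' \
	-c remote.my_promisor.url="file://$PWD/server" \
	server client

# The -c options were copied into the new clone's config.
git -C client config remote.my_promisor.promisor
```

Running this prints "true": the promisor remote configuration ends up
in the clone's config, but only because the user typed it all out.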
This promise is not broken, as S can still get the missing objects
from X and then pass them to C. There is even the following test case
in the patch that shows that it works when uploadpack.missingAction is
unset (and thus defaults to "error", which is the same as how things
currently work):

+test_expect_success "fetch without uploadpack.missingAction=allow-promisor" '
+	git -C server config --unset uploadpack.missingAction &&
+
+	# Clone from server to create a client
+	GIT_NO_LAZY_FETCH=0 git clone -c remote.server2.promisor=true \
+		-c remote.server2.fetch="+refs/heads/*:refs/remotes/server2/*" \
+		-c remote.server2.url="file://$(pwd)/server2" \
+		--no-local --filter="blob:limit=5k" server client &&
+	test_when_finished "rm -rf client" &&
+
+	# Check that the largest object is not missing on the server anymore
+	check_missing_objects server 0 ""
+'

The only difference with the case when uploadpack.missingAction is set
to "allow-promisor" is that then C gets the missing objects directly
from X, without S getting them first. This is why there is:

+	# Check that the largest object is still missing on the server
+	check_missing_objects server 1 "$oid"

at the end of the test case when uploadpack.missingAction is set to
"allow-promisor". This is to check that S didn't get the missing
object (which means that the missing object went directly from X to
C).

So S keeps its promise to let C have any object through S if C wants.
It's just that routing large objects through S (instead of directly
from X) is not a good idea, because it copies the large objects to S
first, which uses up disk space (and memory and CPU when creating
packfiles) on S, while the objects are already on X, where disk space
is probably cheaper (and where less CPU and memory will be used to
send the objects).

> > Again when using a regular partial clone omitting the same set of
> > objects, C also requests some objects that S doesn't have. And
> > this is not considered an issue or something irresponsible.
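For reference, a sketch of the server-side configuration involved, as
it appears in the patch series under discussion. This is a config
fragment only, not something stock Git understands yet, and the exact
option names and values could still change before merge:

```shell
# Server S serves partial clones.
git -C server config uploadpack.allowFilter true

# uploadpack.missingAction, from the patch series (not in stock Git):
#   error          - reject requests involving objects missing on S
#                    (the default, matching current behavior)
#   allow-promisor - let C fetch objects missing on S directly from
#                    S's promisor remote (X), instead of S fetching
#                    them from X first and repacking them for C
git -C server config uploadpack.missingAction allow-promisor

# S's own promisor remote, where the large objects were moved.
git -C server config remote.server2.promisor true
git -C server config remote.server2.url "file://$(pwd)/server2"
```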
> > It already works like this.
>
> "S doesn't have" is because S has pruned objects that it shouldn't
> have, in order to keep the promise it made earlier to C, right? If
> that is the case, I would very much say S is being irresponsible in
> that case.

Yes, S has pruned some objects, but these objects have been moved to X
first, and S has access to X, so S can still get the objects. So I
don't understand why you say S shouldn't have pruned them and that S
is irresponsible. S is still capable of fulfilling the promise it
made.

And even if S were not capable of directly fulfilling the promise it
made, it would still not be clear to me that S is irresponsible, as S
and X can be seen as a single entity from the client's point of view,
and S and X together would still be capable of fulfilling the promise
that was made to C.

When bundle-uri is used, for example, the server is not fulfilling the
promise alone. It needs the bundle server to work properly for the
client to get everything. When using Git LFS, the server is not
fulfilling the promise alone either. For things to work properly, both
the Git server and the LFS server have to work together.

So if it's OK for other features to require an additional server to
fulfill the promise, why is it not OK in the case of S + X?

> > And then C still has the possibility to configure X as a
> > promisor remote and get missing objects from there. So why is it
> > OK when it's done in several steps but not in one?
>
> You are right that S can decide to unilaterally break the promise it
> made C, so this update is not making it any worse than giving users
> a broken implementation of promisor remotes. I wouldn't call it OK,
> though.

I don't understand why you compare this to a "broken" implementation
of promisor remotes. What would a non-broken one that stores large
blobs on a separate server look like, in your opinion? I am really
interested in answers to this question. It's not a rhetorical one.
> If somebody identifies that even without this series, S can lead to
> repository corruption at C by pruning objects it does need to keep
> its promise to C,

Objects are not pruned if you consider S + X instead of just S. It's
the same as when people are using Git LFS or bundle-uri. Nothing is
really pruned on the server side, and no promise is broken. Some
objects are just moved to a separate server that is sometimes called
to provide some objects on top of those the Git server still sends.

> the next action I expect from a healthy project is
> to try coming up with a mechanism to make it less likely that such a
> pruning happens by accident

When the people managing server S decide to move some objects to X and
make S access X, it's not accidental pruning. But maybe you are
talking about the case when uploadpack.missingAction is set to
"allow-any" instead of "allow-promisor"?

> (e.g., by noticing allowAnySHA1InWant as
> a sign that the repository has promised others to serve anything
> that used to be reachable from anything it historically served,
> disabling repack "-d" and instead sending the currently unreachable
> objects to an archived pack, or something equally silly like that).
> It certainly is not to add a new mechanism to make it even easier to
> configure S to break promises it made to C.

This new mechanism has some important benefits. Yes, there are risks,
but it's our opinion at GitLab, with some data from GitLab servers
(see the link I previously sent to a GitLab issue) to back it up, that
the rewards are well worth the added risks.

I agree that ideally a Git server should be able to handle every kind
of object very well, but unfortunately that's not the case. Git LFS
has some drawbacks, and bundle-uri also has some drawbacks, but there
are reasons why they exist. Multiple promisor remotes are a solution
that already exists in Git too. Yes, they also have some drawbacks,
but those are in many ways similar to the drawbacks of bundle-uri and
Git LFS.
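To make the distinction above concrete, here is a hedged sketch of the
two settings, as I understand them from the patch series. This is a
config fragment only; the option is not in stock Git, and the
semantics could change before merge:

```shell
# allow-promisor: S only tolerates missing objects that its configured
# promisor remotes (here, X) can provide. This matches the deliberate
# "objects were moved to X" setup, not accidental pruning.
git -C server config uploadpack.missingAction allow-promisor

# allow-any: S tolerates any missing object, without checking that a
# promisor remote can provide it. This is the riskier setting, since
# it could also hide genuinely lost objects.
git -C server config uploadpack.missingAction allow-any
```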
Yes, this patch series makes multiple promisor remotes easier to use,
but it doesn't fundamentally change the risks associated with using
them.