On Fri, Feb 09, 2018 at 11:04:17PM +0100, Ævar Arnfjörð Bjarmason wrote:

> One thing that's not discussed yet, and I know just enough about for it
> to tingle my spidey sense, but not enough to say for sure (CC'd Jeff &
> Brandon who know more) is that this feature once shipped might cause
> higher load on git hosting providers.
>
> This is because people will inevitably use it in popular projects for
> some custom filtering, and because you're continually re-fetching and
> inspecting stuff what used to be a really cheap no-op "pull" most of the
> time is a more expensive negotiation every time before the client
> rejects the refs again, and worse for hosting providers because you have
> bespoke ref fetching strategies you have less odds of being able to
> cache both the negotiation and the pack you serve.

Most of the discussion so far seems to be about "accept this ref or
don't accept this ref", which seems OK. But if you are going to do
custom tweaking like rewriting objects, or making it common to refuse
some refs, then I think things get pretty inefficient for _everybody_.

The negotiation for future fetches uses the existing refs as the
starting point. And if we don't know that we have the objects because
there are no refs pointing at them, they're going to get transferred
again. That's extra load on the server, and extra time for the user
waiting on the network.

I tend to agree with the direction of thinking you outlined: you're
generally better off completing the fetch to a local namespace that
tracks the other side completely, and then manipulating the local refs
as you see fit (e.g., fetching into refs/quarantine, and then migrating
"good" refs over to refs/remotes/origin).

-Peff
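[Editor's note: a minimal sketch of the quarantine-then-promote workflow Peff describes. The remote name "origin" and the refs/quarantine namespace are illustrative choices, not a built-in git mechanism; the acceptance policy is left as a stub.]

```shell
# Fetch every branch from the remote into a quarantine namespace.
# Because local refs now point at the fetched objects, later fetch
# negotiations can advertise them as "haves" and avoid re-transfer.
git fetch origin '+refs/heads/*:refs/quarantine/origin/*'

# Inspect the quarantined refs and promote the "good" ones into the
# normal remote-tracking namespace.
for ref in $(git for-each-ref --format='%(refname)' refs/quarantine/origin); do
    branch=${ref#refs/quarantine/origin/}
    # ... apply whatever acceptance policy you like here ...
    git update-ref "refs/remotes/origin/$branch" "$ref"
done
```

Refs that fail the policy simply stay in refs/quarantine, so the objects remain reachable locally and the next "pull" stays a cheap no-op on the server side.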