On Tue, Jan 14, 2020 at 1:57 PM Jonathan Tan <jonathantanmy@xxxxxxxxxx> wrote:
>
> > > Missing promisor objects do not prevent fsck from passing - this is part
> > > of the original design (any packfiles we download from the specifically
> > > designated promisor remote are marked as such, and any objects that the
> > > objects in the packfile refer to are considered OK to be missing).
> >
> > Is there ever a risk that objects in the downloaded packfile come
> > across as deltas against other objects that are missing/excluded, or
> > does the partial clone machinery ensure that doesn't happen? (Because
> > this was certainly the biggest pain-point with my "fake cheap clone"
> > hacks.)
>
> The server may send thin packs during a fetch or clone, but because the
> client runs index-pack (which calculates the hash of every object
> downloaded, necessitating having the full object, which in turn triggers
> fetches of any delta bases), this should not happen.

So if a user does a partial clone, filtering by blob size >= 1M, and if
they have several blobs of size just above and just below that limit,
then the partial clone will work but probably cause them to still
download several blobs above the limit size anyway? (Which, if I'm
understanding correctly, happens because the blobs just smaller than 1M
likely will delta well against the blobs just larger than 1M.)

> But if you create the packfile in some other way and then manually set a
> fake promisor remote (as I perhaps too naively suggested) then the
> mechanism will attempt to fetch missing delta bases, which (I think) is
> not what you want.

Well, it's not optimal, but we're currently just dying with cryptic
errors whenever we have missing delta bases, and this happens whenever
we have an accidental fetch of older branches (although this does have
the nice side effect of notifying us of stray fetches in our CI
scripts).
Your promisor suggestion would at least permit gc's & prunes if we use
it in more places, so should be an improvement. I just wanted to verify
whether this problem with delta bases would remain.

>
> > > Currently, when a missing object is read, it is first fetched (there are
> > > some more details that I can go over if you have any specific
> > > questions). What you're suggesting here is to return a fake blob with
> > > wrong hash - I haven't looked at all the callers of read-object
> > > functions in detail, but I don't think all of them are ready for such a
> > > behavioral change.
> >
> > git-replace already took care of that for you and provides that
> > guarantee, modulo the --no-replace-objects & fsck & prune & fetch &
> > whatnot cases that ignore replace objects as Kaushik mentioned. I
> > took advantage of this to great effect with my "fake cheap clone"
> > hacks. Based in part on your other email where you made a suggestion
> > about promisors, I'm starting to think a pretty good first cut
> > solution might look like the following:
> >
> > * user manually adds a bunch of replace refs to map the unwanted big
> > blobs to something else (e.g. a README about how the files were
> > stripped, or something similar to this)
> > * a partial clone specification that says "exclude objects that are
> > referenced by replace refs"
> > * add a fake promisor to the downloaded promisor pack so that if
> > anyone runs with --no-replace-objects or similar then they get an
> > error saying the specified objects don't exist and can't be
> > downloaded.
> >
> > Anyone see any obvious problems with this?
>
> Looking at the list of commands given in the original email (fsck,
> upload-pack, pack/unpack-objects, prune and index-pack), if we use a
> filter by blob size (instead of the partial clone specification
> suggested), this would satisfy the purposes of fsck and prune only.
>
> If we had a partial clone specification that excludes object referenced
> by replace refs, then upload-pack from this partial repository (and
> pack-objects) would work too.
>
> But there might be non-obvious problems that I haven't thought of.

Cool, sounds like it's at least worth investigating. Maybe Kaushik is
interested, or maybe I consider throwing it on my backlog and coming
back to it in a year or two. :-)
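As a concrete sketch of the replace-refs part of the idea discussed
above (file names, sizes, and the placeholder text are all hypothetical,
and the promisor config mentioned at the end is the manual step Jonathan
suggested, not something git sets up for you):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git config user.name t && git config user.email t@example.com

# Commit a large blob we would like to strip from clones
head -c 2000000 /dev/zero > big.bin
git add big.bin && git commit -qm "add big blob"
big=$(git rev-parse HEAD:big.bin)

# Map the big blob onto a small README-style placeholder blob
placeholder=$(printf 'This file was stripped from the clone.\n' |
              git hash-object -w --stdin)
git replace "$big" "$placeholder"

# Ordinary object reads now see the placeholder contents...
git cat-file blob HEAD:big.bin

# ...while --no-replace-objects still reads the original object,
# so its size is reported as the full 2000000 bytes
git --no-replace-objects cat-file -s "$big"

# The "fake promisor" step would then be manual config along the lines
# of "git config remote.origin.promisor true", so that anyone bypassing
# the replace refs in a repo that lacks the original objects gets a
# loud fetch failure rather than a quiet wrong answer.
```

The last two commands illustrate the split the bullets rely on: normal
reads follow the replacement, while --no-replace-objects (and the fsck/
prune/fetch paths Kaushik mentioned) still reach for the real object.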