Re: [WIP RFC 2/5] Documentation: add Packfile URIs design doc

> > +when the server sends the packfile, it MAY send a `packfile-uris` section
> > +directly before the `packfile` section (right after `wanted-refs` if it is
> > +sent) containing HTTP(S) URIs. See protocol-v2.txt for the documentation of
> > +this section.
> > +
> > +Clients then should understand that the returned packfile could be incomplete,
> > +and that it needs to download all the given URIs before the fetch or clone is
> > +complete. Each URI should point to a Git packfile (which may be a thin pack and
> > +which may contain offset deltas).
> > [...]
> > +This is the implementation: a feature, marked experimental, that allows the
> > +server to be configured by one or more `uploadpack.blobPackfileUri=<sha1>
> > +<uri>` entries. Whenever the list of objects to be sent is assembled, a blob
> > +with the given sha1 can be replaced by the given URI. This allows, for example,
> > +servers to delegate serving of large blobs to CDNs.
> 
> Okay, so the server advertisement is not just "<urls>" but "<oid> <url>"
> pairs. More on this later...

Actually, the server advertisement is just "<urls>". (The OID appears
only in the server's configuration; it tells the server which object to
omit when it sends the URI.) But I see that the rest of your comments
still stand.
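
For concreteness: with a server config entry along these lines (OID
placeholder and URI invented for illustration),

  [uploadpack]
      blobPackfileUri = <sha1-of-large-blob> https://cdn.example.com/big.pack

the blob is omitted from the generated pack and only the URI appears in
the packfile-uris section of the response, roughly (exact pkt-line
framing as documented in protocol-v2.txt):

  packfile-uris
  https://cdn.example.com/big.pack
  packfile
  ...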

> More concretely, I'd like to have a setup where a server can just dumbly
> point to some URL that probably has most of the data, without having any
> idea what OIDs are in it. So that e.g. some machine entirely
> disconnected from the server (and with just a regular clone) can
> continually generate an up-to-date-enough packfile.

Thanks for the concrete use case. Server ignorance would work there,
since the client can concisely communicate to the server what objects
it obtained from the CDN (through "have" lines), but it does not seem
to work in the general case (e.g. offloading large blobs, or the CDN
serving a pack suitable for a shallow clone - one containing all
objects referenced by the last few commits, whether or not they changed
in those commits).
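
For the pack-of-recent-history case, the dialog could look roughly like
this (OIDs elided): the client unpacks the CDN pack first, then fetches
as usual with

  command=fetch
  want <remote-tip-oid>
  have <tip-commit-present-in-cdn-pack>
  done

and the server, seeing the "have", sends only what is missing.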

In that setup, maybe the batch job generating the CDN pack could also
inform the server which commit the CDN is prepared to serve.
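
Something like this purely hypothetical knob (it does not exist in this
RFC) might be enough:

  [uploadpack]
      commitPackfileUri = <commit-oid> https://cdn.example.com/recent.pack

i.e. "the CDN pack covers history up to <commit-oid>", letting the
server omit those objects without enumerating them.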

> I don't see how this is compatible with the server needing to send a
> bunch of "<oid> <url>" lines, or why a client "cannot declare that the
> fetch is complete until all URIs have been downloaded as
> packfiles". Can't it fall back on the normal dialog?

As stated above, the server advertisement is just "<url>", but you're
right that the server still needs to know which OIDs each URI covers
(or to have some knowledge like "this pack contains all objects between
this commit and that commit").

I was thinking that there is no normal dialog to be had with this
protocol, since (as above) in the general case the client cannot
concisely communicate what objects it obtained from the CDN - "have"
lines assert reachability from a commit, and there is no similarly
concise way to say "I already have these particular blobs".

> Other thoughts:
> 
>  * If there isn't such a close coordination between git server & CDN, is
>    there a case for having pack *.idx files on the CDN, so clients can
>    inspect them to see if they'd like to download the full referenced
>    pack?

I'm not sure I fully understand this, but off the top of my head: the
.idx file lists which objects a pack contains (and their offsets), not
the relations between those objects, so I don't think the client has
enough information to decide whether it wants to download the
corresponding packfile.
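
To make that concrete, here is a minimal sketch (assuming a version-2
.idx and SHA-1 object names) of everything a client could extract from
an .idx - the object names, plus offsets and CRCs, but no types and no
commit/tree/blob relationships:

  import struct

  def read_idx_oids(path):
      # A v2 .idx is: 4-byte magic, 4-byte version, 256-entry fanout
      # table, then one 20-byte object name per object (sorted),
      # followed by CRCs and pack offsets - no inter-object relations.
      with open(path, 'rb') as f:
          assert f.read(4) == b'\xfftOc', 'not a v2+ .idx file'
          (version,) = struct.unpack('>I', f.read(4))
          assert version == 2, 'only v2 handled in this sketch'
          fanout = struct.unpack('>256I', f.read(256 * 4))
          count = fanout[255]  # last fanout entry = total object count
          return [f.read(20).hex() for _ in range(count)]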

>  * Without the server needing to know enough about the packs to
>    advertise "<oid> <url>" is there a way to e.g. advertise 4x packs to
>    clients:
> 
>        big.pack, last-month.pack, last-week.pack, last-day.pack
> 
>    Or some other optimistic negotiation where clients, even ones just
>    doing regular fetches, can seek to get more up-to-date with one of
>    the more recent packs before doing the first fetch in 3 days?
> 
>    In the past I'd toyed with creating a similar "not quite CDN" setup
>    using git-bundle.

I think such optimistic downloading of packs during a regular fetch
would only work in a partial clone, where "holes" are tolerated. (That
does bring up the possibility of a fetch mode in which we download
potentially incomplete packfiles into a partial repo and then
"complete" the repo through a not-yet-implemented process, but I
haven't thought this through.)
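
(Partial clone already tolerates such holes - after e.g. "git clone
--filter=blob:none", missing blobs are fetched lazily on demand - so
that machinery might be a starting point.)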


