Re: [WIP RFC 2/5] Documentation: add Packfile URIs design doc

On Tue, Dec 04 2018, brian m. carlson wrote:

> On Mon, Dec 03, 2018 at 03:37:35PM -0800, Jonathan Tan wrote:
>> Signed-off-by: Jonathan Tan <jonathantanmy@xxxxxxxxxx>
>> ---
>>  Documentation/technical/packfile-uri.txt | 83 ++++++++++++++++++++++++
>>  Documentation/technical/protocol-v2.txt  |  6 +-
>>  2 files changed, 88 insertions(+), 1 deletion(-)
>>  create mode 100644 Documentation/technical/packfile-uri.txt
>>
>> diff --git a/Documentation/technical/packfile-uri.txt b/Documentation/technical/packfile-uri.txt
>> new file mode 100644
>> index 0000000000..6535801486
>> --- /dev/null
>> +++ b/Documentation/technical/packfile-uri.txt
>> @@ -0,0 +1,83 @@
>> +Packfile URIs
>> +=============
>> +
>> +This feature allows servers to serve part of their packfile response as URIs.
>> +This allows server designs that improve scalability in bandwidth and CPU usage
>> +(for example, by serving some data through a CDN), and (in the future) provides
>> +some measure of resumability to clients.
>> +
>> +This feature is available only in protocol version 2.
>> +
>> +Protocol
>> +--------
>> +
>> +The server advertises `packfile-uris`.
>> +
>> +If the client replies with the following arguments:
>> +
>> + * packfile-uris
>> + * thin-pack
>> + * ofs-delta
>> +
>> +when the server sends the packfile, it MAY send a `packfile-uris` section
>> +directly before the `packfile` section (right after `wanted-refs` if it is
>> +sent) containing HTTP(S) URIs. See protocol-v2.txt for the documentation of
>> +this section.
>> +
>> +Clients then should understand that the returned packfile could be incomplete,
>> +and that it needs to download all the given URIs before the fetch or clone is
>> +complete. Each URI should point to a Git packfile (which may be a thin pack and
>> +which may contain offset deltas).
>
>
> First, I'd like to see a section (and a bit in the implementation)
> requiring HTTPS if the original protocol is secure (SSH or HTTPS).
> Allowing the server to downgrade to HTTP, even by accident, would be a
> security problem.

Maybe I've misunderstood the design (I'm writing some other follow-up
e-mails in this thread which might clarify things for me), but I don't
see why.

We get the ref advertisement from the server. We don't need to trust the
CDN server or the transport layer. We just download whatever we get from
there and verify that the objects we receive hash to the IDs the server
advertised (SHA-1 today, SHA-256 in the future). It doesn't matter if the
CDN transport itself is insecure.

You can do this offline with git today: you don't need to trust me in
order to trust that a copy of git.git I hand you on a sketchy USB stick
is genuine. Just unpack it, then compare the SHA-1s you get against:

    git ls-remote https://github.com/git/git.git
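
To spell that out (a rough sketch; the USB-stick path and the clone name
below are just placeholders), the check amounts to validating object
integrity locally and then comparing the tips against a ref advertisement
fetched over a channel you do trust:

    # Clone the untrusted copy and verify that every object hashes to its
    # claimed ID; any tampering with the pack contents shows up here.
    git clone --mirror /mnt/sketchy-usb/git.git verify.git
    git -C verify.git fsck --full

    # Then compare the local tips with the trusted ref advertisement; if
    # they match, the history from the USB stick is genuine.
    git -C verify.git show-ref
    git ls-remote https://github.com/git/git.git

That's the same trust model as a CDN-served packfile: the untrusted bytes
only have to hash to the IDs we got over the trusted connection.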

So this is a case similar to Debian's, where packages are distributed
over plain http but validated against signed metadata:
https://whydoesaptnotusehttps.com

> Second, this feature likely should be opt-in for SSH. One issue I've
> seen repeatedly is that people don't want to use HTTPS to fetch things
> when they're using SSH for Git. Many people in corporate environments
> have proxies that break HTTP for non-browser use cases[0], and using SSH
> is the only way that they can make a functional Git connection.

Yeah, there should definitely be accommodations for such clients. Per my
reading, clients can always ignore the CDN and proceed with a normal
negotiation. Isn't that enough, or is something extra needed?
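
As a concrete illustration (the repository URL is made up), a client that
doesn't want CDN offloading would simply not send the `packfile-uris`
argument in its protocol v2 fetch request, and the server then has to send
a complete packfile inline, exactly as today. Packet tracing shows what a
client actually sends:

    # Inspect the v2 fetch request; a client that hasn't opted in to CDN
    # offloading never sends a "packfile-uris" argument, so it gets a
    # self-contained pack from the server.
    GIT_TRACE_PACKET=1 git -c protocol.version=2 \
        clone https://example.com/repo.git 2>&1 | grep -i packfile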

> Third, I think the server needs to be required to both support Range
> headers and never change the content of a URI, so that we can have
> resumable clone implicit in this design. There are some places in the
> world where connections are poor and fetching even the initial packfile
> at once might be a problem. (I've seen such questions on Stack
> Overflow, for example.)

I think this should be a MAY, not a MUST, in RFC 2119 terms. There are
still many users who might want to offload things to a very dumb CDN they
don't control, e.g. a project like Debian that doesn't run its own mirrors
but might still want to offload a 1GB packfile download to some random
university's Debian mirror.

Such a download (over http) will work most of the time. Even if it's not
resumable it still sucks less than no CDN at all, and clients can always
fall back to the server if the CDN breaks, which they should be doing
anyway in case of other sorts of issues.
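
For the CDNs that do happen to support it, resuming doesn't need anything
Git-specific; a plain ranged HTTP request is enough (the URL and pack name
below are made up):

    # Resume a partially downloaded packfile where it left off; curl sends
    # a Range header based on the size of the already-downloaded file.
    curl -C - -O https://cdn.example.org/pack-1234abcd.pack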

> Having said that, I think overall this is a good idea and I'm glad to
> see a proposal for it.
>
> [0] For example, a naughty-word filter may corrupt or block certain byte
> sequences that occur incidentally in the pack stream.


