Re: [WIP 0/7] CDN offloading of fetch response

On Tue, Feb 26, 2019 at 10:12 AM Ævar Arnfjörð Bjarmason
<avarab@xxxxxxxxx> wrote:
>
> On Tue, Feb 26 2019, Christian Couder wrote:
>
> > On Tue, Feb 26, 2019 at 12:45 AM Jonathan Nieder <jrnieder@xxxxxxxxx> wrote:

> >> But the same thing can happen with redirects, with embedded assets in
> >> web pages, and so on.
> >
> > I don't think it's the same situation, because CDN offloading is
> > likely to be used for large objects that some hosting sites like
> > GitHub, GitLab and BitBucket might not be willing to store for
> > free on their machines. (I think the current limits are around
> > 10GB or 20GB, everything included, per user.)
> >
> > So it's likely that users will want a way to host such incomplete
> > repos on those sites, using CDN offloading to a CDN on another
> > site. And then, if the CDN is not accessible for some reason,
> > things will completely break when users clone.
> >
> > You could say it's the same issue as when a video on a web page is
> > not available, but a web browser can still render the rest of the
> > page in that case. So I don't think it's the same kind of issue.
> >
> > And by the way, that's a reason why I think it's important to
> > consider this in relation to promisor/partial clone remotes:
> > with them it's less of a big deal if the CDN is unavailable,
> > temporarily or not, for some reason.
>
> I think all of that's correct. E.g. you can imagine a CDN where the CDN
> serves literally one blob (not a pack), and the server the rest of the
> trees/commits/blobs.
>
> But for the purposes of reviewing this I think it's better to say that
> we're going to have a limited initial introduction of CDN where those
> more complex cases don't need to be handled.
>
> That can always be added later; as far as I can tell from the
> protocol alteration in the RFC, nothing is closing the door on that,
> since we could always add another capability / protocol extension.

Yeah, it doesn't close the door on further improvements. The issue,
though, is that it doesn't seem to have many benefits over
implementing things with many promisor remotes. The main benefit
seems to be that the secondary server locations are automatically
configured. But looking at what can happen in the real world, this
benefit seems more like a drawback to me, as it potentially creates a
lot of problems.

A many-promisor-remotes solution, where:

- secondary server URLs are first manually specified on the client
side, and then
- some kind of negotiation is implemented so that they can be
selected automatically,

seems better to me than a CDN offloading solution, where:

- the main server first decides the secondary server URLs, and then
- we work around the cases where this creates problems.

In the case of CDN offloading, early client and server
implementations are likely to create problems for many people as long
as most of the workarounds aren't implemented. In the case of many
promisor remotes, on the other hand, the manual solution is always
available as long as the automated selection doesn't work well
enough.



