On Thu, 2008-01-24 at 10:50 -0600, Les Mikesell wrote:
> James Antill wrote:
>
> >> I think you are missing my point, which is that it would be a huge win
> >> if yum automatically used typical existing caching proxies with no extra
> >> setup on anyone's part, so that any number of people behind them would
> >> get the cached packages without knowing about each other or that they
> >> need to do something special to defeat the random URLs.
> >
> >  HTTP doesn't define a way to do this, much like the Pragma header
> > suggestion is a pretty bad abuse of HTTP ...
>
> It's worked for years in browsers. If you have a stale copy in an
> intermediate cache, a ctrl-refresh pulls a fresh one.

 Sure, but the _user_ has a single URL to work with and can make
semi-intelligent decisions with only themselves to blame. That's not
quite the same situation as a program juggling multiple URLs that all
point at the same data.
 Now, if you wanted to add ETag support in various places, patches would
very likely be accepted, and then any program could make intelligent
decisions.

> > In neither case is "work around HTTP's design in yum" a good solution,
> > IMNSHO.
>
> I'd rather call it "using existing infrastructure and protocols
> intelligently" - instead of cluttering everyone's caches with randomized
> URLs to get duplicate files.

 So you think the best thing to do is remove mirrorlist entirely, and
just rely on proxies ... you are obviously entitled to your opinion, and
you can do that today.

--
James Antill <james.antill@xxxxxxxxxx>
Red Hat
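For readers unfamiliar with the ETag suggestion above: the idea is that a client keeps the validator the server sent with a cached copy and presents it on the next request, so the server (or an intermediate proxy) can answer 304 Not Modified instead of resending the file. A minimal sketch of that revalidation logic follows; the cache structure and function names here are hypothetical illustrations, not yum's actual code.

```python
def conditional_headers(cache_entry):
    """Build request headers that allow a 304 Not Modified response."""
    headers = {}
    if cache_entry.get("etag"):
        # If-None-Match carries the validator the server gave us earlier.
        headers["If-None-Match"] = cache_entry["etag"]
    if cache_entry.get("last_modified"):
        headers["If-Modified-Since"] = cache_entry["last_modified"]
    return headers

def handle_response(cache_entry, status, resp_headers, body=None):
    """Return usable data, reusing the cached copy on a 304."""
    if status == 304:
        # Server confirmed our cached copy is still current.
        return cache_entry["body"]
    if status == 200:
        # Fresh copy; remember its validators for next time.
        cache_entry["etag"] = resp_headers.get("ETag")
        cache_entry["last_modified"] = resp_headers.get("Last-Modified")
        cache_entry["body"] = body
        return body
    raise IOError("unexpected HTTP status %d" % status)
```

With validators like these, the decision about whether a copy is fresh is made by the origin (or proxy), not guessed at by the client, which is what lets any number of clients behind one cache share the same file safely.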
--
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list