James Antill wrote:
>>> In neither case is "work around HTTP's design in yum" a good
>>> solution, IMNSHO.
>> I'd rather call it "using existing infrastructure and protocols
>> intelligently" - instead of cluttering everyone's caches with
>> randomized URLs to get duplicate files.
> So you think the best thing to do is remove mirrorlist entirely, and
> just rely on proxies ...
No, I think the best thing would be a mechanism that makes any set of
yums behind the same proxy use the cached mirrorlist and always pick the
same first choice from it, with appropriate alternates to retry only if
that first choice doesn't respond. This doesn't 'rely' on the proxy, but
it would reduce load on the mirrors in proportion to the number of
machines that happen to share a working proxy, as well as speeding up
all but the first update.
> you are obviously free to your opinion, and you
> can do that today.
I can't do that usefully, since I don't know what URL anyone else who is
so inclined might have chosen - or, for that matter, who else behind the
same proxy might be using Fedora.
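The mechanism described above could be sketched roughly like this - every
client fetches the same mirrorlist URL through the shared proxy (so they all
see the proxy's cached copy), orders the entries deterministically so they
agree on the same first mirror, and falls back to alternates only on failure.
The URL and mirror hosts here are made-up placeholders, and real yum would do
this through its plugin machinery rather than a standalone script:

```python
# Rough sketch of the shared-proxy scheme: deterministic mirror choice
# from a cached mirrorlist, with alternates tried only on failure.
# MIRRORLIST_URL and the mirror hosts are hypothetical placeholders.

import urllib.request

MIRRORLIST_URL = "http://mirrors.example.org/mirrorlist?repo=fedora"  # placeholder

def pick_mirrors(mirrorlist_text):
    """Return mirrors in a stable order (sorted), so every client that
    sees the same cached mirrorlist agrees on the same first choice."""
    mirrors = [line.strip() for line in mirrorlist_text.splitlines()
               if line.strip() and not line.startswith("#")]
    return sorted(mirrors)

def fetch_package(path, mirrors):
    """Try the agreed-on first mirror; move to an alternate only if it
    fails to respond."""
    for base in mirrors:
        try:
            url = base.rstrip("/") + "/" + path
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read()
        except OSError:
            continue  # this mirror is down; try the next alternate
    raise RuntimeError("no mirror responded")
```

Because all requests go through the same caching proxy and every client
derives the same first mirror from the same cached list, repeated downloads
hit the proxy cache instead of generating fresh mirror traffic.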
--
Les Mikesell
lesmikesell@xxxxxxxxx
--
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list