James Antill wrote:
> Again, you are assuming that "mirrorlists" is a static set of data
> in the repo. ... this is _not true_.
Isn't this specific to each particular distro/repo? Which one(s) are you
talking about?
> It is absolutely fine for two machines on the same network, using the
> same proxy, to get two completely different "mirrorlists" (or to have
> some of the same data in a different order).
Why not permit this to be cached for some reasonable length of time, so
that all the machines behind my cache see the same version (because the
proxy will only ask once)?
> Even with the move to metalink data, we can't make a synchronized DB
> out of the data we have. And I'm sure I've explained exactly the above
> to you before.
I don't see how that is relevant to repeatable behavior behind a single
caching proxy.
> As I'm _sure_ you know, MirrorManager has options to allow the user
> to pick a "best" mirror for IP ranges they own.
If it can do that, it could provide repeatable behavior on its own.
Hash the source IP into ranges, give out the same mirror order to
everyone in the same range.
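The per-range determinism being suggested fits in a few lines. A minimal sketch, assuming a static mirror list and /24 ranges (the mirror URLs and range size are my own assumptions, not how MirrorManager actually works):

```python
# Hypothetical sketch: hash the client's source IP into a range and hand
# every client in that range the same mirror ordering.
import hashlib
import ipaddress

# Invented mirror URLs for illustration.
MIRRORS = [
    "http://mirror-a.example.org/fedora/",
    "http://mirror-b.example.org/fedora/",
    "http://mirror-c.example.org/fedora/",
]

def mirror_order(source_ip: str, prefix: int = 24) -> list:
    """Return a mirror list that is identical for every IP in the same
    /prefix range, but varies across ranges to spread the load."""
    net = ipaddress.ip_network(f"{source_ip}/{prefix}", strict=False)
    digest = hashlib.sha256(str(net.network_address).encode()).digest()
    # Rotate the static list by a value derived from the range hash:
    # stable within a range, different between ranges.
    offset = int.from_bytes(digest[:4], "big") % len(MIRRORS)
    return MIRRORS[offset:] + MIRRORS[:offset]
```

Two machines behind one proxy share a range, so they get byte-identical mirrorlists and the cached copy is valid for both.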
If you would permit caching to work the way it is intended, distros
probably wouldn't need all those mirrors anyway and other people
wouldn't have had to invent a dozen different ways to work around what
yum does when updating multiple machines.
> Sure, scaling out from a single point of reference is very easy in
> HTTP ... we/Fedora/CentOS/etc. are just too dumb to do it. As are all
> of Akamai's customers.
> Feel free to enlighten us/Fedora/CentOS/etc.
Starting from scratch, I'd have required mirrors to use the same
relative locations and returned a bunch of IP addresses in DNS (possibly
with an intelligent handler like the F5 GTM), the way every other
large-scale HTTP distribution works. CentOS 3 worked just great that way.
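From the client side, that scheme amounts to resolving one well-known name and appending the shared relative path to whichever address answers. A rough sketch (the hostname and path are invented for illustration):

```python
# Hypothetical sketch: DNS-based mirror selection with a common layout.
# Every mirror serves the same relative paths, so any A record works.
import socket

def mirror_urls(hostname: str, relpath: str) -> list:
    """Resolve the distribution hostname to all of its A records and
    build one candidate URL per address; clients try them in order."""
    _, _, addrs = socket.gethostbyname_ex(hostname)
    return [f"http://{ip}/{relpath}" for ip in addrs]
```

Round-robin DNS (or a smarter handler) then spreads load by varying which addresses come back, while every client still fetches identical, cacheable paths.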
--
Les Mikesell
lesmikesell@xxxxxxxxx
_______________________________________________
Yum mailing list
Yum@xxxxxxxxxxxxxxxxx
http://lists.baseurl.org/mailman/listinfo/yum