Harnish, Joseph writes:

> Maybe I should rephrase my thoughts.
>
> Would it be a good idea to add support for distributed yum repository
> cache awareness to yum?  It could be similar to a function of Y.O.U. (Yast
> Online Updater) where it can be configured as a local mirror.  Or with
> mDNSResponder a tool like yum could find local copies of the packages
> before attempting to go to the sites out on the internet.
>
> This is probably irrelevant to most users, but when I run yum and wait for
> packages to come down and then I move on to another machine and wait again,
> I feel like there could be a better way to do it.

There are a lot of reasons to be cautious about such an approach.

First, note that right NOW one can have multiple repositories, fallback
repositories, local mirror repositories, remote repositories --
repository freedom is really quite broad.  There are also plenty of
Clever Tricks you can play -- for a single example, NFS export your yum
package directories (with root enabled) from a primary update host and
you'll never have to download twice in e.g. a home LAN.  For a second,
it is very simple to use rsync or various tools that have been
discussed on list to create a mirror of a primary archive at home, so
that you download only once (creating the mirror or updating its
contents) and then distribute to LAN hosts over a fast LAN connection.
This is what I do -- although I used to play the NFS trick and it works
as well -- as it makes it easy to do LAN kickstart installs, and to
install/update when the DSL link is loaded and slow or down.  For the
most part, if e.g. DSL bandwidth is your bottleneck, disk is cheap
enough and linux flexible enough that you can easily find ways to
minimize your consumption of it that are BETTER (and far more secure)
than bittorrent.  (Rough sketches of both tricks follow below.)

Second, the way things now stand, you have SOME degree of control over
e.g. the gpg signatures and reliability of the repositories you accept.
At some point, barring the use of SSL or some other method of
hard/reliable delivery of keys, you will end up trusting nameservice
and the integrity of the routes between your host and the providing
repository, at least to get the keys themselves.  Many people will have
difficulty getting and installing the keys (which isn't terribly well
documented and requires a degree of unix expertise that makes it a list
FAQ), and they will often resort to using gpgcheck=0.  This isn't
ideal, but truthfully isn't MUCH worse than using the web to grab the
keys and then using gpgcheck=1 -- it just gives crackers MORE (very
small) chances to become the man-in-the-middle, which actually isn't
all that easy, as the network backbone/repository sysadmins tend to be
both competent and vigilant.  ISP sysadmins for small commercial ISPs
are perhaps a more mixed lot, but still, USUALLY if they're compromised
you're dead anyway, sooner or later.

However, using something like bittorrent (or ANY distribution scheme
that actually REQUIRES sites with completely uncontrolled expectations
of competence, honesty, and security to become the man-in-the-middle),
you would HAVE to use gpgcheck or you'd simply be begging to be
cracked.  It makes me nervous whenever my kids do a World of Warcraft
update, because it does something like this to "distribute the server
load".  One can only pray that they use something like key checking to
ensure that any executable files that are retrieved are indeed part of
the update and distributed from the originating site.
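For concreteness, the NFS trick looks something like the following.
The paths, the "updatehost" name and the 192.168.1.0/24 subnet are just
examples -- adjust to your own LAN.  The idea is simply that every box
shares one package cache, so a given rpm only ever crosses the DSL link
once:

    # /etc/yum.conf on every machine: keep downloaded rpms in the cache
    [main]
    keepcache=1

    # /etc/exports on the primary update host ("root enabled" means
    # no_root_squash, so root on the clients can write to the cache)
    /var/cache/yum   192.168.1.0/24(rw,no_root_squash,sync)

    # re-export after editing /etc/exports
    exportfs -ra

    # /etc/fstab on each client: mount the shared cache in place
    updatehost:/var/cache/yum   /var/cache/yum   nfs   defaults   0 0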
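The rsync mirror is about as simple.  The mirror URL, local path and
release/arch below are placeholders -- pick an rsync-capable mirror
near you and the tree you actually track -- but the shape of it is:

    # pull (and later refresh) the repository tree, metadata and all
    rsync -avHP --delete \
        rsync://mirror.example.org/fedora/updates/3/i386/ \
        /var/www/html/mirror/updates/3/i386/

    # then point the LAN clients at it, e.g. with
    # /etc/yum.repos.d/local-updates.repo:
    [local-updates]
    name=Local mirror of updates
    baseurl=http://updatehost/mirror/updates/3/$basearch/
    enabled=1
    gpgcheck=1

Serve the tree over http, ftp, nfs or plain file: URLs -- whatever is
handy -- and the same tree doubles as a kickstart/install source.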
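Since key handling is the part that generates most of the FAQ traffic,
here is roughly what it takes to run with gpgcheck=1.  The key
locations below are examples only -- they move around between releases,
and ideally you verify the key's fingerprint through some independent
channel before trusting it:

    # import the vendor key once per machine; rpm --import takes a
    # local file or a URL
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-fedora
    # or, from a web location you trust:
    rpm --import http://updatehost/mirror/RPM-GPG-KEY-fedora

    # see which keys rpm now trusts
    rpm -qa gpg-pubkey --qf '%{name}-%{version}-%{release} %{summary}\n'

With the key imported, leave gpgcheck=1 in the repo stanzas (or in
[main] of /etc/yum.conf); a gpgkey= line in the .repo file tells yum
where to offer to fetch the key from if it isn't already imported.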
So I'd suggest looking into simple NFS or local repository mirror (disk
is cheap) solutions to minimize your bottlenecking without resorting to
distributed/uncontrolled networks of yum repositories.  One COULD make
the latter secure, of course, but it would be a serious bit of work.

   rgb