If the goal is to save bandwidth, installing a local caching proxy like squid is a good idea; just make sure you set the maximum object size to something large enough to cover the biggest RPM in the repository. In day-to-day operation, machines typically do two things: check for the latest headers and download updates. Both of those operations will be served from the local proxy's cache. The downside of rsync, by comparison, is that you need to run it periodically, and it requires quite a lot of storage space as well.
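Something like the following in squid.conf would cover it; the sizes and cache path here are illustrative assumptions, so tune them for your repository:

    # Raise the cache ceiling; Squid's default maximum_object_size is
    # only a few MB, which would exclude large RPMs from the cache.
    maximum_object_size 102400 KB

    # On-disk cache: format is "ufs <directory> <MB> <L1 dirs> <L2 dirs>".
    # 10 GB is a guess; size it to hold the packages your machines pull.
    cache_dir ufs /var/spool/squid 10240 16 256

Clients then just need to point at the proxy before running yum, e.g. "export http_proxy=http://proxybox:3128/" (proxybox being a placeholder for whatever host runs the cache).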
I've done something of a compromise in my situation. I've set up one server as a "cache" for yum. I then have my various local servers' /var/cache/yum directories NFS-mounted to the appropriate yum cache for their version. The first server will usually fetch most of the packages/headers/etc., and the subsequent machines that night will pull the packages from their "local" cache instead of downloading them again.
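For reference, a hypothetical /etc/fstab line on one of the client machines might look like this; "cachehost" and the export path are made-up names for illustration:

    # Mount the shared yum cache for this release from the cache server.
    # Clients of the same version share one directory, so the first
    # machine to update populates the cache for the rest.
    cachehost:/export/yum-cache/fc1  /var/cache/yum  nfs  rw,hard,intr  0 0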
For machines local to this NFS server, this works quite well. Performance is a bit slower than it would be if the cache were on local disk, and doing this over our wide-area network is prohibitively slow. It also cannot be done easily for machines on the "wrong" side of the firewall.
In some ways this is more efficient than rsyncing the entire tree, since most of my "servers" don't have "all" packages installed. The downside is that the header.info files are retrieved each time, so it may or may not balance out in the end.
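For comparison, if you do go the full-mirror route, the usual approach is a nightly cron entry along these lines; the mirror URL and local path are illustrative only:

    # Mirror the repository tree at 3am; --delete keeps the local copy
    # in sync when packages are removed upstream.
    0 3 * * * rsync -av --delete rsync://mirror.example.com/fedora-legacy/ /var/www/html/yum/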
-Rick

--
Rick Johnson, RHCE #807302311706007 - rjohnson@xxxxxxxxxx
Linux/Network Administrator - Medata, Inc. (from home)
PGP Public Key: https://mail.medata.com/pgp/rjohnson.asc