On Wed, 2003-07-30 at 08:09, Robert G. Brown wrote:
> which doesn't mirror the WHOLE repository, but only the distro_9 part.
> IIRC it will do something moderately horrible with the path on your
> mirror site -- you might get ./pub/linux/distro_9. It doesn't really
> behave like a cp.

I didn't mind the path stuff, but it did seem to go up the tree. Also,
wget would never remove old files. I guess until there is a definitive
standard for getting the "contents of an HTTP directory" (say, an
index.xml or something to do the enumeration), it's always going to be
an HTML parse-fest. On top of that, wget has to do an HTTP HEAD request
to get the time stamp individually for each file it already has. In the
end, I just hacked together something in Python -- see my previous
email (and the sketch at the end of this message).

> rsync is actually by far the preferred tool. It is designed to do
> precisely what you need (synchronize two images, perfectly), efficiently
> (copying compressed images of just what has changed), and safely (where
> you can select whether or not to delete files that are no longer in the
> images being sync'd. The issue of whether or not they support it on
[snip]

Yeah, rsync is nice, but it does require:

 - another daemon on the back end
 - more holes to punch through the firewall(s)
 - more client tools

There is no reason a general tool couldn't simply augment HTTP (or HTTP
directories) somehow -- it seems like a simple software problem.
However, from a programmer's perspective I do tend to disagree with the
"everything over HTTP" mentality that has taken over with all that
SOAP/XML-RPC stuff.

I'm less concerned about security, though, as I use gpgcheck=1 on all
my yum clients. That way I can be sure the files themselves have not
been tampered with; secure point-to-point communication guarantees only
that one line of communication, not the integrity of what was served.

--
// Aleksander.Demko@xxxxxxxxxxxxxx ademko@xxxxxx scopira.org //
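
P.S. For the curious, here is a minimal sketch (in modern Python, with a
placeholder mirror URL and local directory) of the scrape-the-index plus
HEAD-for-timestamps approach described above. It is not the actual
script from my previous email, just an illustration of the idea:
enumerate the files by parsing the HTML index, HEAD each one for its
Last-Modified stamp, and delete local files that vanished upstream.

# Hypothetical sketch: mirror the files listed in an HTTP index page.
# BASE_URL and LOCAL_DIR are placeholders, not real endpoints.
import os
import email.utils
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

BASE_URL = "http://mirror.example.com/pub/linux/distro_9/"  # placeholder
LOCAL_DIR = "mirror"

class LinkCollector(HTMLParser):
    """Scrape href attributes out of the index page -- the 'parse-fest'."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                # keep plain file names; skip parent dirs, subdirs, sort links
                if (name == "href" and value
                        and "/" not in value and not value.startswith("?")):
                    self.links.append(value)

def remote_mtime(url):
    """Issue an HTTP HEAD and parse the Last-Modified header, if any."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        stamp = resp.headers.get("Last-Modified")
    if stamp is None:
        return None
    return email.utils.parsedate_to_datetime(stamp).timestamp()

def mirror():
    os.makedirs(LOCAL_DIR, exist_ok=True)

    # Enumerate the "directory" by parsing its HTML index page.
    parser = LinkCollector()
    with urllib.request.urlopen(BASE_URL) as resp:
        parser.feed(resp.read().decode("utf-8", errors="replace"))
    wanted = set(parser.links)

    # Fetch new or updated files, one HEAD per file we already have.
    for name in wanted:
        url = urljoin(BASE_URL, name)
        path = os.path.join(LOCAL_DIR, name)
        mtime = remote_mtime(url)
        if os.path.exists(path) and mtime and os.path.getmtime(path) >= mtime:
            continue                      # local copy is current
        urllib.request.urlretrieve(url, path)
        if mtime:
            os.utime(path, (mtime, mtime))

    # Remove files no longer listed upstream -- the step wget never does.
    for name in os.listdir(LOCAL_DIR):
        path = os.path.join(LOCAL_DIR, name)
        if name not in wanted and os.path.isfile(path):
            os.remove(path)

if __name__ == "__main__":
    mirror()

Note the trade-off this makes explicit: one HEAD round trip per existing
file, versus rsync getting the whole delta in a single exchange.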