On Wed, 2003-09-24 at 12:47, C.Lee Taylor wrote:
> >> This has been suggested before, and of course you can do it by hand,
> >> but bandwidth is bandwidth and once the headers have been
> >> created/compressed all you're really saving is the per-transfer
> >> overhead, not the bandwidth per se.
>
> > with keep-alive and http you're not gaining anything, really.
>
> Sorry, now I am confused. Do you mean that even with a copy of my yum
> cache I will still not gain any bandwidth saving?

Rob was saying compression + protocol overhead. I was saying that
compression + using a protocol that does keepalive means that the
overhead is mostly trivial. So making a tarball of the hdrs won't help -
in your case it is all about the size of the files.

> Again, I believe I understand this, just from watching what yum
> check-update and friends do. My first thought is about the first
> update: a lot of bandwidth can be used just bringing an installation up
> to date, which maybe belongs in the HOWTO (for which I have not seen a
> URL on the mailing list [I have not looked on the homepage just yet]).
>
> I think what I am getting at is that the update could download headers
> only for installed packages, unless the update needs more packages than
> are already installed ... (don't flame me, it's just an idea, and once
> the first update is done it makes little if any difference, since all
> the headers are already on the computer; I mean, maybe two or three
> packages a week are really updated, which would mean almost no gain)
> ...

The problem is predicting which of those headers will be needed. And the
bigger problem is that the headers stuff is eventually going to go away.
We'll need to get some headers, but only for the ones we're actually
going to use - not all the rest.

The headers mechanism was a good way to get off the ground quickly; now
more work needs to be done to dump them. I'm doing that work as part of
something else, but it takes time and I'm not the fastest programmer in
the world.

-sv
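
[To make the keepalive point above concrete, here is a minimal sketch.
The mirror hostname and header paths are made up, and it uses the
current Python stdlib purely for illustration: with one persistent HTTP
connection, each extra .hdr fetch only adds its own small
request/response headers, so whether the headers arrive as many files
or one tarball, the bytes on the wire are dominated by the file
contents.]

# Sketch only: mirror.example.org and the header paths are hypothetical.
import http.client

HOST = "mirror.example.org"
HDR_PATHS = [
    "/yum/headers/foo-1.0-1.i386.hdr",
    "/yum/headers/bar-2.3-4.i386.hdr",
    "/yum/headers/baz-0.9-2.noarch.hdr",
]

# One persistent (keep-alive) connection; the connection setup cost is
# paid once, and each additional fetch adds only per-request overhead.
conn = http.client.HTTPConnection(HOST)
total = 0
for path in HDR_PATHS:
    conn.request("GET", path)
    resp = conn.getresponse()
    body = resp.read()        # read fully so the connection can be reused
    total += len(body)
    print(resp.status, path, len(body), "bytes")
conn.close()
print("fetched", total, "bytes over a single connection")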
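
[And for what it's worth, the "headers only for installed packages" idea
could be sketched roughly as below. The cache directory and the
filename-to-package-name rule are assumptions, not how yum actually
works, and as the mail says the hard part is predicting the extra
headers that new dependencies would need.]

# Sketch only: the header cache path and .hdr naming convention are assumed.
import os
import subprocess

HDR_DIR = "/var/cache/yum/base/headers"   # hypothetical local header cache

# Names of everything currently installed, straight from the rpm database.
out = subprocess.run(
    ["rpm", "-qa", "--qf", "%{NAME}\n"],
    capture_output=True, text=True, check=True,
)
installed = set(out.stdout.split())

# Keep only the cached headers whose package name matches something installed.
all_hdrs = [f for f in os.listdir(HDR_DIR) if f.endswith(".hdr")]
wanted = []
for fname in all_hdrs:
    # crude: drop "-version-release.arch.hdr" by splitting off the last two dashes
    name = fname.rsplit("-", 2)[0]
    if name in installed:
        wanted.append(fname)

print(len(wanted), "of", len(all_hdrs), "cached headers match installed packages")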