[Yum] Yum "article"

On Wed, 19 Nov 2003, Cliff Kent wrote:

>  >> I made the observation in the article because I am aware of at least 
> two people who tried the file:// form by accident ... <<
> 
> You can make that at least three who tried. I did it too.<g>

Did I mention that I was one of them?  No?  Gee, funny how I left that
part out...;-)

Some people have enough experience to persevere and figure it out when it
doesn't work; others (who haven't yet learned that when something doesn't
work, 99% of the time it is your fault and not a bug) might be tempted to
conclude something like "Oh, yum doesn't support filesystem-based
repositories" and quit.  Hence the warning:-)

> Then, I'll be switching to an http:// format pointed at my private web 
> server. I tried a network install of FC1 the other day and will not be 
> going back to CDs. The same file set then makes a [base] for yum.

At home I'm not really sure which way to go.  My repository is on my
home server (with 10 clients I don't really need more than one:-).  It
does NFS and has http up, though to not much purpose except supporting
yum and letting me test stuff for articles like this.
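
Just to make the alternatives concrete, here is roughly what the
repository stanza in /etc/yum.conf looks like for each (server name and
paths invented -- adjust to your own layout, and note the three slashes
in the file:// form, since the "host" part of the URL is empty):

    [base]
    name=Fedora Core 1 base (local mirror)
    # served over http by the web server:
    baseurl=http://myserver.example.com/fedora/1/i386/os/
    # ...or read straight off a local (or NFS-mounted) filesystem:
    #baseurl=file:///mnt/repo/fedora/1/i386/os/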

NFS might actually be more efficient.  (I suspect that it is -- it
depends on how things like the transport protocol are intertwined with
the actual file access on the server side.  The server has to stat the
files and so forth no matter what; in the case of NFS there is perhaps
one less level of path resolution and perhaps a more efficient transport
layer.)  If I >>did<< NFS export the repository and simultaneously NFS
exported /var/cache/yum from the server, I could significantly reduce
the need to ever actually transport files.
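
The plumbing for that would be nothing fancy -- something like the
following (an untested sketch; subnet, hostname and paths invented, and
see the locking caveat below):

    # /etc/exports on the server -- repository read-only, cache writable:
    /export/yum-repo    192.168.1.0/255.255.255.0(ro,sync)
    /var/cache/yum      192.168.1.0/255.255.255.0(rw,sync,no_root_squash)

    # on each client (or the equivalent /etc/fstab entries):
    mount myserver:/export/yum-repo /mnt/yum-repo
    mount myserver:/var/cache/yum   /var/cache/yum

    # with the client's /etc/yum.conf then pointing at
    #   baseurl=file:///mnt/yum-repo/fedora/1/i386/os/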

In fact, depending on how yum checks for a file being current, I MIGHT
be able to symlink the RPM directory in the cache back to the RPMs in
the repository, so that they are NEVER duplicated and ALWAYS
repository-current.  As I think somebody (Russ?) observed, this MAY
carry a bit of risk with file locking and two hosts trying to grab the
same RPM at the same time.  However, if yum strictly checks the headers
to decide what to do (as I think that it does), it would still notice
that the revision number had bumped, seek to download the file, but
observe before it actually did so that it was already in its cache
(symlinked back to the repository, mind you) and then update with no
actual transfer taking place.  No transfer also = less trouble with file
locking -- the tiny header files leave a much smaller window for access
overlap.  Of course, if one ever ran yum clean, one would trash the
repository unless it were largely RO... hmmm
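
For the record, the whole experiment would amount to little more than
the following (strictly an untested sketch -- the cache layout is
assumed, so check where yum actually parks its downloaded RPMs for your
[server] sections before believing any of it, and keep the repository
export read-only):

    # on the server, inside the package cache for the [base] section:
    cd /var/cache/yum/base/packages
    # point the cache entries straight back at the repository RPMs:
    ln -s /export/yum-repo/fedora/1/i386/os/Fedora/RPMS/*.rpm .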

Probably one of those "kids, don't try this at home" things, at least
until some kid tries it at home and works out all the pain for what may
be pretty much ignorable gain...;-)

> Meanwhile, your explanation is excellent. Too much detail would just 
> confuse most people.

Thanks.  I have a problem with putting in too much detail.  I'm glad
that you don't think that I did.

> This is getting too easy.<g>

That's the hard thing to convey -- yum makes huge blocks of what used to
be true PITA sysadmin chores absolutely trivial.  So trivial that in
lots of cases a system owner does "nothing" and the chores just happen
(leveraging SOME little effort put in by Seth and Michael and a bit more
effort put in by their local administrator).

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@xxxxxxxxxxxx



