On Thu, 27 Oct 2005, Robert P. J. Day wrote:

> p.s. just thinking out loud, but would it be unreasonable for yum to
> print a reference to that workaround under these circumstances?
> clearly, what i was doing was a bit extreme -- trying to update a
> fresh FC4 install months after its official release, which represented
> an update of 500+ packages, but it's not hard to imagine the
> occasional person doing this and failing the same way and not knowing
> what to do at that point.
>
> just a thought.

Or better yet, give yum a "disk miser" mode that groups a large install into dependency-ordered sub-installations. Instead of downloading all 500 packages at once, it:

a) runs the disk check on the largest free-space requirement among the independent ordered groups and barfs on THAT, since this is the sine qua non -- if you don't have at least this much space you CANNOT safely update, even by hand, without first freeing up some space;

b) then downloads the groups in order, installs them, and cleans them, so that the largest footprint ever occupied is the one reported in a).

This has the additional benefit of being a bit safer with regard to interruption mid-task, and of being restartable; as the number of packages being updated at one time increases, so does the possibility that an interruption will leave the system in an odd state with multiple sets of dependent packages half-installed. This in turn increases (or so it seems to me) the probability of experiencing trans-novice problems.

There are times with my old laptop (which had a marginally large enough root partition for everything I wanted to throw on it as the distros became richer) when I would have killed for a disk miser mode. Without it, I had to do all that juggling by hand, literally uninstalling certain packages, updating other packages, and reinstalling.
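The disk-miser loop above can be sketched in a few lines of Python. This is only an illustration of the idea, not yum internals: dependency_groups, high_water_mark, miser_update and the callbacks are all hypothetical names, and the dependency graph is a toy stand-in for rpm's real one.

```python
# Hypothetical sketch of a "disk miser" update: batch packages by
# dependency order, do ONE up-front disk check on the largest batch,
# then download/install/clean one batch at a time.
from graphlib import TopologicalSorter  # stdlib topological sort (3.9+)

def dependency_groups(deps):
    """Split packages into batches; each batch depends only on
    packages installed in earlier batches. `deps` maps a package
    to the set of packages it requires."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    groups = []
    while ts.is_active():
        ready = list(ts.get_ready())  # everything installable right now
        groups.append(ready)
        for pkg in ready:
            ts.done(pkg)
    return groups

def high_water_mark(groups, size_of):
    # The sine qua non: the largest footprint any single batch needs.
    return max(sum(size_of[p] for p in g) for g in groups)

def miser_update(groups, size_of, free_space, download, install, clean):
    need = high_water_mark(groups, size_of)
    if free_space() < need:
        # barf on THAT -- without this much space no safe update exists
        raise RuntimeError(f"need at least {need} bytes free; aborting")
    for group in groups:
        download(group)   # fetch only this batch
        install(group)    # small transaction, easy to restart after
        clean(group)      # delete the rpms before fetching the next batch
```

A batch is never bigger than the reported high-water mark, so an interruption leaves at most one small transaction half done instead of 500 packages in limbo.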
OpenOffice, for example, has a huge update footprint -- on the order of 100 MB for the compressed rpms, 2-3 times that for the unpacked files, and for a while there the OLD unpacked files are also resident. That's somewhere in the ballpark of 300-500 MB of high-water disk usage unless you do things very cleverly, e.g. first erase the old version completely and only then install the new, perhaps after also erasing three other toplevel (and hence relatively easily removed/reinstalled) rpms first.

Yes, this would be a lot of programming work. Yes, it would run slow as molasses and might even be dangerous at first, until it is pretty thoroughly debugged -- but it would still be INFINITELY faster, smarter, and safer than humans are likely to be, since a computer can actually work through the dependency trees systematically and (eventually) without error.

Otherwise I would tend to be uncomfortable ignoring rpm's warnings about space requirements, because running out of space mid-install on an override could leave your system anywhere from mildly messed up to totally borked. Running out of ANY memory resource on a computer running software that assumes the resource will never be completely exhausted is just a bad idea.

   rgb

> _______________________________________________
> Yum mailing list
> Yum@xxxxxxxxxxxxxxxxxxxx
> https://lists.dulug.duke.edu/mailman/listinfo/yum

-- 
Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@xxxxxxxxxxxx