On 9/7/06, Richard Hally <rhally@xxxxxxxxxxxxxx> wrote:
Have you seen the --depcheck plugin that has recently been added to yum-utils in Extras? That certainly is an improvement over the shell script approach.
It's most definitely not an improvement over the shell script I use. My shell script has a dancing-bear interactive spinner in ASCII artwork!
Perhaps a plugin that separates packages to be updated into separate "yum transactions" would permit the user to make the choice between speed and reliability/robustness/recoverability.
I'm still not convinced that the multiple-transactions idea gains you much in the way of recoverability, considering the minefield that is scriptlets and triggers. It would certainly help minimize the number of duplicate packages listed in the rpmdb when segs happen before the cleanup stage completes. I have to admit that's one of my big pet peeves with large transactions in rpm, whether the transaction is brokered by yum or rpm -F or whatever.

When I do a fresh install, I tend to do the backlog of updates in small groups (20 packages or less) to avoid the possibility of that happening. Once the backlog is installed, I find that recovering from a segged daily or weekly update run is manageable, because the number of update packages involved on those timescales tends to be small. What sucks is doing a fresh install of fc5 right now, applying all the available updates, and having that update process seg, leaving duplicate listings in the rpmdb for a significant fraction of the packages in that single transaction. It's tedious to clean out those duplicates by hand, far more tedious than just doing the updates in small groups to begin with. And sadly it's one of those things that is freaking difficult to reproduce.

Compartmentalizing package transactions might also help people like me narrow down the problematic package or scriptlet operations that trigger segs on other people's systems, which again are not as reproducible across systems as one would like. I'd be most interested in seeing how an optimal calculation of multiple transactions affects system resource consumption on lower-end systems. I have no expectation as to whether or not it would make a difference at all.
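For reference, the small-groups workaround is easy to script. This is only a rough sketch: the chunk size, the fake package list, and the idea of parsing `yum -q check-update` output are my assumptions, and the echo stands in for a real `yum -y update` call so the sketch is runnable without a yum installation.

```shell
#!/bin/sh
# Sketch of the "small groups" approach: split the pending-update
# list into chunks of at most $CHUNK packages and run one yum
# transaction per chunk, so a seg mid-run leaves duplicate rpmdb
# entries for at most one small group instead of the whole backlog.
CHUNK=3  # illustrative; the text above suggests 20 or fewer

chunked_update() {
    # $@ is the full list of packages with pending updates
    while [ $# -gt 0 ]; do
        group=""
        i=0
        while [ $# -gt 0 ] && [ "$i" -lt "$CHUNK" ]; do
            group="$group $1"
            shift
            i=$((i + 1))
        done
        # One small transaction per group; replace echo with the
        # real command to actually run it.
        echo "yum -y update$group"
    done
}

# In real use the package list would come from something like:
#   yum -q check-update | awk '{print $1}'
# Here we use a fake list so the sketch runs anywhere.
chunked_update foo bar baz qux quux
```

With the fake five-package list and CHUNK=3, this prints two separate yum invocations, one per group.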
Just as another point of reference, I believe (and I may be mistaken about this) the now-aging up2date codebase made multiple transactions in some cases: if up2date saw an update to up2date itself, it would first apply that update and then run a second transaction for the other updates.

-jef

--
fedora-devel-list mailing list
fedora-devel-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/fedora-devel-list