On Thu, Mar 8, 2018 at 6:48 AM, Panu Matilainen <pmatilai@xxxxxxxxxx> wrote:
> On 03/08/2018 01:39 PM, Neal Gompa wrote:
>> On Thu, Mar 8, 2018 at 2:53 AM, Panu Matilainen <pmatilai@xxxxxxxxxx> wrote:
>>> On 03/07/2018 03:10 PM, Neal Gompa wrote:
>>>> On Wed, Mar 7, 2018 at 5:52 AM, Igor Gnatenko
>>>> <ignatenkobrain@xxxxxxxxxxxxxxxxx> wrote:
>>>>> And you forgot:
>>>>> 5. Teach DNF to use the "target" DNF/RPM stack to perform the upgrade
>>>>> (best and proper way).
>>>>
>>>> This has been requested for a long time:
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1032541
>>>>
>>>> It'd be *really* good if DNF implemented it.
>>>
>>> Bottom line: either dnf (or something else) grows a dist-upgrade method
>>> that runs the transaction on the "target stack", OR Fedora is *forced*
>>> to hold back on new rpm package features until all the active versions
>>> have a rpm/dnf stack that can handle them. Period. No ifs or buts.
>>>
>>> P.S. No, updating rpm + dnf first in a separate transaction is not a
>>> solution at all.
>>
>> You're right that upgrading rpm + dnf + libsolv first won't fix it
>> 100% of the time (mainly with features that can't be guarded by
>> rpmlib() dependencies, such as the new header size limit), but I think
>> it would deal with more than 80% of the cases where we have a problem,
>> so that distribution upgrades through this mechanism keep working as we
>> introduce new features in the distribution.
>>
>> But I would disagree that updating rpm + dnf first is not a solution at
>> all. It's not a *perfect* solution, but it would help a hell of a lot.
>
> It's not a solution because doing so usually drags half the distro along
> due to library dependencies etc.

I suppose part of this is because of the way we package libraries: new
library packages _must_ replace older ones as sonames change. In Mageia
and openSUSE it doesn't work like that at all, because library packages
are named after their sonames, so different soname versions can coexist.
In that model, you'd pull in the new library dependencies for the RPM
stack, the tool would re-execute with that target stack, and then it
would upgrade the remainder of the system (a rough sketch of that flow
is further down).

>> Hell, even preupgrade and older mechanisms more or less worked by
>> getting the target rpm and package manager code installed first and
>> then doing the real thing using that code.
>
> No, preupgrade & friends basically created a special boot target that
> runs the whole thing with the new version, in an isolated environment.
> Which equals "using the target rpm stack", in fact.

In my mind, that's functionally equivalent. Yes, it's not actually
running on the target system yet, but it operates under the newer rpm
environment.

>> And supporting transaction ordering such that transactions can be
>> broken up into smaller ones as needed, based on various conditions,
>> would make upgrades more reliable in general, in my opinion.
>
> That's quite a different thing, and creates its own quirks and issues.
> And it doesn't help things at all when a simple "dnf update rpm dnf"
> drags along, for example, a new glibc or python version which snowballs
> into 70% of the distro getting pulled into the "just update rpm"
> transaction.

A new glibc version by itself should not snowball into 70% of the
distribution; if it does, something is broken in how the dependencies
are versioned. As for pulling in a new Python, you're right that it's an
issue, but it's an issue no matter what: even when everything is in one
transaction, it's all broken until the whole upgrade finishes anyway.
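To be concrete about the "pull in the target stack, then re-execute"
flow I mentioned above, here's a hand-waved Python sketch. None of this
is real dnf code: the package list, the --continue-upgrade flag, and the
hardcoded target release are all made up for illustration, and it drives
the dnf CLI from subprocess rather than any real API, just to show the
shape of the flow.

import os
import subprocess
import sys

# Hypothetical "target stack" subset; the exact package list is a guess.
STACK = ["rpm", "dnf", "libsolv", "libdnf"]
TARGET_RELEASE = "28"  # placeholder target release

def main():
    if "--continue-upgrade" not in sys.argv:
        # Step 1: upgrade only the packaging stack (plus whatever its
        # dependencies genuinely drag in) as its own transaction.
        subprocess.run(["dnf", "-y", "upgrade"] + STACK, check=True)

        # Step 2: re-execute ourselves, so everything past this point
        # runs on the freshly upgraded stack (this is the step dnf
        # itself would have to do internally).
        os.execv(sys.executable,
                 [sys.executable] + sys.argv + ["--continue-upgrade"])

    # Step 3: now on the new stack, upgrade the remainder of the system.
    subprocess.run(["dnf", "-y", "--releasever", TARGET_RELEASE,
                    "distro-sync"], check=True)

if __name__ == "__main__":
    main()

The point is just the ordering: one small transaction for the stack, a
re-exec, and then the rest of the upgrade running under the new rpm/dnf.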
And if we're talking about this being done via the system-upgrade or
dist-upgrade commands, then we can reasonably expect to upgrade a subset
first, restart DNF with the new stack (as in the sketch above), and hand
it the remainder of the transaction to complete.

That said, the Python situation makes me think we should instead have
pythonX.Y-foo packages that carry Obsoletes/Provides for pythonX-foo, so
that things wouldn't actually break like this. But that's another
discussion...

Also, if the Python bit is what you're worried about: PackageKit offline
upgrades wouldn't be affected by it and could happily do this "properly".
Another way to solve that problem would be a minimal C/C++ implementation
that DNF hands off to for system upgrades, avoiding the Python issue
entirely. Personally, I don't think that's necessary, all things
considered, but it's an option.

But I think we *really* should have the transaction-splitting feature,
not just for system upgrades but for dealing with the litany of
conditions that can make "big" transactions difficult (a toy sketch of
what I mean is in the P.S. below).

--
真実はいつも一つ!/ Always, there's only one truth!
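P.S. Here's that toy transaction-splitting sketch. It assumes we already
have a dependency-ordered package list (which in reality dnf/libdnf
would have to compute) and only shows the chunking; the batch size and
the dnf invocation are placeholders, not a real design.

import subprocess

def upgrade_in_batches(ordered_packages, batch_size=200):
    """Run one big upgrade as several smaller rpm transactions, e.g. to
    cope with limited disk space for a single huge transaction."""
    for i in range(0, len(ordered_packages), batch_size):
        batch = ordered_packages[i:i + batch_size]
        # Each batch is its own transaction; dnf still resolves
        # dependencies per batch, so this only illustrates the idea.
        subprocess.run(["dnf", "-y", "upgrade"] + batch, check=True)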