Jesse Keating wrote:
> Nor is testing / stability atomic / equal across the branches. While
> the f13 package may work fine, the f12 build may have severe problems.

That is something that happens maybe 1 in 1000 times, and it would happen even less often (maybe 1 in 10000 times) if some strategic packages (such as SQLite) proactively tracked upstream point releases in updates (which is another thing I've been arguing for all this time). IMHO this risk is negligible compared to the risk of issues missing testing, which cannot be eliminated no matter how much of a PITA you make the testing requirements. So it makes no sense to worry about the negligible risk.

(It's also quite funny how the people who argue that this risk is real are the same ones happily using a hash-based SCM which has a non-zero risk of corrupting your repositories or data due to a hash collision…)

Testing will NEVER be infallible; whether the risk of failure is, say, 1% or 1.01% makes no practical difference. If you want to show that the risk is not negligible, I challenge you to come up with actual statistical data proving that 1% or more of our updates are affected by branch-specific issues. I believe we're quite far from that, and the rough data I see daily as both a maintainer and a user of Fedora confirms it: I don't see anywhere near such a high failure rate. Whereas the actual failure rate of testing, even with the new PITA procedures, is probably much higher than 1%: I see issues missing testing all the time.

> They need to be treated individually.

You've been there when Fedora Legacy failed. (In fact, it was YOUR project.) This (the excess testing requirements, in particular the requirement to test every single release separately, even when the changes are identical, as a 1-minute proofreading can prove) was why it failed. Why are we now repeating this mistake?

        Kevin Kofler

-- 
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel