On Sat, Nov 20, 2010 at 11:35 PM, Kevin Fenzi <kevin@xxxxxxxxx> wrote:
> ok, I dug through the devel list for the last month or two and wrote
> down all the various ideas folks have come up with to change/improve
> things.
>
> Here (in no particular order) are the ideas and some notes from me on
> how we could enable them. Please feel free to add new (actual/concrete
> ideas or notes):
>
> * Just drop all the requirements/go back to before we had any updates
>   criteria.

Before we go that route, maybe we should understand why we decided on the updates criteria in the first place. If I remember correctly, it was because some updates introduced regressions in stable Fedora releases, upsetting users. Do we really want to go back?

> * Change FN-1 to just security and major bugfix
>
> This may be hard to enforce or figure out if something is a major bugfix.

My vote would be security-only, because people who want the latest upstream code apparently upgrade to the latest Fedora release as soon as it's baked (and sometimes earlier), so there is no need to upgrade the packages in n-1. The main reason a user doesn't upgrade their box seems rooted in the wish to avoid unnecessary downtime or extra work - after all, they can skip a release since n-1 is supported. These users want *stability*, not anything disruptive, and it's OK if the software they use is a bit older than what is available in the latest Fedora. The latest software is only an upgrade away, after all.

Any major non-security bugfix needed in n-1 wasn't detected for six months or so. How "major" is that? Or maybe it was introduced by an update and not caught early enough because we couldn't test the update.

> * allow packages with a %check section to go direct to stable
>
> Bodhi would have to have a list or some way to note these packages,
> it would also need to change as they were added/removed. Perhaps it could
> just be an AutoQA +1 for having a check section?
> On the con side, some checks may be simple and may not note things
> that are fedora only issues.

More test automation would be an excellent way to validate updates. It's not applicable everywhere, but it should help a lot.

> * require testing only for packages where people have signed up to be testers
>
> Packages without 'official' testers could bypass testing or have some lower karma
> requirement. We would need for this a list of packages that have had people sign
> up to test.

I'm not too keen on that one, although I don't see a way out. I don't have a big enough network at home to extensively test Nagios or 389-DS, for instance. Sanity tests of these are possible, but with my current resources I won't easily catch performance regressions or instability under load.

> * Ask maintainers to provide test cases / test cases in wiki for each package?
>
> Test cases are not easy to make, many maintainers won't or can't do so, but
> it would be lovely to have even a base checklist of things that should work
> in the package every time.

Especially since test cases could form the basis of automated tests later on. And I'm pretty sure any time spent on test automation or writing test cases would pay off down the road, as we'd have more confidence in subsequent updates.

> * have a way to get interested testers notified on bodhi updates for packages
>   they care about.
>
> We would need to add some kind of tester list to pkgdb, and bodhi would need to be
> able to get this to mail them when an update changed state.
> We may not get many people signing up for some packages, but this might be a
> good way to know what packages we have testers for and get them more involved
> in testing.

Ideally it could mail them on update submission at least.
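To make the %check idea above concrete, here is a minimal sketch of what a %check section can look like in a spec file. The package name and test command are hypothetical - a real package would invoke whatever test suite upstream ships:

```
# Hypothetical spec fragment for a package "foo".
# %check runs after %build; if any command here exits non-zero,
# the build fails and the package never becomes an update at all.
%check
make test
```

This is exactly why it could justify an AutoQA +1: the tests already gated the build. The con noted above still applies, though - a trivial `make test` proves very little, and upstream suites rarely cover Fedora-specific patches.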
>
> * reduced karma requirement on other releases when one has gone stable
>
> Bodhi would need to note when an update went to stable if the exact same
> version (with dist tag differences) was in testing for other releases.
> It could then allow less karma to go stable, or add +1 from the other
> update going stable.

This sounds good on paper, but we have to remember it cannot eliminate testing altogether, because packages use shared libraries that might not be of the same version across releases. Still, given that we're short-handed, it's a good solution.

> Other concrete ideas?

These are all technical ideas. We need to know what experience we want to provide users with, in both Fedora n and n-1, before deciding which technical idea(s) to implement. My own vision is that:

* updates should not be less tested than what's in an actual release - and the QA/RE teams do an amazing amount of testing prior to releases. I would love to see more maintainers at the Go/No-Go meetings, for instance.

* updates should not disrupt the user experience. Fedora, being community-supported, is probably unsuitable for many business tasks. Let's not invalidate Fedora for less technically minded people too.

* Fedora releases do have bugs. After any release, we should strive to lower the total bug count, not introduce massive changes that will in most cases add more defects.

* upgrading at least some software in Fedora n is OK, but upgrading Fedora n-1 packages shouldn't be our priority. Most of our users who want the latest software should (and probably do) run the latest Fedora anyway.

I encourage you all to read Máirín's excellent post on new users and free software:
http://mairin.wordpress.com/2010/10/01/you-must-be-this-tall-to-ride-__/

We need to ask ourselves what happens when the software a user relies on every day ceases to work after an update, and how difficult it is for these people to find answers, and a fix, online.

Please note that I am *not* saying all updates are evil.
Yet I think that avoiding massive breakage in critpath packages should be one of our priorities.

François

--
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel