On Wed, Mar 03, 2010 at 11:07:27AM -0500, Seth Vidal wrote:
>
> On Wed, 3 Mar 2010, Till Maas wrote:
>
> > On Wed, Mar 03, 2010 at 08:42:57AM -0500, Seth Vidal wrote:
> >
> >> On Wed, 3 Mar 2010, Till Maas wrote:
> >
> >>> Are there even any metrics about how many bad updates happened? For me,
> >>> bugs that can be fixed by issuing an update are a lot more common than
> >>> regressions or new bugs introduced with updates. If updates are slowed
> >>> down, this will get even worse, especially because the proposal is to
> >>> use time instead of test coverage as the criterion to push an update to
> >>> stable.
> >>
> >> Actually the proposal is time AND test coverage.
> >
> > I might have misunderstood it, but afaics it only says that an update
> > will be considered tested because it spent time in updates-testing. But
> > this is not even true nowadays, even when packages stay in
> > updates-testing for a long time.
>
> Having more time opens us up to more testing days and in the near future
> autoqa to help us bounce obviously bad things.

This statement does not address whether packages that stay in
updates-testing for a long time are actually subject to any testing. Btw.
I am also pretty sure that most of the manual testing time would be better
spent writing automated tests, unless you consider merely updating to
updates-testing and seeing whether something bad happens to be sufficient
testing. But even that is not enough to find regressions, because one
needs to investigate how to reproduce the problem and then also test the
stable/release version of the package to know whether it is a regression
or just a bug that was not triggered before.

Regards
Till
--
devel mailing list
devel@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/devel