Re: measuring success

On Fri, Jul 02, 2010 at 12:20:21PM -0400, Will Woods wrote:

> Therefore: I propose that we choose a few metrics ("turnaround time on
> security updates", "average number of live updates with broken
> dependencies per day", etc.). Then we begin measuring them (and, if
> possible, collect historical, pre-critpath data to compare that to).
> 
> I'm willing to bet that these metrics have improved since we started the
> critpath policies before F13 release, and will continue to improve over
> the course of F13's lifecycle and the F14 development cycle.

I am interested in these metrics, too. Afaik this would be the first time
in the update testing discussion that we have metrics that can actually be
used to evaluate the policy. But imho the turnaround time is interesting
not only for security updates, but for all updates that fix bugs, i.e.
probably most non-newpackage updates.
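
To make that concrete, here is a minimal sketch of how such a turnaround
metric could be computed. The update names and timestamps below are made
up; in practice they would come from the updates system's history:

from datetime import datetime
from statistics import mean

# Made-up sample data: (update, submitted to updates-testing, pushed to stable).
updates = [
    ("foo-1.2-1.fc13", "2010-06-01 10:00", "2010-06-08 09:30"),
    ("bar-0.9-3.fc13", "2010-06-03 14:20", "2010-06-17 11:00"),
    ("baz-2.0-1.fc13", "2010-06-05 08:45", "2010-06-12 16:10"),
]

FMT = "%Y-%m-%d %H:%M"

def turnaround_days(submitted, pushed):
    # Days between submission to updates-testing and the stable push.
    delta = datetime.strptime(pushed, FMT) - datetime.strptime(submitted, FMT)
    return delta.total_seconds() / 86400.0

print("mean turnaround: %.1f days"
      % mean(turnaround_days(s, p) for _, s, p in updates))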

Btw, on a related issue: how do provenpackagers properly test for broken
deps manually? The case where two updates in updates-testing depend on
each other, so that one cannot be installed without the other, seems hard
to verify manually. If only one of the two updates is pushed to stable,
there will be a broken dependency. I know that the fix is to bundle the
builds of both updates into one update, but how is this tested?
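
For illustration, here is a toy sketch of the dependency-closure check
involved. The package names and capabilities are made up; a real check
would run something like repoclosure against the actual stable and
updates-testing repositories:

# Toy model of the broken-dependency scenario described above.
stable = {
    # package -> (provides, requires)
    "foo-1.0": ({"foo", "libfoo.so.1"}, set()),
    "app-1.0": ({"app"}, {"libfoo.so.1"}),
}

# Two interdependent updates in updates-testing: the new app needs the
# new library soname, so both builds have to go stable together.
testing = {
    "foo-2.0": ({"foo", "libfoo.so.2"}, set()),
    "app-2.0": ({"app"}, {"libfoo.so.2"}),
}

def broken_deps(repo):
    # Return the requires that no package in the repo provides.
    provided = set().union(*(prov for prov, _ in repo.values()))
    required = set().union(*(req for _, req in repo.values()))
    return required - provided

def push_to_stable(pushed):
    # Simulate pushing only the given testing updates to stable;
    # the new build replaces the old build of the same package.
    repo = dict(stable)
    for name in pushed:
        repo[name] = testing[name]
        repo.pop(name.split("-")[0] + "-1.0", None)
    return repo

# Pushing only app-2.0 leaves libfoo.so.2 unresolved:
print(broken_deps(push_to_stable(["app-2.0"])))              # {'libfoo.so.2'}
# Pushing both updates keeps the closure intact:
print(broken_deps(push_to_stable(["app-2.0", "foo-2.0"])))   # set()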

Regards
Till

