On 1 July 2014 12:53, Josh Boyer <jwboyer@xxxxxxxxxxxxxxxxx> wrote:
On Tue, Jul 1, 2014 at 2:33 PM, Stephen John Smoogen <smooge@xxxxxxxxx> wrote:
>
>
>
> On 1 July 2014 11:52, Josh Boyer <jwboyer@xxxxxxxxxxxxxxxxx> wrote:
>>
>> On Tue, Jul 1, 2014 at 1:43 PM, Stephen John Smoogen <smooge@xxxxxxxxx>
>> wrote:
>>
>> > This is the problem with asking about success.. you have to deal with
>> > 'failure' as it becomes more likely if you are going to be ambitious.
>>
>> Dealing with failure should be something we do regardless of the reach of
>> goals.
>>
>
> Yes it should, but (1) this is a big ship and (2) it looks like a sieve when
> you are inside it. Speaking as a former board member, we tend towards
> plugging the leak and getting to the next one versus anything that survives
> 1-2 releases. We tend towards saying we want to define success and then
> after the release getting caught up in the latest
> fedora-devel/ambassadors/FUDCON operatics and by the time that is dealt with
> not having gotten back to what we were going to try and measure in the first
> place.

Sounds like a problem with how the Board is composed and operating,
not with the Project's ability to do this in and of itself. I say
that as both a former and current Board member.
Yes, it is. My experience has been that without a written 'constitution' of what the Board is meant to do and not do, it becomes caught up in a lot of 'Can we really do that? If we do that, how many people will leave? Why would we want to do that in that case?' and so on.
>> > Personally I would prefer us to push more on how we deal with 'failure'
>> > and
>> > embrace it. How do we learn from it, how do we incorporate it into our
>> > structure and push more towards 'we are aiming to be the #1 choice for
>> > developers/system administrators' but knowing we are going to fail a lot
>> > to
>> > get there and most likely fail in the goal itself.. instead we focus
>> > more on
>> > the journey.
>> >
>> > Hope the above makes some sort of sense.
>>
>> It makes sense, but it isn't particularly helpful in answering the
>> question. Saying "deal with failure" without having targets to reach
>> or miss is as ambiguous as talking about success. If you have some
>> goals you think would be measurable, please continue with those.
>>
>
> Well the first thing is figuring out the following:
>
> 1) What can be measured and how is it measured.
> 2) Which things that can be measured mean anything and which ones are
> 'fluff'. {A}.
> 3) A definition of success that can be used towards each of those
> measurements and a mitigation plan for failure.
> 4) Who gets the full time job to manage each goal. {B}
>
> Anyway, what I would like for a goal is a full usability test after each
> release with the results incorporated into the next release. This means that
> a set of tests are defined as what meets 'usability'... they are tested
> using documented methods by a defined 'third' party (mostly to deal with
> 'Well the 'Z group' people would of course say this passed usability and I
> still can't get this to work). The results are published, discussed and a
> plan to work towards fixing whatever 'pain' points came out of the tests are
> to be met. Also each year, the tests are meant to be more comprehensive so
> that if we only test login/logout release A, then A+1 its login/logout, find
> app/launch app/close app etc (overly simplified).

So this sounds very desktop-centric. Would you like to see
corresponding usability tests for Cloud and Server as well?
My apologies; I was trying to make this generic enough to cover the other groups. On Server, I would want to see how the API of rolekitd meets system administration goals: if a new systems administrator is asked to configure a mail server to meet a checklist, how hard is it for them to do so? Were they able to meet the desired goals? On Cloud, I would have similar items, depending on whether the aim is getting new users to spin up new clouds or having existing clouds meet some specific need.
Is there anything beyond "Fedora is a decently usable Linux distro"
you would like to see the Project reach for? I'm asking because while
I have no problem with your proposal at face value, it seems very much
in line with what we're already doing and have been doing for a long
time. Churn out another release, adjust for the next one.
I am looking at something different. I want a set of measures after the release where we see whether we actually made the fixes we aimed to make, or whether we threw the baby out with the bathwater as some vocal critics claim every release. We would also keep those results and compare them long term, versus the usual 'we fixed A and broke B and oooh, kittens'.
> {A} Two ones I can give you as almost complete fluff is active account
> numbers and web traffic. Account numbers are filled with lots of ghost
> accounts of either people using the account to try and measure how active
> Fedora is for their own analytics company, to spammers, to one time
> contributors who got interested and never went beyond that. [Or the string
> of revenge accounts that seem to have been coming up where some ex-boyfriend
> signs up their ex to various places as the equivalent of signing them up for
> magazines they didn't want] The second fluff is website traffic. It takes a
> lot of work to figure out what is really important and how it is used. We
> have a ton of wiki articles which are 'dead' (old releases, content which
> doesn't make sense with current release, users who no longer exist, etc)
> they actually get a lot of traffic... why? some of it is bots, some of it is
> someone set up an archive cron job of webpages they mirror and forgot about
> it, etc etc. [Or the helpful new user who wants to show that Fedora is more
> popular than Ubuntu and sets up a bunch of amazon machines to push page
> views up... that thankfully hasn't happened in a couple of years but used to
> when we put up stats.]

I agree account numbers as a metric is fluff-ish, but _active_ account
numbers as defined by a suitable activity metric can be helpful. E.g.
if we can define what it takes for an account to be considered a
"Contributor", then we can measure the number and kinds of
contributors we have, which is important to know for long term success.
The main hurdles in the past have been (a) people seeing most measures as 'popularity contests' and/or (b) people seeing it as an intrusion on their privacy to be measured. [And I am not looking forward to working out how to deal with 'right to be forgotten' as we measure more.]
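The "suitable activity metric" idea above could be sketched in a few lines. Everything in this sketch is an assumption for illustration: the event kinds, the 90-day window, the two-event threshold, and the in-memory event list (real data would presumably come from something like the project's message bus, not this structure).

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical event log of (account, event_kind, timestamp) tuples.
# The accounts, event kinds, and dates are made up for illustration.
EVENTS = [
    ("alice", "package_update", datetime(2014, 6, 3)),
    ("alice", "wiki_edit", datetime(2014, 6, 20)),
    ("bob", "account_created", datetime(2014, 6, 1)),   # ghost account
    ("carol", "package_update", datetime(2014, 2, 2)),  # stale account
]

def active_contributors(events, now, window_days=90, min_events=2):
    """Accounts with at least `min_events` substantive events
    (ignoring bare account creation) in the last `window_days` days."""
    cutoff = now - timedelta(days=window_days)
    counts = Counter(
        account
        for account, kind, when in events
        if kind != "account_created" and when >= cutoff
    )
    return {acct for acct, n in counts.items() if n >= min_events}

print(active_contributors(EVENTS, now=datetime(2014, 7, 1)))
# → {'alice'}: bob only created an account, and carol's last
#   activity falls outside the 90-day window.
```

The interesting policy questions live in the parameters, not the code: which event kinds count as a contribution, and how long an account can be quiet before it stops being "active".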
> {B} This is the one that has pretty much killed this in the past with
> boards. We talked about it and then eventually realized that it was going to
> take a full-time person for most goals to actively be met. We tried having a
> board-member assigned to it, but once it gets over 10-15 hours a week of
> work... it falls into the 'I have a real job which needs me to do something
> people'. What we could accomplish was the general... 'did we release a
> distro that was bootable, workable, and generally were happy with' because
> we had people who were already full-time to make sure that goal was met.

People don't need to be on the Board to track and drive towards goals.
I would argue that the reason this failed in the past is because of
how it was approached.
Agreed.
josh
_______________________________________________
board-discuss mailing list
board-discuss@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/board-discuss
Stephen J Smoogen.