On Tue, Jul 1, 2014 at 2:33 PM, Stephen John Smoogen <smooge@xxxxxxxxx> wrote:
>
> On 1 July 2014 11:52, Josh Boyer <jwboyer@xxxxxxxxxxxxxxxxx> wrote:
>>
>> On Tue, Jul 1, 2014 at 1:43 PM, Stephen John Smoogen <smooge@xxxxxxxxx>
>> wrote:
>>
>> > This is the problem with asking about success... you have to deal with
>> > 'failure', as it becomes more likely if you are going to be ambitious.
>>
>> Dealing with failure should be something we do regardless of the reach
>> of our goals.
>
> Yes, it should, but (1) this is a big ship and (2) it looks like a sieve
> when you are inside it. Speaking as a former board member, we tend
> towards plugging the leak and getting to the next one versus anything
> that survives 1-2 releases. We tend towards saying we want to define
> success, and then after the release we get caught up in the latest
> fedora-devel/ambassadors/FUDCON operatics, and by the time that is dealt
> with we haven't gotten back to what we were going to try and measure in
> the first place.

That sounds like a problem with how the Board is composed and operates, not with the Project's ability to do this in and of itself. I say that as both a former and current Board member.

>> > Personally, I would prefer us to push more on how we deal with
>> > 'failure' and embrace it. How do we learn from it? How do we
>> > incorporate it into our structure and push more towards 'we are aiming
>> > to be the #1 choice for developers/system administrators', knowing we
>> > are going to fail a lot to get there and most likely fail in the goal
>> > itself? Instead we focus more on the journey.
>> >
>> > Hope the above makes some sort of sense.
>>
>> It makes sense, but it isn't particularly helpful in answering the
>> question. Saying "deal with failure" without having targets to reach
>> or miss is as ambiguous as talking about success. If you have some
>> goals you think would be measurable, please continue with those.
> Well, the first thing is figuring out the following:
>
> 1) What can be measured, and how is it measured?
> 2) Which things that can be measured mean anything, and which ones are
> 'fluff'? {A}
> 3) A definition of success that can be used towards each of those
> measurements, and a mitigation plan for failure.
> 4) Who gets the full-time job to manage each goal? {B}
>
> Anyway, what I would like for a goal is a full usability test after each
> release, with the results incorporated into the next release. This means
> that a set of tests is defined as what meets 'usability'... they are
> tested using documented methods by a defined 'third' party (mostly to
> deal with "Well, the 'Z group' people would of course say this passed
> usability, and I still can't get this to work"). The results are
> published and discussed, and a plan to work towards fixing whatever
> 'pain' points came out of the tests is agreed on. Also, each year the
> tests are meant to be more comprehensive, so that if we only test
> login/logout in release A, then in A+1 it's login/logout, find
> app/launch app/close app, etc. (overly simplified).

This sounds very desktop-centric. Would you like to see corresponding usability tests for Cloud and Server as well? Is there anything beyond "Fedora is a decently usable Linux distro" you would like to see the Project reach for? I'm asking because, while I have no problem with your proposal at face value, it seems very much in line with what we're already doing and have been doing for a long time: churn out another release, adjust for the next one.

> {A} Two things I can give you as almost complete fluff are active
> account numbers and web traffic. Account numbers are filled with lots of
> ghost accounts: people using an account to try and measure how active
> Fedora is for their own analytics company, spammers, and one-time
> contributors who got interested and never went beyond that.
> [Or the string of revenge accounts that seem to have been coming up,
> where some ex-boyfriend signs up their ex to various places as the
> equivalent of signing them up for magazines they didn't want.] The
> second fluff is website traffic. It takes a lot of work to figure out
> what is really important and how it is used. We have a ton of wiki
> articles which are 'dead' (old releases, content which doesn't make
> sense with the current release, users who no longer exist, etc.), yet
> they actually get a lot of traffic... why? Some of it is bots, some of
> it is someone who set up an archive cron job of webpages they mirror and
> forgot about it, etc. [Or the helpful new user who wants to show that
> Fedora is more popular than Ubuntu and sets up a bunch of Amazon
> machines to push page views up... that thankfully hasn't happened in a
> couple of years, but it used to when we put up stats.]

I agree that raw account numbers as a metric are fluff-ish, but _active_ account numbers, as defined by a suitable activity metric, can be helpful. E.g. if we can define what it takes for an account to be considered a "Contributor", then we can measure the number and kinds of contributors we have, which is important to know for long-term success.

> {B} This is the one that has pretty much killed this in the past with
> boards. We talked about it and then eventually realized that it was
> going to take a full-time person for most goals to actively be met. We
> tried having a board member assigned to it, but once it gets over 10-15
> hours a week of work... it falls into the 'I have a real job which needs
> me to do something, people' territory. What we could accomplish was the
> general 'did we release a distro that was bootable, workable, and one we
> were generally happy with', because we had people who were already
> full-time to make sure that goal was met.

People don't need to be on the Board to track and drive towards goals. I would argue that the reason this failed in the past is because of how it was approached.
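To make the "active contributor" idea concrete, here is a minimal sketch of the kind of activity metric being discussed. Everything in it is hypothetical: the event records, the 90-day window, and the two-action threshold are placeholders, not an actual Fedora definition; real data would come from project infrastructure rather than a hardcoded list.

```python
from datetime import date, timedelta

# Hypothetical activity records: (account, date_of_action).
# In practice these would come from project infrastructure, not a list.
EVENTS = [
    ("alice", date(2014, 6, 1)),
    ("alice", date(2014, 6, 15)),
    ("alice", date(2014, 6, 20)),
    ("bob", date(2013, 1, 5)),    # stale account: no recent activity
    ("carol", date(2014, 6, 28)),
]

def active_contributors(events, today, window_days=90, min_actions=2):
    """Accounts with at least `min_actions` actions in the last `window_days`.

    Both thresholds are illustrative; the point is that 'Contributor'
    becomes a defined, measurable term rather than a raw account count.
    """
    cutoff = today - timedelta(days=window_days)
    counts = {}
    for account, when in events:
        if when >= cutoff:
            counts[account] = counts.get(account, 0) + 1
    return sorted(a for a, n in counts.items() if n >= min_actions)

print(active_contributors(EVENTS, today=date(2014, 7, 1)))  # ['alice']
```

The useful property of a definition like this is that it filters out exactly the ghost accounts described above: a spam or one-shot account contributes at most one event and never crosses the threshold, while the window keeps long-dead accounts out of the count.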
josh
_______________________________________________
board-discuss mailing list
board-discuss@xxxxxxxxxxxxxxxxxxxxxxx
https://admin.fedoraproject.org/mailman/listinfo/board-discuss