> On Jul 31, 2020, at 8:47 AM, STARK, BARBARA H <bs7652@xxxxxxx> wrote:
>
> Going back in the thread to this comment...
>
>> To hums: the default *appears* to be "piano" from the headcount if you do not interact; therefore, graphing hum volume by headcount includes non-interactors, and this "moves the dial".
>>
>> If not, then why is there no clear 'abstain' button? And what is the logic behind how the total sum and hum weighting are applied?
>>
>> APNIC designed a similar (in intent) system called "confer" to show the dynamic state of the room. It's hard. It's hard to explain and get this stuff right.
>>
>> I did not find that the hum indicated anything I felt I trusted. Experientially, "hearing" a hum and being told "here is the weighted sum of things you cannot see" are completely different.
>>
>> I would prefer to be told how many hummed, how many explicitly chose to say "no view", and how many did not participate, and I would prefer to be told how the hum weighting occurs. Not in an RFC: on the tool, all the time.
>
> I think it's often difficult to define the right or a meaningful metric for a set of measurements.
> In a physical room, WG chairs are able to look at how many people are in the room, and whether the hum is loudly (or softly) emanating from a few people or from the room as a whole. The chair bases their consensus metric on all these visible and audible measurements.
> What George said in this email resonated with me --
> I would prefer to get the raw measurements rather than have the tool define a metric that is inconsistent with my metric.
> If we could just be provided with (1) the number of people in the room (which we already have) and (2) the number of people humming at each volume level (loud/medium/soft), and trust the chairs not to consider this a vote (which I think should be a reasonable expectation), it would be easier for the chair to mentally create a consensus metric, rather than have a metric imposed on them by the tool.

That is my view as well. (A rough sketch of the kind of raw-counts display Barbara describes is at the end of this message.)

Bob

> Barbara
>
>> On Thu, Jul 30, 2020 at 2:33 PM John C Klensin <john-ietf@xxxxxxx> wrote:
>>>
>>> Without singling out any particular comment, I think there are at least two things that have gotten lost in the discussions and suggestions. I assume that, in at least some cases, people didn't know.
>>>
>>> First, I don't know whether this should have been made explicit earlier or not, but this is not the first time the IETF has used Meetecho. Many of us have been using it for remote participation for years, and a great deal of effort has gone into making it work smoothly for the IETF's way of working [1]. I assume we are probably a little more critical than many of their customers, but I assume "our" changes have become, possibly with small variations, part of their main product offering. I believe their other customers have included many setups that are all remote, or all remote apart from a very small number of people in a central location, so the assumption in some messages that Meetecho has never been used before in an all-remote situation (or very close to it) is probably incorrect. I don't know if the latter is accurate but, if it is important, I think we should ask rather than jumping to conclusions.
>>>
>>> Second, many changes have occurred, at least to the user interfaces, between our use for remote participants at IETF 106 and this week.
>>> Personally, I like some of the changes, but I believe others show signs of having been done in haste, with too little thought and/or time for testing and review. I accept Jay's assertion that those changes were not micromanaged by the IETF leadership or staff, but note that Greg Wood indicated during and after one of the test sessions that at least some of those changes had been made at the behest of an IETF design committee, and that the I-D and discussions of the hum feature are quite explicit that the specifications came from the IESG.
>>>
>>> It is probably helpful to remember something else I learned a half-century ago about UI design. An experimental psychologist colleague I worked with was fond of saying that, when people try to evaluate a system, what they already know almost always comes out ahead (obviously, just because they are used to it). For those who did not actively use Meetecho for remote participation during IETF 106 and earlier, who have spent a significant fraction of the last months on Zoom, WebEx, GoToMeeting, and their competitors, and who did not attend the test sessions, Meetecho probably feels very strange and is at a significant disadvantage. I recommend giving it a chance and doing so with an open mind.
>>>
>>> If, as appears to be the case from the timing of the announcements, all of this was done on relatively short notice, we should be impressed that it works and should be identifying issues and making suggestions for improvements. If we want to make suggestions about replacing all of it with COTS software, we should consider how much effort has gone into trying to adapt Meetecho to IETF needs and remember that, before Meetecho came along, we tried to do remote participation with WebEx and, well, it didn't work out very well.
>>>
>>> best,
>>>    john
>>>
>>> [1] In the interest of full disclosure, I participated in a few design sessions along with Alexa, Ray, and some Meetecho staff (and maybe others; I don't remember). I think I got sucked in because I started being intermittently involved in research on user interface and usability issues in distributed office teleconferencing systems in the early 1980s, studies that involved real experimental psychologists and controlled comparisons of different approaches. And I may have been compensated with some travel reimbursements or a registration fee waiver or two -- not significant enough that I remember.
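Here is the sketch I mentioned above. It is only meant to make concrete the contrast between the raw counts George and Barbara are asking for and a single tool-defined number. Everything in it is hypothetical: the field names, the "no view" option, and the weights are invented for illustration, and nothing here reflects how Meetecho actually records or aggregates hums.

# Hypothetical sketch only: not Meetecho's data model or weighting.
from dataclasses import dataclass

@dataclass
class HumResult:
    participants: int  # people in the session (we already see this)
    loud: int          # explicit "loud" hums
    medium: int        # explicit "medium" hums
    soft: int          # explicit "soft" hums
    no_view: int       # explicitly chose "no view" / abstain

    @property
    def did_not_interact(self) -> int:
        # Everyone who neither hummed nor abstained.
        return self.participants - (self.loud + self.medium + self.soft + self.no_view)

def weighted_sum(r: HumResult) -> float:
    # One opaque number of the kind George describes; the weights are invented.
    # Note that if a tool silently counted non-interactors as "soft", this
    # number would move even though nobody acted.
    return (3 * r.loud + 2 * r.medium + 1 * r.soft) / max(r.participants, 1)

def raw_counts(r: HumResult) -> str:
    # The presentation Barbara asks for: just report the measurements.
    return (f"{r.participants} in room: {r.loud} loud, {r.medium} medium, "
            f"{r.soft} soft, {r.no_view} no view, "
            f"{r.did_not_interact} did not interact")

example = HumResult(participants=250, loud=40, medium=25, soft=10, no_view=15)
print(raw_counts(example))    # the chair forms their own judgement from this
print(weighted_sum(example))  # a single number whose construction is hidden

The point is only that the first line of output tells the chair everything, including how many people did not act at all, while the second is one number whose construction is hidden, which is exactly the property George says he does not trust.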