Re: Need feedback for Ceph User Survey 2019

On Mon, 4 Nov 2019, Mike Perez wrote:
> Here's another draft with the current feedback. Sage, do you have thoughts
> on keeping any of the data protection questions per Lars's suggestions?

No opinion... so sure.

A few small edits:

- Total raw and usable capacity: let them type in a box (TB)
- Which Ceph release question should be release(s), w/ checkboxes
- What Ceph Manager modules question: s/did/do/
- RGW auth question: checkboxes, not radio buttons
- number of rgw sites: "Number of federated RGW sites", with an entry box

Then let's ship it!

Thanks!
sage


> 
> On Mon, Oct 28, 2019 at 1:32 PM Sage Weil <sage@xxxxxxxxxxxx> wrote:
> 
> > I did another pass and I think we can simplify.  I suggest we give up on
> > the detailed per-cluster stats from a manual survey and instead rely on
> > telemetry (and/or telemetry backport to mimic/luminous if we *really* want
> > that data), and then consolidate this into a handful of easy questions in
> > the general survey.
> >
> > On the general part,
> >
> > - telemetry 'why' question: make it a "check all that apply" + comment
> >
> > ...then add:
> >
> > - How many clusters do you operate?  [fill in number]
> > - Which Ceph releases do you run?  Check all that apply
> >    (list nautilus -> argonaut)
> > - Total aggregate cluster capacity in TB [fill in number]
> > - Largest cluster capacity in TB [fill in number]
> >
> > and then copy most of the other cluster answers back over to the main
> > one, converting anything that is a selection to a 'check all that
> > apply'.  e.g.,
> >
> > - Which Ceph packages  (copy/move from cluster survey, but check all that
> > apply)
> > - What operating system(s) are you using on cluster nodes?  (copy/move,
> > but check all that apply)
> >
> > A few things could be dropped to simplify:
> >
> > - how many hosts, osds, osds per node, osd backends, redundancy
> > - number of hours (unless we can simplify this?)
> > - data protection scheme
> > - size/min_size
> > - which osd layout features
> > - what type of NICs
> > - RGW: 'do you use snapshots?' subquestion (not a thing)
> >
> > Change:
> > - What process architectures
> >   - add Power
> > - Typical number of fs clients (per cluster)
> > - number of files ... (for largest cluster, if multiple clusters)
> > - MDS cache size ... (for largest cluster, if multiple clusters)
> > - number of active MDS for largest cluster [just type in value, not a
> > multiple choice]
> >
> > A few of these still fall into the category of things we should capture
> > with telemetry... I'm not quite sure where to draw the line, but generally
> > think we should lean toward simplicity.  Like, all of those Change items :)
> >
> > Lars, WDYT?
> >
> > sage
> >
> >
> >
> > On Tue, 1 Oct 2019, Mike Perez wrote:
> >
> > >  Hi all,
> > >
> > > We conduct yearly user surveys to better understand how our users
> > > utilize Ceph. The Ceph Foundation collects the data under the Community
> > > Data License Agreement [0], which helps the community make a more
> > > informed decision about where our efforts in the development of future
> > > releases should go.
> > >
> > > Back in August, I asked the community to help draft the next survey
> > > [1]. I'm happy to provide a draft of the user survey for 2019. I'm
> > > sending this to the dev list in hopes of getting feedback before
> > > sending it to the Ceph users list.
> > >
> > > The first piece of feedback I received was to use something other than
> > > SurveyMonkey, since it is not available in some regions. I have been
> > > using another third-party service for our Ceph Days CFP forms, and
> > > luckily they offer a survey service that isn't blocked.
> > >
> > > A second question that came up was how to lay out questions for
> > > multiple cluster deployments. My idea was to keep our general Ceph user
> > > survey [2] separate from the deployment questions [3]. The general
> > > questions only need to be answered once, while the deployment survey
> > > can be answered multiple times to capture the different configurations.
> > > I'm looking into a way to link the answers of both surveys together.
> > >
> > > Any feedback, corrections or ideas?
> > >
> > > [0] - https://cdla.io/sharing-1-0/
> > > [1] - https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/Q3NCHOJN45DPPZUGDXFRO7A7E2W22YUO/
> > > [2] - https://ceph.io/wp-content/uploads/2019/10/Ceph-User-Survey-general.pdf
> > > [3] - https://ceph.io/wp-content/uploads/2019/10/Ceph-User-Survey-Clusters.pdf
> > >
> > > --
> > >
> > > Mike Perez
> > >
> > > he/him
> > >
> > > Ceph Community Manager
> > >
> > >
> > > M: +1-951-572-2633
> > >
> > > 494C 5D25 2968 D361 65FB 3829 94BC D781 ADA8 8AEA
> > > @Thingee <https://twitter.com/thingee>
> > > Thingee <https://www.linkedin.com/thingee>
> > >
> >
> 
_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx


