Re: cephfs survey results

On 05/11/14 11:47, Sage Weil wrote:
On Wed, 5 Nov 2014, Mark Kirkwood wrote:
On 04/11/14 22:02, Sage Weil wrote:
On Tue, 4 Nov 2014, Blair Bethwaite wrote:
On 4 November 2014 01:50, Sage Weil <sage@xxxxxxxxxxxx> wrote:
In the Ceph session at the OpenStack summit someone asked what the CephFS
survey results looked like.

Thanks Sage, that was me!

   Here's the link:

          https://www.surveymonkey.com/results/SM-L5JV7WXL/

In short, people want

fsck
multimds
snapshots
quotas

TBH I'm a bit surprised by a couple of these and hope maybe you guys
will apply a certain amount of filtering on this...

fsck and quotas were there for me, but multimds and snapshots are what
I'd consider "icing" features - they're nice to have but not on the
critical path to using cephfs instead of e.g. nfs in a production
setting. I'd have thought stuff like small file performance and
gateway support was much more relevant to uptake and
positive/pain-free UX. Interested to hear others' rationale here.

Yeah, I agree, and am taking the results with a grain of salt.  I
think the results are heavily influenced by the order they were
originally listed (I wish surveymonkey would randomize it for each
person or something).

fsck is a clear #1.  Everybody wants multimds, but I think very few
actually need it at this point.  We'll be merging a soft quota patch
shortly, and things like performance (adding the inline data support to
the kernel client, for instance) will probably compete with getting
snapshots working (as part of a larger subvolume infrastructure).  That's
my guess at least; for now, we're really focused on fsck and hard
usability edges and haven't set priorities beyond that.

We're definitely interested in hearing feedback on this strategy, and on
peoples' experiences with giant so far...


Heh, not necessarily - I put multi mds in there, as we want the cephfs part to
be similar to the rest of ceph in its availability.

Maybe it's because we are looking at plugging it in with an Openstack setup and
for that you want everything to 'just look after itself'. If on the other hand
we were merely wanting an nfs replacement, then sure, multi mds is not so
important there.

Important clarification: "multimds" == multiple *active* MDS's.  "single
mds" means 1 active MDS and N standy's.  One perfectly valid strategy,
for example, is to run a ceph-mds on *every* node and let the mon pick
whichever one is active.  (That works as long as you have sufficient
memory on all nodes.)
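
(For concreteness, a minimal sketch of that sort of setup - the daemon
names and hostnames below are made up, so adjust for your own cluster -
is just a ceph.conf section per MDS daemon, with one ceph-mds started on
each host:

    [mds.node1]
            host = node1

    [mds.node2]
            host = node2

    [mds.node3]
            host = node3

    # then on each host, e.g.:
    ceph-mds -i node1

With max_mds left at its default of 1, the monitors keep one of these
active and treat the others as standbys.)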


Righty - so I think I (plus a few others perhaps) misunderstood the nature of the 'promotion mechanism' for the 1-active, several-standby design. I was under the (possibly wrong) impression that you needed to 'do something' to make a standby active. If not, then yeah, it would be fine - sorry!
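
(A quick way to sanity-check this, assuming a reasonably recent ceph CLI, is

    ceph mds stat

which reports which daemon holds the active rank and how many daemons are
up:standby; promotion of a standby when the active MDS fails is handled by
the monitors, with no manual step needed.)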

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



