On 04/11/14 22:02, Sage Weil wrote:
> On Tue, 4 Nov 2014, Blair Bethwaite wrote:
>> On 4 November 2014 01:50, Sage Weil <sage@xxxxxxxxxxxx> wrote:
>>> In the Ceph session at the OpenStack summit someone asked what the CephFS
>>> survey results looked like.
>> Thanks Sage, that was me!
>>> Here's the link:
>>>
>>> https://www.surveymonkey.com/results/SM-L5JV7WXL/
>>>
>>> In short, people want
>>> - fsck
>>> - multimds
>>> - snapshots
>>> - quotas
>> TBH I'm a bit surprised by a couple of these and hope maybe you guys
>> will apply a certain amount of filtering on this...
>>
>> fsck and quotas were there for me, but multimds and snapshots are what
>> I'd consider "icing" features - they're nice to have but not on the
>> critical path to using cephfs instead of e.g. nfs in a production
>> setting. I'd have thought stuff like small file performance and
>> gateway support was much more relevant to uptake and
>> positive/pain-free UX. Interested to hear others' rationale here.
> Yeah, I agree, and am taking the results with a grain of salt. I
> think the results are heavily influenced by the order they were
> originally listed (I wish surveymonkey would randomize it for each
> person or something).
>
> fsck is a clear #1. Everybody wants multimds, but I think very few
> actually need it at this point. We'll be merging a soft quota patch
> shortly, and things like performance (adding the inline data support to
> the kernel client, for instance) will probably compete with getting
> snapshots working (as part of a larger subvolume infrastructure). That's
> my guess at least; for now, we're really focused on fsck and hard
> usability edges and haven't set priorities beyond that.
>
> We're definitely interested in hearing feedback on this strategy, and on
> peoples' experiences with giant so far...
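For anyone following along, the quota and snapshot interfaces Sage
refers to are driven through the mounted filesystem itself, so no
special client API is needed. A minimal sketch in Python, assuming a
CephFS mount at a made-up path (quotas are directory xattrs; snapshots
are mkdirs inside the hidden .snap directory):

    import os

    cephfs_dir = "/mnt/cephfs/projects/demo"  # hypothetical mount point

    # Quotas hang off directories as extended attributes; the MDS
    # enforces them against the whole subtree (hence "soft": clients
    # may briefly overshoot before writes start failing).
    os.setxattr(cephfs_dir, "ceph.quota.max_bytes", b"10737418240")  # 10 GiB
    os.setxattr(cephfs_dir, "ceph.quota.max_files", b"100000")

    # Read a quota back.
    print(os.getxattr(cephfs_dir, "ceph.quota.max_bytes").decode())

    # Snapshots: creating a directory inside the magic .snap dir
    # snapshots the subtree; removing it deletes the snapshot.
    os.mkdir(os.path.join(cephfs_dir, ".snap", "before-upgrade"))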
Heh, not necessarily - I put multi-mds in there, as we want the cephfs
part to be similar to the rest of ceph in its availability.

Maybe it's because we are looking at plugging it in with an OpenStack
setup, and for that you want everything to 'just look after itself'. If
on the other hand we were wanting merely an nfs replacement, then sure,
multi-mds is not so important there.
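If it helps to see the knob involved: once multiple active MDS daemons
are supported, turning them on should be a one-liner. A sketch, assuming
the "ceph fs set" CLI form and a filesystem named "cephfs" - both
placeholders, since the exact command has changed across releases:

    import subprocess

    # Raise the number of active MDS daemons from 1 to 2; any extra
    # running MDS daemons beyond max_mds remain standbys for failover.
    subprocess.run(["ceph", "fs", "set", "cephfs", "max_mds", "2"],
                   check=True)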
regards
Mark