On 5 Mar 2013, at 17:03, Greg Farnum wrote:

> This is a companion discussion to the blog post at http://ceph.com/dev-notes/cephfs-mds-status-discussion/ — go read that!
>
> The short and slightly alternate version: I spent most of about two weeks working on bugs related to snapshots in the MDS, and we started realizing that we could probably do our first supported release of CephFS and the related infrastructure much sooner if we didn't need to support all of the whizbang features. (This isn't to say that the base feature set is stable now, but it's much closer than when you turn on some of the other things.) I'd like to get feedback from you in the community on what minimum supported feature set would prompt or allow you to start using CephFS in real environments — not what you'd *like* to see, but what you *need* to see. This will allow us at Inktank to prioritize more effectively and hopefully get out a supported release much more quickly! :)
>
> The current proposed feature set is basically what's left over after we've trimmed off everything we can think to split off, but if any of the proposed included features are also particularly important or don't matter, be sure to mention them (NFS export in particular — it works right now but isn't in great shape due to NFS filehandle caching).

fsck would be desirable; even something that just tells me a file is 'corrupted' or 'dangling' would be useful.

Quotas on sub-trees, along the lines of how the du feature is currently implemented, would be nice.

Some sort of smarter exporting of sub-trees would be nice too. E.g. if I mounted /ceph/fileset_1 as /myfs1 on a client, I'd like /myfs1 to report 100GB when I run df, instead of the 100TB that the entire /ceph/ system has. We're currently using RBDs here to limit what users get, so we can present a subset of the storage managed by Ceph to end users and they don't get excited at seeing 100TB available in CephFS (the numbers here are fictional). Managing one CephFS is probably easier than managing lots of RBDs in certain cases (a sketch of the RBD workaround follows below).

Regards,
Jimmy Tang

--
Senior Software Engineer, Digital Repository of Ireland (DRI)
High Performance & Research Computing, IS Services
Lloyd Building, Trinity College Dublin, Dublin 2, Ireland.
http://www.tchpc.tcd.ie/ | jtang@xxxxxxxxxxxx
Tel: +353-1-896-3847
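
A minimal sketch of the RBD workaround described above, assuming the python-rados and python-rbd bindings, the default 'rbd' pool, and illustrative names ('fileset_1', '/myfs1', the standard conffile path); it shows the idea, not the exact setup in use:

    import os
    import rados
    import rbd

    # Carve out a 100 GiB image; a filesystem created on top of it can never
    # report more than this, regardless of total cluster capacity.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('rbd')  # assumed pool name
        try:
            rbd.RBD().create(ioctx, 'fileset_1', 100 * 1024 ** 3)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

    # The image is then mapped, formatted, and mounted out of band on the
    # client (e.g. 'rbd map fileset_1', mkfs, mount on /myfs1). Once mounted,
    # df (i.e. statvfs) reports the 100 GiB image, not the 100 TB cluster:
    st = os.statvfs('/myfs1')  # hypothetical mount point
    print('total bytes:', st.f_blocks * st.f_frsize)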