Looks like we are going to put a hold on CephFS and use RBD until it is
fully supported. Which brings me to my next question.

I am trying to remove the MDS completely and seem to be having issues.
I disabled all mounts and disabled all the startup scripts.

// Cleaned the mdsmap
ceph mds newfs 0 1 --yes-i-really-mean-it

// Then tried this but got an error
ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
Error EBUSY: pool 'metadata' is in use by CephFS

// Tried this and it looks like a bug of some sort
ceph mds cluster_down
// Still get
mdsmap e78: 0/0/0 up
// Shouldn't it be down?

ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)

Do I need to start over and not add the MDS at all to get a clean state?

Thanks for your time

On Wed, May 21, 2014 at 12:18 PM, Wido den Hollander <wido at 42on.com> wrote:
> On 05/21/2014 09:04 PM, Scottix wrote:
>>
>> I am setting up a CephFS cluster and wondering about the MDS setup.
>> I know you are still hesitant to put the stable label on it, but I have
>> a few questions about what would be an adequate setup.
>>
>> I know active/active is not developed yet, so that is pretty much out
>> of the question right now.
>>
>> What about active/standby? How reliable is the standby? Or should a
>> single active MDS be sufficient?
>>
>
> Active/Standby is fairly stable, but I wouldn't recommend putting it into
> production right now.
>
> The general advice is always to run a recent Ceph version and a recent
> kernel as well, like 3.13 in Ubuntu 14.04.
>
> But the best advice: test your use case extensively! The more feedback,
> the better.
>
>> Thanks
>>
>
>
> --
> Wido den Hollander
> 42on B.V.
> Ceph trainer and consultant
>
> Phone: +31 (0)20 700 9902
> Skype: contact42on
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

--
Follow Me: @Scottix
http://about.me/scottix
Scottix at Gmail.com
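
For reference, the removal sequence from the message above as a minimal
sketch, assuming a single-filesystem Firefly (0.80.x) cluster whose data
and metadata pools have IDs 0 and 1 (matching the newfs call). The
"ceph mds stat" check is an addition here; the EBUSY error and the
"0/0/0 up" behavior are reproduced as reported in the thread, not fixed:

# Stop every ceph-mds daemon and unmount all CephFS clients first.

# Reset the mdsmap to a fresh, empty filesystem over data pool 0 and
# metadata pool 1 (this destroys any existing CephFS metadata):
ceph mds newfs 0 1 --yes-i-really-mean-it

# Mark the MDS cluster down; on 0.80.1 the map may still report
# "0/0/0 up" afterwards, as described above:
ceph mds cluster_down

# Inspect the current mdsmap state:
ceph mds stat

# Deleting the metadata pool still fails with EBUSY while the mdsmap
# references it:
ceph osd pool delete metadata metadata --yes-i-really-really-mean-it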