how do I destroy cephfs? (interested in cephfs + tiering + erasure coding)

Dear All,

Please forgive this post if it's naive; I'm still familiarising myself with cephfs!

I'm using Scientific Linux 6.6 with Ceph 0.87.1.

My first steps with cephfs using a replicated pool worked OK.

Now I'm trying to test cephfs via a replicated caching tier on top of an erasure-coded pool. I've created an erasure pool, but I cannot put it under the existing replicated pool.
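
For context, the sort of tiering setup I have in mind looks roughly like this; the pool names, profile name, k/m values and PG counts below are just placeholders, and I may well have part of the order wrong:

ceph osd erasure-code-profile set ecprofile k=2 m=1
ceph osd pool create ecpool 128 128 erasure ecprofile
ceph osd pool create cachepool 128 128
ceph osd tier add ecpool cachepool
ceph osd tier cache-mode cachepool writeback
ceph osd tier set-overlay ecpool cachepool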

My thought was to delete the existing cephfs and start again; however, I cannot delete it.

The errors are as follows:

[root@ceph1 ~]# ceph fs rm cephfs2
Error EINVAL: all MDS daemons must be inactive before removing filesystem

I've tried killing the ceph-mds process, but this does not prevent the above error.

I've also tried this, which also errors:

[root@ceph1 ~]# ceph mds stop 0
Error EBUSY: must decrease max_mds or else MDS will immediately reactivate

This also fails:

[root@ceph1 ~]# ceph-deploy mds destroy
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.21): /usr/bin/ceph-deploy mds destroy
[ceph_deploy.mds][ERROR ] subcommand destroy not implemented
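
For completeness, my reading of the docs suggests the MDS map has to be marked down and the rank failed before the filesystem can be removed; perhaps something like the sequence below, though I haven't verified it (cephfs2 and rank 0 just reflect my setup):

ceph mds cluster_down
ceph mds fail 0
ceph fs rm cephfs2 --yes-i-really-mean-it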

Am I doing the right thing in trying to wipe the original cephfs config before attempting to use an erasure-coded cold tier? Or can I just redefine the cephfs?
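
If wiping is the right approach, I assume recreating the filesystem would then be something along these lines, though I'm unsure whether "fs new" should be given the erasure pool or its cache pool as the data pool (pool names are again placeholders):

ceph osd pool create cephfs_metadata 128 128
ceph fs new cephfs2 cephfs_metadata cachepool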

Many thanks,

Jake Grimmett



