CephFS removal.

Hi All,

I'm having a few problems removing a CephFS file system.

I want to remove my current CephFS pools (they were only used for test data), wiping all the data in them, and start a fresh file system on the same cluster.

I have looked over the documentation but I can't find anything on this. I have an object store pool, which I don't want to remove, but I'd like to remove the CephFS pools and recreate them.

My cephfs is called ‘data’.
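
For reference, the end state I'm after is roughly the sequence below. The pool names and PG count are just placeholders for this email rather than my real ones, and I'm only assuming these are the right commands from what I've pieced together:

  ceph fs ls                    # confirm which pools belong to the 'data' file system
  ceph fs delete data           # remove the file system itself
  ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
  ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
  ceph osd pool create cephfs_data 128
  ceph osd pool create cephfs_metadata 128
  ceph fs new data cephfs_metadata cephfs_data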

Running ceph fs delete data returns: Error EINVAL: all MDS daemons must be inactive before removing filesystem

To make an MDS inactive I believe the command is: ceph mds deactivate 0

That returns: telling mds.0 135.248.53.134:6809/16692 to deactivate

Checking the status of the MDS with ceph mds stat returns: e105: 1/1/0 up {0=node2=up:stopping}

This has been sitting at this status for the whole weekend with no change. I don’t have any clients connected currently.
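
At this point I'm tempted to just stop the daemon or mark it failed rather than keep waiting, something along these lines (assuming a systemd install; node2 is my MDS host), though I don't know if that's the supported route:

  systemctl stop ceph-mds@node2     # on the MDS host itself
  ceph mds fail 0                   # or mark rank 0 failed from a mon node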

When I try to just remove the pools manually, it's not allowed, as there is a CephFS file system on them.

I'm happy that all of the failsafes to stop someone removing a pool are working correctly.

If this is currently not doable, is there a way to quickly wipe a CephFS file system? Using rm from a kernel client is really slow.
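
For what it's worth, the slow approach I mean is just deleting everything through a kernel mount, e.g. (the mount point here is only an example):

  rm -rf /mnt/cephfs/*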

Many thanks

Warren Jeffs
