Re: How to remove one of two filesystems


 



Thanks a lot. It works.
I could delete the filesystem and remove the pools (data and metadata).
But now I am facing another problem: removing the pools seems to take an incredibly long time to free the space (the pool I deleted was about 100 TB, and after 36 h only 10 TB has come back). In the meantime the cluster is extremely slow (an rbd export takes ~30 min for a 9 GB image, and writing 10 MB to CephFS takes half a minute!), which makes it almost unusable. According to https://medium.com/opsops/a-very-slow-pool-removal-7089e4ac8301 the removal of the deleted PGs is driven by deep scrubs, but I couldn't find a way to speed up the process or to bring the cluster back to a decent level of responsiveness.
Do you have a suggestion?
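
For reference, a minimal sketch of the knobs that usually govern PG deletion and scrub impact, assuming the slowdown really comes from removing the deleted PGs (option names as of Nautilus/Octopus-era releases; adjust to your version):

ceph osd set nodeep-scrub                        # pause deep scrubs cluster-wide while the pool drains
ceph osd set noscrub                             # optionally pause regular scrubs as well
ceph config show osd.0 | grep osd_delete_sleep   # inspect the current PG-deletion throttle on one OSD
ceph config set osd osd_delete_sleep_hdd 1       # lower value = faster deletion, but more client impact
ceph osd unset nodeep-scrub                      # re-enable scrubbing once the space is back
ceph osd unset noscrub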
F.


On 22/06/2020 at 16:40, Patrick Donnelly wrote:
On Mon, Jun 22, 2020 at 7:29 AM Frank Schilder <frans@xxxxxx> wrote:
Use

ceph fs set <fs_name> down true

After this, all MDSes of fs_name will become standbys. Now you can cleanly remove everything.

Wait for the fs to be shown as down in ceph status; the command above is non-blocking, but the shutdown takes a long time. Try to disconnect all clients first.
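
Put together, a sketch of the full removal sequence this implies, assuming the file system's pools are called cephfs_data and cephfs_metadata (placeholder names) and that the monitors allow pool deletion:

ceph fs set <fs_name> down true                  # ask all MDSes to stand down
ceph status                                      # wait until the fs is reported as down
ceph fs rm <fs_name> --yes-i-really-mean-it      # remove the file system definition
ceph config set mon mon_allow_pool_delete true   # only if pool deletion is currently disallowed
ceph osd pool rm cephfs_data cephfs_data --yes-i-really-really-mean-it
ceph osd pool rm cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it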
If you're planning to delete the file system, it is faster to just do:

ceph fs fail <fs_name>

which will remove all the MDSes and mark the cluster as not joinable.
See also: https://docs.ceph.com/docs/master/cephfs/administration/#taking-the-cluster-down-rapidly-for-deletion-or-disaster-recovery
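
Following that page, the faster path is roughly (a minimal sketch, with <fs_name> and the pool names as in the sketch above):

ceph fs fail <fs_name>                           # fail the MDS ranks and mark the cluster not joinable
ceph fs rm <fs_name> --yes-i-really-mean-it      # then remove the file system
# ...followed by removing the data and metadata pools as before.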



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



