Re: How to remove one of two filesystems

Hi Francois,

I have seen reports of poor performance like this from Nautilus onwards, and you might be hit by it. This might require a tracker ticket. There is a hypothesis that a regression affects the cluster's ability to run background operations (such as PG removal) properly.
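If the PG deletion itself is what is starving client I/O, it may be worth looking at the OSD deletion throttle. A minimal sketch, assuming a Nautilus-or-later cluster where the osd_delete_sleep options exist (the values below are only examples, not recommendations):

# show the current sleep inserted between PG deletion transactions
ceph config get osd osd_delete_sleep
ceph config get osd osd_delete_sleep_hdd

# increase the sleep to give client I/O more room (deletion gets slower)
ceph config set osd osd_delete_sleep_hdd 5

# or decrease it to reclaim space faster, at the cost of more impact
ceph config set osd osd_delete_sleep_hdd 0.5

Any override can be reverted with ceph config rm osd osd_delete_sleep_hdd once the pool data is gone.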

What you observe should not happen; I didn't see anything like this on Mimic when removing a 120 TB file system.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Francois Legrand <fleg@xxxxxxxxxxxxxx>
Sent: 24 June 2020 00:25:03
To: Patrick Donnelly; Frank Schilder
Cc: ceph-users
Subject: Re:  Re: How to remove one of two filesystems

Thanks a lot. It works.
I could delete the filesystem and remove the pools (data and metadata).
But now I am facing another problem: the removal of the pools seems to take an incredibly long time to free the space (the pool I deleted was about 100 TB, and in 36 h I got back only 10 TB). In the meantime, the cluster is extremely slow (an rbd extract takes ~30 min for a 9 GB image and writing 10 MB in CephFS takes half a minute!), which makes the cluster almost unusable.
It seems that the removal of deleted PGs is done by deep-scrubs, according to https://medium.com/opsops/a-very-slow-pool-removal-7089e4ac8301
But I couldn't find a way to speed up the process or to bring the cluster back to decent responsiveness.
Do you have a suggestion?
F.
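A rough sketch of one thing to check here (standard cluster flags, not a guaranteed fix): if deep-scrubs are indeed competing with the removal and with client I/O, scrubbing can be paused temporarily and re-enabled once the space has been reclaimed.

# pause scrubbing cluster-wide to see whether client I/O recovers
ceph osd set noscrub
ceph osd set nodeep-scrub

# watch the space being freed
ceph df
ceph -s

# re-enable scrubbing afterwards
ceph osd unset noscrub
ceph osd unset nodeep-scrub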


On 22/06/2020 at 16:40, Patrick Donnelly wrote:

On Mon, Jun 22, 2020 at 7:29 AM Frank Schilder <frans@xxxxxx> wrote:



Use

ceph fs set <fs_name> down true

After this, all MDSs of fs_name will become standbys. Now you can cleanly remove everything.

Wait for the fs to be shown as down in ceph status; the command above is non-blocking, but the shutdown takes a long time. Try to disconnect all clients first.
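To confirm the shutdown has completed, a quick sketch using the standard status commands (output format varies by release):

ceph fs status <fs_name>
ceph mds stat

Once all ranks are gone and the daemons show up as standbys, the file system and its pools can be removed.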



If you're planning to delete the file system, it is faster to just do:

ceph fs fail <fs_name>

which will remove all the MDSs and mark the cluster as not joinable.
See also: https://docs.ceph.com/docs/master/cephfs/administration/#taking-the-cluster-down-rapidly-for-deletion-or-disaster-recovery
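For completeness, a minimal sketch of the deletion steps that follow the fail, with <fs_name> and the pool names as placeholders (pool removal also requires mon_allow_pool_delete to be enabled):

ceph fs fail <fs_name>
ceph fs rm <fs_name> --yes-i-really-mean-it

ceph config set mon mon_allow_pool_delete true
ceph osd pool rm <metadata_pool> <metadata_pool> --yes-i-really-really-mean-it
ceph osd pool rm <data_pool> <data_pool> --yes-i-really-really-mean-it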



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



