Re: Removing pool in nautilus is incredibly slow

Thanks for the hint.
I tried it, but it doesn't seem to change anything...

Moreover, since the OSDs seem quite loaded, I regularly had some OSDs marked down, which triggered new peering and thus even more load! I set the nodown flag, but I still have some OSDs (wrongly) reported as down (and back up within a minute), which generates peering and remapping. I don't really understand what the nodown flag actually does! Is there a way to tell Ceph not to peer immediately after an OSD is reported down (say, wait for 60 s)?

I am thinking about restarting all the OSDs (or maybe the whole cluster) to change osd_op_queue_cut_off to high and osd_op_thread_timeout to something higher than 15, but I don't think that will really improve the situation.
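A rough sketch of the commands in question (assuming a Nautilus cluster; osd_heartbeat_grace is mentioned only as a possible knob for delaying down reports, and the values are purely illustrative):

# Flag that prevents OSDs from being marked down by the monitors.
ceph osd set nodown

# Lengthen the heartbeat grace period before an OSD is reported down
# (the default is 20 seconds; 60 is only an example value).
ceph config set osd osd_heartbeat_grace 60

# Show whether a given option can be changed at runtime or needs an OSD restart.
ceph config help osd_op_queue_cut_off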
F.


On 25/06/2020 at 14:26, Wout van Heeswijk wrote:
Hi Francois,

Have you already looked at the option "osd_delete_sleep"? It will not speed up the process, but it will give you some control over your cluster performance.

Something like:

ceph tell osd.\* injectargs '--osd_delete_sleep 1'
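If you want the setting to survive OSD restarts, the centralized configuration database can be used instead (a sketch assuming Nautilus; the 1-second value is only an example):

# Persist the throttle in the cluster's central config database.
ceph config set osd osd_delete_sleep 1

# Check the value currently configured for the osd section.
ceph config get osd osd_delete_sleep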
Kind regards,

Wout
42on
On 25-06-2020 09:57, Francois Legrand wrote:
Does someone have an idea ?
F.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



