Re: Memory usage of OSD

Mark, good news!

Adam, if you need more information or debugging help, feel free to contact me on IRC: xelexin
I can confirm that this issue exists in Luminous (12.2.12).


Regards,

Rafał Wądołowski

CloudFerro sp. z o.o.
ul. Fabryczna 5A
00-446 Warszawa
www.cloudferro.com


________________________________
From: Janne Johansson <icepic.dz@xxxxxxxxx>
Sent: Thursday, May 14, 2020 9:39 AM
To: Amudhan P <amudhan83@xxxxxxxxx>
Cc: Mark Nelson <mnelson@xxxxxxxxxx>; Rafał Wądołowski <rwadolowski@xxxxxxxxxxxxxx>; ceph-users@xxxxxxx <ceph-users@xxxxxxx>; Adam Kupczyk <akupczyk@xxxxxxxxxx>
Subject: Re:  Re: Memory usage of OSD



On Thu, 14 May 2020 at 03:52, Amudhan P <amudhan83@xxxxxxxxx> wrote:
For Ceph releases before Nautilus, osd_memory_target changes only take effect
after restarting the OSD service.
I had a similar issue on Mimic and did the same in my test setup.
Before restarting the OSD service, set osd nodown and osd noout (or
similar flags) to ensure the restart doesn't trigger OSD down and recovery.
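For pre-Nautilus releases, the change described above is typically a ceph.conf edit followed by an OSD restart rather than a runtime change. A rough sketch (the 2 GiB value is only an example; pick a target that fits your host's RAM):

```shell
# /etc/ceph/ceph.conf on the OSD host -- example value, adjust to your hardware:
#   [osd]
#   osd_memory_target = 2147483648
#
# The new target only takes effect after the OSDs restart:
systemctl restart ceph-osd.target
```

This is an ops/config fragment, not something to run blindly; on Nautilus and later, `ceph config set osd osd_memory_target <bytes>` can change the value at runtime instead.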

Noout and norebalance seem like good options to set before rebooting a host or restarting OSDs.

Nodown is kind of evil: it makes clients keep sending IO to the OSD as if it were still up, even though it isn't, so client IO can stall.
Also, with nodown set, things get bad if some failure occurs elsewhere while you are doing maintenance, since the cluster will keep sending IO to that part too.

Noout is ok: the cluster waits for the OSD to come back and sends requests to the other replicas in the meantime, without starting to rebuild a new replica; norebalance prevents rebalancing while you are doing this.
The PGs will be degraded (since they are missing one replica), but the cluster carries on.
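The flag workflow described above could be sketched roughly like this (assuming a healthy cluster and restarting one host's OSDs at a time; this needs a live cluster, so treat it as an ops sketch, not a script to paste):

```shell
# Prevent the cluster from reacting to the planned restart:
ceph osd set noout        # don't mark restarting OSDs "out" (no re-replication)
ceph osd set norebalance  # don't start rebalancing while we work

# Restart all OSD daemons on this host:
systemctl restart ceph-osd.target

# Watch until the OSDs are back up and PGs are active again
# (degraded PGs are expected while the OSDs are down):
ceph -s

# Then clear the flags:
ceph osd unset norebalance
ceph osd unset noout
```

Deliberately skipping nodown here, for the reasons given above: with nodown the OSDs are never marked down, so clients keep directing IO at daemons that aren't running.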

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



