Re: OSDs get killed by OOM when other host goes down

So this means that if we are doing an operation that involves recovery, we should not start another one until this trimming is done? Let's say I've added a new host full of drives: once the rebalance has finished, should we leave the cluster to trim osdmaps before I add another host?
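For context, this is roughly what I'd watch while waiting: a minimal sketch (the helper name osdmap_epochs_held is just for illustration, and it assumes the 'ceph' CLI is available and that 'ceph report' exposes osdmap_first_committed / osdmap_last_committed, as recent releases do) of how many osdmap epochs the mons are still holding:

import json
import subprocess

def osdmap_epochs_held():
    # 'ceph report' prints a JSON blob; the committed osdmap range shows
    # how many historical epochs the mons have not yet trimmed.
    report = json.loads(subprocess.check_output(["ceph", "report"]))
    return report["osdmap_last_committed"] - report["osdmap_first_committed"]

if __name__ == "__main__":
    # A small, stable range suggests trimming is keeping up (the mons always
    # keep a few hundred epochs); a steadily growing range suggests they
    # still cannot trim.
    print("mons currently hold %d osdmap epochs" % osdmap_epochs_held())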

Istvan Szabo
Senior Infrastructure Engineer
---------------------------------------------------
Agoda Services Co., Ltd.
e: istvan.szabo@xxxxxxxxx
---------------------------------------------------

-----Original Message-----
From: Josh Baergen <jbaergen@xxxxxxxxxxxxxxxx> 
Sent: Friday, November 12, 2021 11:13 PM
To: Szabo, Istvan (Agoda) <Istvan.Szabo@xxxxxxxxx>
Cc: Marius Leustean <marius.leus@xxxxxxxxx>; ceph-users@xxxxxxx
Subject: Re:  Re: OSDs get killed by OOM when other host goes down


Hi Istvan,

> What can you do with osdmap in this case?

In modern versions of Ceph, IIRC, the mons will trim osdmaps as long as the number of "up" OSDs equals the number of "in" OSDs. (It's possible the cluster also needs to be active+clean; I don't recall.) In older versions (prior to a certain Nautilus build; I don't know offhand which one), "up" had to equal the total number of OSDs for the mons to trim osdmaps.
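If you want to script that check, here's a minimal sketch of the up == in condition (the helper name up_equals_in is just for illustration; it assumes 'ceph osd stat -f json' reports num_up_osds / num_in_osds, as current releases do, with a fallback for older releases that nest the counters under an "osdmap" key):

import json
import subprocess

def up_equals_in():
    stat = json.loads(subprocess.check_output(["ceph", "osd", "stat", "-f", "json"]))
    # Some older releases wrap the counters in an "osdmap" object.
    stat = stat.get("osdmap", stat)
    return stat["num_up_osds"] == stat["num_in_osds"]

if __name__ == "__main__":
    if up_equals_in():
        print("all 'in' OSDs are up -- the mons should be able to trim osdmaps")
    else:
        print("some 'in' OSDs are down -- osdmap trimming is likely blocked")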

Josh
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



