Re: Safe to move misplaced hosts between failure domains in the crush tree?

Correct, this should only result in misplaced objects. 

> We made a mistake when we moved the servers physically so while the replica 3 is intact the crush tree is not accurate.

Can you elaborate on that? Does this mean that after the physical move, multiple hosts ended up in the same datacenter? In that case, once you correct the CRUSH layout, the cluster would run with misplaced objects and no way to fully rebalance pools that use a datacenter CRUSH rule.
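If the hosts really are in distinct datacenters and only the tree is wrong, a minimal sketch of the fix could look like the following. The host/datacenter pairings are illustrative (taken from your tree, but you need to match them to physical reality); setting the norebalance flag first lets you verify the corrected tree before backfill starts:

```shell
# Pause rebalancing so the moves can be verified before backfill starts
ceph osd set norebalance

# Move each host bucket to the datacenter it is physically in
# (pairings below are examples -- use your actual layout)
ceph osd crush move ceph-flash1 datacenter=HX1
ceph osd crush move ceph-flash2 datacenter=714

# Verify the corrected tree and check how much data is now misplaced
ceph osd crush tree
ceph status

# Allow backfill to proceed
ceph osd unset norebalance
```

With replica 3 intact, `ceph status` should report misplaced objects only, not degraded ones, while backfill runs.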

Cheers!

--

Matthias Grandl
Head Storage Engineer
matthias.grandl@xxxxxxxx <mailto:matthias.grandl@xxxxxxxx>

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx

> On 12. Jun 2024, at 09:13, Torkil Svensgaard <torkil@xxxxxxxx> wrote:
> 
> Hi
> 
> We have 3 servers for replica 3 with failure domain datacenter:
> 
>  -1         4437.29248  root default 
> -33         1467.84814      datacenter 714 
> -69           69.86389          host ceph-flash1 
> -34         1511.25378      datacenter HX1 
> -73           69.86389          host ceph-flash2 
> -36         1458.19067      datacenter UXH 
> -77           69.86389          host ceph-flash3 
> 
> We made a mistake when we moved the servers physically so while the replica 3 is intact the crush tree is not accurate.
> 
> If we just remedy the situation with "ceph osd crush move ceph-flashX datacenter=Y" we will just end up with a lot of misplaced data and some churn, right? Or will the affected pool go degraded/unavailable?
> 
> Mvh.
> 
> Torkil
> -- 
> Torkil Svensgaard
> Sysadmin
> MR-Forskningssektionen, afs. 714
> DRCMR, Danish Research Centre for Magnetic Resonance
> Hvidovre Hospital
> Kettegård Allé 30
> DK-2650 Hvidovre
> Denmark
> Tel: +45 386 22828
> E-mail: torkil@xxxxxxxx
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
