Hi Andre,

I think what you really want to look at is stretch mode. There have been long discussions on this list about why a crush rule with rep 4 and 2 copies per DC will not handle a DC failure as expected. Stretch mode will make sure writes happen in a way that prevents split-brain scenarios. Hand-crafted crush rules for this purpose require 3 or more DCs.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Janne Johansson <icepic.dz@xxxxxxxxx>
Sent: Wednesday, November 20, 2024 11:30 AM
To: Andre Tann
Cc: ceph-users@xxxxxxx
Subject: Re: Crush rule examples

> Sorry, sent too early. So here we go again:
> My setup looks like this:
>
> DC1
>   node01
>   node02
>   node03
>   node04
>   node05
> DC2
>   node06
>   node07
>   node08
>   node09
>   node10
>
> I want a replicated pool with size=4. Two copies should go in each DC,
> and then no two copies on a single node.
> How can I describe this in a crush rule?

This post seems to show that, except they have their root named "nvme"
and they split on rack and not dc, but that is not important.

https://unix.stackexchange.com/questions/781250/ceph-crush-rules-explanation-for-multiroom-racks-setup

with the answer at the bottom:

  for example this should work as well, to have 4 replicas in total,
  distributed across two racks:

  step take default class nvme
  step choose firstn 2 type rack
  step chooseleaf firstn 2 type host

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
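
Translated to Andre's hierarchy, a complete rule along those lines could look
like the sketch below. This is only a sketch: it assumes the root bucket is
named "default" and that DC1/DC2 are buckets of type "datacenter" in the CRUSH
map; the rule name and id are placeholders and must be adapted to your map.

  rule replicated_two_dc {
      id 5
      type replicated
      # start at the root of the hierarchy
      step take default
      # pick both datacenters
      step choose firstn 2 type datacenter
      # then two distinct hosts (OSD leaves) inside each datacenter
      step chooseleaf firstn 2 type host
      step emit
  }

With pool size=4 this places two copies in each DC and never more than one
copy per host. As Frank points out, though, such a rule by itself will not
behave nicely when a whole DC goes down; that is what stretch mode addresses.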
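
For the stretch mode Frank mentions, the setup on recent releases looks
roughly like the sketch below. It assumes five monitors a-e, locations
dc1/dc2 for the two data sites plus a third location dc3 for the tiebreaker
monitor, and a datacenter-aware rule named stretch_rule like the one above;
check the stretch cluster documentation for your release before running it.

  # use the connectivity-based election strategy required by stretch mode
  ceph mon set election_strategy connectivity
  # tell each monitor where it lives in the CRUSH hierarchy
  ceph mon set_location a datacenter=dc1
  ceph mon set_location b datacenter=dc1
  ceph mon set_location c datacenter=dc2
  ceph mon set_location d datacenter=dc2
  ceph mon set_location e datacenter=dc3
  # enable stretch mode with e as tiebreaker, splitting on the datacenter level
  ceph mon enable_stretch_mode e stretch_rule datacenter

The tiebreaker monitor has to live outside the two data DCs (a small third
site or a VM elsewhere), which is what lets the cluster tell a dead DC apart
from a netsplit and keep accepting writes safely.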