Re: EC Profiles & DR

Hi,

The post linked in the previous message is a good source for different approaches.

To provide some first-hand experience: I operated a pool with a 6+2 EC profile on 4 hosts for a while (until we got more hosts), and the "subdivide a physical host into two crush buckets" approach actually worked best. I tried essentially all the approaches described in the linked post, and the others all had pitfalls.
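For reference, a 6+2 profile with failure domain host can be created roughly like this (a sketch only; "ec-6-2" and "ec-pool" are placeholder names, not what we actually used):

  # EC profile with k=6, m=2 and host failure domain
  ceph osd erasure-code-profile set ec-6-2 k=6 m=2 crush-failure-domain=host
  # EC pool using that profile
  ceph osd pool create ec-pool erasure ec-6-2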

The procedure is more or less as follows (example commands after the list):

- add a second (logical) host bucket for each physical host by suffixing the host name with "-B" (ceph osd crush add-bucket <name> <type> <location>)
- move half the OSDs per host to this new host bucket (ceph osd crush move osd.ID host=HOSTNAME-B)
- make this location persist across OSD restarts (ceph config set osd.ID crush_location "host=HOSTNAME-B")
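As a sketch for one hypothetical physical host "ceph-01" with osd.0 to osd.5 (host name and OSD IDs are placeholders; repeat for the other hosts):

  # create the second (logical) host bucket and place it under the default root
  ceph osd crush add-bucket ceph-01-B host
  ceph osd crush move ceph-01-B root=default

  # move half of the host's OSDs into the new bucket (expect some data movement)
  ceph osd crush move osd.3 host=ceph-01-B
  ceph osd crush move osd.4 host=ceph-01-B
  ceph osd crush move osd.5 host=ceph-01-B

  # pin the location so OSD restarts don't move them back
  ceph config set osd.3 crush_location "host=ceph-01-B"
  ceph config set osd.4 crush_location "host=ceph-01-B"
  ceph config set osd.5 crush_location "host=ceph-01-B"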

This makes it easy to move the OSDs back when you get more hosts and can afford the recommended 1 shard per host. A simple "ceph config dump | grep crush_location" also shows which OSDs have been moved and where. Best of all, you don't have to fiddle around with crush maps and hope they do what you want: just use failure domain host and you are good. No more than 2 host buckets per physical host means no more than 2 shards per physical host with default placement rules.
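When the extra hosts arrive, reverting is equally mechanical (again placeholder names; removing the pinned crush_location lets the OSD report its real host again on start-up):

  # put the OSD back under its physical host bucket and drop the pinned location
  ceph osd crush move osd.3 host=ceph-01
  ceph config rm osd.3 crush_location

  # once empty, the extra bucket can be removed
  ceph osd crush rm ceph-01-B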

I was operating this set-up with min_size=6 and felt bad about it due to the reduced maintainability (risk of data loss during maintenance). It's not great, really, but sometimes there is no way around it. I was happy when I got the extra hosts.
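For completeness, min_size is a plain per-pool setting; with k=6, m=2 the default would be k+1=7, so running at 6 is an explicit trade-off (pool name again a placeholder):

  ceph osd pool set ec-pool min_size 6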

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Curt <lightspd@xxxxxxxxx>
Sent: Wednesday, December 6, 2023 3:56 PM
To: Patrick Begou
Cc: ceph-users@xxxxxxx
Subject:  Re: EC Profiles & DR

Hi Patrick,

Yes, K and M are chunks, but the default crush map places one chunk per host,
which is probably the best way to do it, but I'm no expert. I'm not sure
why you would want a crush map with 2 chunks per host and min_size 4,
as it's just asking for trouble at some point, in my opinion. Anyway,
take a look at this post if you're interested in doing 2 chunks per host;
it will give you an idea of the crushmap setup:
https://lists.ceph.io/hyperkitty/list/ceph-users@xxxxxxx/thread/NB3M22GNAC7VNWW7YBVYTH6TBZOYLTWA/

Regards,
Curt
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



