Hello,
In some documentation I was reading last night about laying out OSDs, it
was suggested that if more than one OSD uses the same NVMe drive, the
failure-domain should probably be set to node. However, for a small
cluster the inclination is to use EC-pools and failure-domain = OSD.
I was wondering if there is a middle ground - could we define
failure-domain = NVMe? I think the CRUSH map would need to be defined
manually, in the same way that failure-domain = rack requires information
about which nodes are in each rack.
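Something like the following is what I have in mind - completely
untested, and the bucket, host, and OSD names are made up for
illustration. As far as I understand it, a custom "nvme" bucket type
would have to be added by hand to the decompiled CRUSH map:

    # Untested sketch: add an "nvme" bucket type between osd and host.
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # In crushmap.txt, edit the "# types" section so it reads something like:
    #   type 0 osd
    #   type 1 nvme
    #   type 2 host
    #   ...and renumber the remaining types accordingly.
    crushtool -c crushmap.txt -o crushmap-new.bin
    ceph osd setcrushmap -i crushmap-new.bin

    # One bucket per physical NVMe device, placed under its host, with the
    # OSDs whose WAL/DB live on that device moved underneath it.
    # (Older releases may need "ceph osd crush set osd.N <weight> ..."
    # instead of "move" for individual OSDs.)
    ceph osd crush add-bucket node1-nvme0 nvme
    ceph osd crush move node1-nvme0 host=node1
    ceph osd crush move osd.0 nvme=node1-nvme0
    ceph osd crush move osd.1 nvme=node1-nvme0
    # ...repeat for the remaining NVMe devices and OSDs.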
Example: My latest OSD nodes have 8 HDDs and 3 U.2 NVMe drives. I'd set
up the WAL/DB on the NVMe drives at 4 HDD OSDs per NVMe, which only uses
two of them (wasted space on the 3rd NVMe).
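Per node, I'm thinking of something along these lines to get that layout
- again untested, and the device paths are just examples (--report only
prints the proposed layout without creating anything):

    # Proposed per-node layout: 8 HDD OSDs with DB/WAL on 2 of the 3 NVMe.
    ceph-volume lvm batch --bluestore --report \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd \
        /dev/sde /dev/sdf /dev/sdg /dev/sdh \
        --db-devices /dev/nvme0n1 /dev/nvme1n1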
Across all of my OSD nodes I will have 8 HDDs per node and either 2 or 3
NVMe devices per node - 15 NVMe devices in total. My preferred EC-pool
profile is 8+2, so each placement group needs 10 distinct failure
domains. It seems that this profile could be safely dispersed across 15
NVMe failure domains, resulting in protection against the failure of any
single NVMe device.
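If that nvme bucket type existed, I imagine the pool setup would look
roughly like this (profile and pool names made up; I haven't tried any
of it):

    # 8 data + 2 coding chunks, one chunk per NVMe failure domain,
    # restricted to HDD OSDs.  10 chunks fit easily in 15 nvme buckets.
    ceph osd erasure-code-profile set ec82-nvme \
        k=8 m=2 \
        crush-failure-domain=nvme \
        crush-device-class=hdd
    ceph osd pool create ecpool 128 128 erasure ec82-nvme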
Please let me know if this is worth pursuing.
Thanks.
-Dave
--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx
607-760-2328 (Cell)
607-777-4641 (Office)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx