Hi Dave,
On 22/10/20 at 16:48, Dave Hall wrote:
Hello,
(BTW, Nautilus 14.2.7 on Debian, non-containerized.)
We're about to purchase more OSD nodes for our cluster, but I have a
couple questions about hardware choices. Our original nodes were 8 x
12TB SAS drives and a 1.6TB Samsung NVMe card for WAL, DB, etc.
We chose the NVMe card for performance since it has an 8-lane PCIe
interface. However, we're currently seeing BlueFS spillovers.
The Tyan chassis we are considering has the option of 4 x U.2 NVMe
bays, each with 4 PCIe lanes (plus 8 SAS bays). It has occurred to
me that I might stripe 4 x 1TB NVMe drives together to get much more
space for WAL/DB and a net 16 PCIe lanes of performance.
Any thoughts on this approach?
Don't stripe them; if one NVMe fails you'll lose all the OSDs. Just use 1
NVMe drive per 2 SAS drives and provision 300GB of WAL/DB for each
OSD (see related threads on this mailing list about why that exact size).
This way, if an NVMe fails, you'll only lose 2 OSDs.
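
As a rough sketch (device and VG/LV names below are just examples,
adjust to your hardware), that layout could be provisioned with plain
LVM plus ceph-volume:

   # Carve two ~300GB DB logical volumes out of one NVMe drive
   vgcreate ceph-db-0 /dev/nvme0n1
   lvcreate -L 300G -n db-0 ceph-db-0
   lvcreate -L 300G -n db-1 ceph-db-0

   # One OSD per SAS drive, each with its block.db on an NVMe LV
   ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db-0/db-0
   ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db-0/db-1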
Also, what size are your WAL/DB partitions now, and how big is the
spillover?
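
If it helps, you can check the current DB usage and any spillover per
OSD with something like this (osd.0 is just an example id; run the
daemon command on that OSD's host):

   # Spillover warnings per OSD
   ceph health detail | grep -i spillover

   # bluefs counters: db_total_bytes/db_used_bytes vs slow_used_bytes
   ceph daemon osd.0 perf dump bluefs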
Also, any thoughts/recommendations on 12TB OSD drives? For
price/capacity this is a good size for us, but I'm wondering if my
BlueFS spillovers are resulting from using drives that are too big. I
also thought I might have seen some comments about cutting large
drives into multiple OSDs - could that be?
We're not using disks that big here, sorry :) (we don't need the space)
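
About cutting a large drive into multiple OSDs: I haven't tried it
myself, but ceph-volume's batch mode supports it, e.g. (hypothetical
device name):

   # Create 2 OSDs on a single 12TB drive
   ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/sdc

Though I think that's usually done for NVMe devices rather than
spinners.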
Cheers
--
Eneko Lacunza | +34 943 569 206
| elacunza@xxxxxxxxx
Zuzendari teknikoa | https://www.binovo.es
Director técnico | Astigarragako Bidea, 2 - 2º izda.
BINOVO IT HUMAN PROJECT S.L | oficina 10-11, 20180 Oiartzun
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx