Re: Hardware for new OSD nodes.

Eneko,

On 10/22/2020 11:14 AM, Eneko Lacunza wrote:
Hi Dave,

On 22/10/20 at 16:48, Dave Hall wrote:
Hello,

(BTW, Nautilus 14.2.7 on Debian, non-containerized.)

We're about to purchase more OSD nodes for our cluster, but I have a couple questions about hardware choices.  Our original nodes were 8 x 12TB SAS drives and a 1.6TB Samsung NVMe card for WAL, DB, etc.

We chose the NVMe card for performance since it has an 8-lane PCIe interface.  However, we're currently seeing BlueFS spillovers.

The Tyan chassis we are considering has the option of 4 x U.2 NVMe bays, each with 4 PCIe lanes (plus 8 SAS bays).  It has occurred to me that I might stripe 4 x 1TB NVMe drives together to get much more space for WAL/DB and a net 16 PCIe lanes of bandwidth.

Any thoughts on this approach?
Don't stripe them; if one NVMe fails you'll lose all the OSDs. Just use 1 NVMe drive per 2 SAS drives and provision 300GB of WAL/DB for each OSD (see related threads on this mailing list about why that exact size).

This way, if an NVMe fails, you'll only lose 2 OSDs.
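A rough sketch of how that provisioning could look with LVM and ceph-volume - device and VG names are only examples, not tested here:

    pvcreate /dev/nvme0n1
    vgcreate ceph-db-nvme0 /dev/nvme0n1
    lvcreate -L 300G -n db-sdb ceph-db-nvme0
    lvcreate -L 300G -n db-sdc ceph-db-nvme0
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db ceph-db-nvme0/db-sdb
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db ceph-db-nvme0/db-sdc

With only --block.db given, the WAL should end up on the same LV as the DB.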
I was under the impression that everything BlueStore puts on the SSD/NVMe could be reconstructed from information on the OSD. Am I mistaken about this? If so, my single 1.6TB NVMe card is equally vulnerable.

Also, what size of WAL/DB partitions do you have now, and what spillover size?

I recently posted another question to the list on this topic, since I now have spillover on 7 of 24 OSDs.  Since the data layout BlueStore uses on the NVMe is not a traditional filesystem, I've never quite figured out how to get this information.  The current partition size is 1.6TB / 12, since we had the possibility of adding four more drives to each node.  How that is divided between WAL, DB, etc. is something I'd like to be able to understand.  However, we're not going to add the extra 4 drives, so expanding the LVM partitions is now a possibility.
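(For what it's worth, the closest I've found so far is the following, though I'm not certain I'm reading the output correctly - osd.7 is just an example ID:

    ceph health detail                    # lists the OSDs reporting BLUEFS_SPILLOVER
    ceph daemon osd.7 perf dump bluefs    # db_used_bytes vs. db_total_bytes; slow_used_bytes should be the spillover

And if we do grow the LVs, I believe the sequence is to stop the OSD, lvextend the DB LV, and then run

    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-7

so that BlueFS picks up the new size - but again, that's from reading, not from having done it.)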



Also, any thoughts/recommendations on 12TB OSD drives?  For price/capacity this is a good size for us, but I'm wondering if my BlueFS spillovers result from using drives that are too big.  I also thought I might have seen some comments about splitting large drives into multiple OSDs - could that be?
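(If that's done with ceph-volume, I think the relevant flag is --osds-per-device, e.g.

    ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/sdb

but that's just from the docs, not something we've tried.)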

Not using such big disks here, sorry :) (no need for that much space)

Cheers

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



