Hardware for new OSD nodes.

Hello,

(BTW, Nautilus 14.2.7 on Debian non-container.)

We're about to purchase more OSD nodes for our cluster, but I have a couple of questions about hardware choices.  Our original nodes each have 8 x 12TB SAS drives and a 1.6TB Samsung NVMe card for WAL, DB, etc.

We chose the NVMe card for performance, since it has an 8-lane PCIe interface.  However, we're currently seeing BlueFS spillover warnings.
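
In case the details help, this is roughly how the spillover shows up for us (the OSD id below is just an example, and the perf dump has to be run on the host that carries that OSD):

  ceph health detail                    # lists BLUEFS_SPILLOVER and the affected OSDs
  ceph daemon osd.3 perf dump bluefs    # compare db_used_bytes vs slow_used_bytes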

The Tyan chassis we are considering has the option of 4 x U.2 NVMe bays, each with 4 PCIe lanes (plus 8 SAS bays).  It has occurred to me that I might stripe four 1TB NVMe drives together to get much more space for WAL/DB and an aggregate of 16 PCIe lanes of bandwidth, along the lines of the sketch below.
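
Concretely, I was picturing something like this on each new node - completely untested on my side, and the device names and sizes are just placeholders:

  # gather the four U.2 devices into one VG
  pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  vgcreate ceph-db /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

  # one DB LV per OSD, striped across all four devices (8 x 450G fits in ~3.7TiB)
  for i in $(seq 0 7); do lvcreate -n db-$i -L 450G -i 4 ceph-db; done

  # then hand each LV to the matching OSD, e.g. for the first one:
  ceph-volume lvm create --bluestore --data /dev/sda --block.db ceph-db/db-0

My understanding is that ceph-volume lvm batch will only place each DB on a single fast device rather than stripe it across several, which is why I'd do the LVM part by hand - please correct me if I have that wrong.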

Any thoughts on this approach?

Also, any thoughts/recommendations on 12TB OSD drives?  For price/capacity this is a good size for us, but I'm wondering if my BlueFS spillovers simply come from the drives being too big for the DB space we gave them - if I'm reading the BlueStore docs right, the usual guideline is a block.db of roughly 4% of the OSD size, which would be about 480GB for a 12TB drive, while our 1.6TB card split 8 ways only gives each OSD about 200GB.  I also thought I might have seen some comments about splitting large drives into multiple OSDs - is that something people actually do?
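
If those comments were about what I think they were, it looks like ceph-volume can do the splitting itself - again untested here, and the device names are only placeholders:

  # two OSDs carved out of each large drive
  ceph-volume lvm batch --bluestore --osds-per-device 2 /dev/sda /dev/sdb /dev/sdc /dev/sdd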

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall@xxxxxxxxxxxxxx



