Re: Hardware recommendations for a Ceph cluster

> Currently, I have an OpenStack installation with a Ceph cluster consisting of 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second, independent Ceph cluster to provide faster disks for OpenStack VMs.

Indeed, I know from experience that LFF spinners don't cut it for VM boot volumes.  Even with strawberries.

> The idea for this second cluster is to exclusively provide RBD services to OpenStack

Do you strictly need a second cluster?  Or could you just constrain your pools on the existing cluster based on deviceclass?
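
If you go that route, a minimal sketch (the pool and rule names here are made up, and it assumes the new drives enroll with device class "nvme", which `ceph osd tree` will confirm):

    # CRUSH rule that only selects OSDs whose device class is "nvme",
    # replicating across hosts under the default root:
    ceph osd crush rule create-replicated nvme-only default host nvme

    # New RBD pool pinned to that rule (128 PGs as a placeholder):
    ceph osd pool create volumes-nvme 128 128 replicated nvme-only
    ceph osd pool application enable volumes-nvme rbd

Existing pools can be repointed the same way with `ceph osd pool set <pool> crush_rule nvme-only`; the data then backfills onto the matching OSDs.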

> For the OSDs, I'm thinking of starting with 3 or 4 servers, specifically Supermicro AS-1114S-WN10RT,

SMCI offers NVMe-only chassis, I think; the model above, I believe, comes with an HBA you don't need or want.

> each with:
> 
> 1 AMD EPYC 7713P Gen 3 processor (64 Core, 128 Threads, 2.0GHz)
> 256GB of RAM
> 2 x NVME 1TB for the operating system
> 10 x NVME Kingston DC1500M U.2 7.68TB for the OSDs

The Kingstons are cost-effective, but last time I looked at the specs they were kind of meh.  Still beats spinners, though.

This is more CPU and RAM than you need for 10x NVMe unless you're also going to run RGW or other compute on those nodes.
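
For rough sizing (my rule of thumb, not an official figure): plan on the order of 4-6 cores and 4-8 GB of RAM per NVMe OSD, so 10 OSDs want roughly 40-60 cores, and the default memory footprint is modest:

    # BlueStore's per-daemon memory target defaults to 4 GiB:
    ceph config get osd osd_memory_target
    # 10 OSDs x 4 GiB = ~40 GiB for Ceph itself, so even 128 GB per
    # node leaves generous headroom for page cache and the OS.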

> Two Intel NIC E810-XXVDA2 25GbE Dual Port (2 x SFP28) PCIe 4.0 x8 cards

Why two?

> Connected to 2 MikroTik CRS518-16XS-2XQ-RM switches at 100GbE per server
> Connection to OpenStack would be via 4 x 10GbE to our core switch.

Might 25GbE be an alternative?


> 
> I would like to hear opinions about this configuration, recommendations, criticisms, etc.
> 
> If any of you have references or experience with any of the components in this initial configuration, they would be very welcome.
> 
> Thank you very much in advance.
> 
> Gustavo Fahnle
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


