Re: Hardware recommendations for a Ceph cluster

Anthony,

Thank you very much for your comments; they were very helpful.
They made me reconsider some aspects of the configuration,
and they also showed me that I wasn't too far off in general.

I'll respond to some of your suggestions, explaining my reasons.

>  Indeed, I know from experience that LFF spinners don't cut it for boot drives.  Even with strawberries.

My experience with LFF spinners is the same; when I set up the first cluster, it was the only economically viable option.

> Do you strictly need a second cluster?  Or could you just constrain your pools on the existing cluster based on deviceclass?

I want to set up a second cluster since the first one is on leased hardware, and I want to be prepared for when it expires.
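That said, if I ever do end up consolidating onto one cluster, my understanding is that constraining pools by device class would look roughly like this (just a sketch; the rule and pool names below are made up):

    # CRUSH rule that only picks OSDs whose device class is "nvme",
    # replicated across hosts
    # (ceph osd crush rule create-replicated <name> <root> <failure-domain> <class>)
    ceph osd crush rule create-replicated fast-nvme default host nvme

    # point the RBD pool used by the VMs at that rule so its data
    # lands only on the NVMe OSDs
    ceph osd pool set vms-fast crush_rule fast-nvme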

> SMCI offers chassis that are NVMe-only I think.  The above I think comes with an HBA you don't need or want.

The HBA is only for the operating system disks; the U.2 NVMe drives for the OSDs are connected directly to the PCIe bus.

> The Kingstons are cost-effective, but last I looked up the specs they were kinda meh.  Beats spinners though.
> This is more CPU and more RAM than you need for 10xNVMe unless you're also going to run RGW or other compute on them.

I know there are better drives, but these U.2 drives are more affordable, and so is the server.
I priced out a configuration with U.3 drives of double the capacity, and each server came out costing twice as much.
It's a good option, but it's not feasible with my current budget.

>> Two Intel NIC E810-XXVDA2 25GbE Dual Port (2 x SFP28) PCIe 4.0 x8 cards

> Why two?

>> Connected to 2 MikroTik CRS518-16XS-2XQ-RM switches at 100GbE per server
>> Connection to OpenStack would be via 4 x 10GB to our core switch.

> Might 25GE be an alternative?

Again, for economic reasons:
I installed two dual-port 25GbE NICs per server so I can build a LAG and get a 100Gb/s aggregate link.
The connection to the core switch is also a LAG, 4 x 10GbE (and I can add more ports if needed),
because our core switch doesn't have any free SFP ports.
For now I can only purchase MikroTik switches because of their cost,
but in the future, when the leasing period ends, I'll consider other types of switches.
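In case it's useful, the per-node bonding is nothing exotic; roughly something like this with iproute2, assuming an LACP (802.3ad) LAG (interface names and the address are placeholders; the persistent config goes through the distro's normal network tooling):

    # one LACP bond over the four 25GbE ports; hash on layer3+4 so the many
    # OSD/client TCP connections spread across all four links
    ip link add bond0 type bond mode 802.3ad miimon 100 xmit_hash_policy layer3+4

    # enslave the four ports (names are examples)
    ip link set enp65s0f0 down && ip link set enp65s0f0 master bond0
    ip link set enp65s0f1 down && ip link set enp65s0f1 master bond0
    ip link set enp66s0f0 down && ip link set enp66s0f0 master bond0
    ip link set enp66s0f1 down && ip link set enp66s0f1 master bond0

    ip link set bond0 up
    ip addr add 10.0.0.11/24 dev bond0   # Ceph public network address, placeholder

Of course a single TCP stream is still limited to 25Gb/s; the 100Gb/s figure is only the aggregate across many connections.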


Thank you so much
Gustavo






________________________________
From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: Friday, October 6, 2023 16:52
To: Gustavo Fahnle <gfahnle@xxxxxxxxxxx>
Cc: ceph-users@xxxxxxx <ceph-users@xxxxxxx>
Subject: Re:  Hardware recommendations for a Ceph cluster


> Currently, I have an OpenStack installation with a Ceph cluster consisting of 4 servers for OSD, each with 16TB SATA HDDs. My intention is to add a second, independent Ceph cluster to provide faster disks for OpenStack VMs.

Indeed, I know from experience that LFF spinners don't cut it for boot drives.  Even with strawberries.

> The idea for this second cluster is to exclusively provide RBD services to OpenStack

Do you strictly need a second cluster?  Or could you just constrain your pools on the existing cluster based on deviceclass?

> For the OSDs, I'm thinking of starting with 3 or 4 servers, specifically Supermicro AS-1114S-WN10RT,

SMCI offers chassis that are NVMe-only I think.  The above I think comes with an HBA you don't need or want.

> each with:
>
> 1 AMD EPYC 7713P Gen 3 processor (64 Core, 128 Threads, 2.0GHz)
> 256GB of RAM
> 2 x NVME 1TB for the operating system
> 10 x NVME Kingston DC1500M U.2 7.68TB for the OSDs

The Kingstons are cost-effective, but last I looked up the specs they were kinda meh.  Beats spinners though.
This is more CPU and more RAM than you need for 10xNVMe unless you're also going to run RGW or other compute on them.

> Two Intel NIC E810-XXVDA2 25GbE Dual Port (2 x SFP28) PCIe 4.0 x8 cards

Why two?

> Connected to 2 MikroTik CRS518-16XS-2XQ-RM switches at 100GbE per server
> Connection to OpenStack would be via 4 x 10GB to our core switch.

Might 25GE be an alternative?


>
> I would like to hear opinions about this configuration, recommendations, criticisms, etc.
>
> If any of you have references or experience with any of the components in this initial configuration, they would be very welcome.
>
> Thank you very much in advance.
>
> Gustavo Fahnle
>

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


