Re: Hardware recommendations for a Ceph cluster


 



On Mon, 9 Oct 2023 at 14:24, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:

>
>
> > AFAIK the standing recommendation for all-flash setups is to prefer
> > fewer but faster cores
>
> Hrm, I think this might depend on what you’re solving for.  This is the
> conventional wisdom for MDS for sure.  My sense is that OSDs can use
> multiple cores fairly well, so I might look at the cores * GHz product.
> Especially since, for this use case, it sounds like long-tail performance
> probably isn't worth spending thousands on: only four OSD servers, Neutron,
> and Kingston drives.  I don't think the OP has stated any performance goals
> other than being more suitable for OpenStack instances than LFF spinners.
>

Well, the 75F3 seems to retail for less than the 7713P, so it should
technically be cheaper, but availability and supplier quotes are always an
important factor.
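As for the cores * GHz product: if I remember AMD's spec sheets correctly,
the 75F3 is 32 cores at a 2.95 GHz base clock, so roughly 32 x 2.95 ≈ 94
core-GHz, versus 64 x 2.0 = 128 for the 7713P. So on that metric the 7713P
should actually win on aggregate throughput, while the 75F3 wins on per-core
speed (and presumably latency).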


>
> > so something like a 75F3 might yield better latency.
> > Plus you probably want to experiment with partitioning the NVMe drives
> > and running multiple OSDs per drive - either 2 or 4.
>
> Mark Nelson has authored a series of blog posts that explore this in great
> detail over a number of releases.  TL;DR: with Quincy or Reef, especially,
> my sense is that multiple OSDs per NVMe device is not the clear win that it
> once was, and just eats more RAM.  Mark has also authored detailed posts
> about OSD performance vs cores per OSD, though IIRC those are for one OSD
> in isolation.  In a real-world cluster, especially one this small, I
> suspect that replication and the network will be bottlenecks before either
> of the factors discussed above.
>
>
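On the network being the bottleneck, I tend to agree. Rough numbers, assuming
I remember the DC1500M spec sheet correctly (somewhere around 3 GB/s
sequential per drive): 10 drives per node is on the order of 30 GB/s of raw
device bandwidth, while 2 x dual-port 25GbE is 100 Gb/s, i.e. roughly 12 GB/s
per server, and with 3x replication every client write also has to be pushed
to two more OSD nodes over those same links.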
Thanks for reminding me of those posts. One thing I'm missing from
https://ceph.io/en/news/blog/2023/reef-osds-per-nvme/ is the NVMe
utilization: there's no point in buying NVMe drives that are blazingly fast
(in terms of sustained random 4K IOPS) if you have no chance of actually
utilizing them.
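It would be enough to capture something like per-device utilization from
iostat while the tests run, watching %util and aqu-sz on the NVMe namespaces:

    iostat -xm 1

That would make it obvious whether the drives or the CPUs run out of headroom
first.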
In summary, it seems that with many cores, multiple OSDs per NVMe provide a
benefit; with fewer cores, not so much. Still, it would be good to see the
same benchmark with a faster CPU (but fewer cores) to see what the actual
difference is, though I guess duplicating the test setup with a different
CPU is a bit tricky budget-wise.
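For anyone who wants to try the multi-OSD-per-NVMe layout, my understanding
is that ceph-volume can do the splitting for you, something along these lines
(the device names here are just placeholders):

    # create two OSDs on each of the listed NVMe devices
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme2n1 /dev/nvme3n1

and cephadm deployments can express the same thing with osds_per_device in an
OSD service spec.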


> ymmv.
>
>
>
> >
> > On Sat, 7 Oct 2023 at 08:23, Gustavo Fahnle <gfahnle@xxxxxxxxxxx> wrote:
> >
> >> Hi,
> >>
> >> Currently, I have an OpenStack installation with a Ceph cluster
> >> consisting of 4 servers for OSD, each with 16TB SATA HDDs. My intention
> >> is to add a second, independent Ceph cluster to provide faster disks
> >> for OpenStack VMs. The idea for this second cluster is to exclusively
> >> provide RBD services to OpenStack. I plan to start with a cluster
> >> composed of 3 mon/mgr nodes similar to what we currently have
> >> (3 virtualized servers with VMware), each with 4 cores, 8GB of memory,
> >> an 80GB disk and a 10Gb network connection.
> >> In the current cluster, these nodes have low resource consumption: less
> >> than 10% CPU usage, 40% memory usage, and less than 100Mb/s of network
> >> usage.
> >>
> >> For the OSDs, I'm thinking of starting with 3 or 4 servers, specifically
> >> Supermicro AS-1114S-WN10RT, each with:
> >>
> >> 1 AMD EPYC 7713P Gen 3 processor (64 cores, 128 threads, 2.0GHz)
> >> 256GB of RAM
> >> 2 x NVMe 1TB for the operating system
> >> 10 x NVMe Kingston DC1500M U.2 7.68TB for the OSDs
> >> Two Intel E810-XXVDA2 25GbE dual-port (2 x SFP28) PCIe 4.0 x8 NICs,
> >> connected to 2 MikroTik CRS518-16XS-2XQ-RM switches, giving 100GbE per
> >> server. Connection to OpenStack would be via 4 x 10Gb to our core switch.
> >>
> >> I would like to hear opinions about this configuration, recommendations,
> >> criticisms, etc.
> >>
> >> If any of you have references or experience with any of the components
> >> in this initial configuration, they would be very welcome.
> >>
> >> Thank you very much in advance.
> >>
> >> Gustavo Fahnle
> >>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



