Re: CPU requirements

Hi Frédéric,

Thank you for your explanations and references. I will check them all. In the meantime it has turned out that the disks for Ceph will come from a SAN, and we will have to use Ceph to distribute the replicas across different data centers. In that case the per-OSD CPU budget can probably be lowered to 2 cores per OSD, but I will try to find some references for this use case.
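
To make that concrete, the rough per-host budget I have in mind is the short Python sketch below; the OSD count and the headroom for the OS and other Ceph daemons are assumptions for illustration, not measured values:

# Back-of-the-envelope per-host CPU budget for SAN-backed OSDs.
# All numbers are assumptions for illustration, not measurements.
osds_per_host = 12       # assumed number of SAN-backed OSDs on one host
cores_per_osd = 2        # the lower per-OSD budget discussed for non-NVMe OSDs
service_overhead = 4     # assumed headroom for the OS and other Ceph daemons

print(f"cores to plan per host: {osds_per_host * cores_per_osd + service_overhead}")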

Kind regards,
Laszlo


On 18.09.2024 17:43, Frédéric Nass wrote:

Hi Laszlo,

I think it depends on the type of cluster you're trying to build.

If made of HDD/SSD OSDs, then 2 cores per OSD is probably still valid. I believe the 5-6 cores per OSD recommendation you mentioned relates to all-flash (NVMe) clusters, where CPU and especially memory bandwidth can't always keep up with all-flash storage and small-IO workloads.

"Ceph can easily utilize five or six cores on real clusters and up to about fourteen cores on single OSDs in isolation" [1]

I'd say 'real clusters' here stands for all-flash NVMe clusters with multiple OSDs on multiple hosts and x3 replication, while 'single OSDs in isolation' refers to a single NVMe OSD, maybe on a single-host cluster with no replication at all. But this may need further confirmation. At the end of the day, if your intention is to build an all-flash cluster, then I think you should multiply 5-6 cores by the number of NVMe OSDs and choose the CPUs accordingly (with the highest frequency you can get for the buck).
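
As a rough illustration of that multiplication, here is a minimal Python sizing sketch; the OSD count per host, the overhead and the candidate per-socket core counts are made-up numbers, so substitute your own host design:

# Rough sizing for an all-flash (NVMe) host, following the 5-6 cores
# per NVMe OSD rule of thumb. All numbers are assumptions.
nvme_osds_per_host = 10   # assumed NVMe OSDs per host
cores_per_nvme_osd = 6    # upper end of the 5-6 cores/OSD guidance
overhead_cores = 4        # assumed headroom for the OS and other daemons

needed = nvme_osds_per_host * cores_per_nvme_osd + overhead_cores

# Hypothetical per-socket core counts of candidate CPUs.
for name, cores_per_socket in {"32-core": 32, "48-core": 48, "64-core": 64}.items():
    sockets = -(-needed // cores_per_socket)   # ceiling division: sockets required
    print(f"{name}: {sockets} socket(s) for {needed} cores")

Once the core count is settled, as said above, go for the highest frequency you can get for it.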

You might want to check Mark's talks [2][3] and studies [4][5][6] about all-flash Ceph clusters. They explain it all and suggest some modern hardware for all-flash storage, if that's what you're building.

Cheers,
Frédéric.

[1] https://github.com/ceph/ceph/pull/44466#discussion_r779650295
[2] https://youtu.be/S2rPA7qlSYY
[3] https://youtu.be/pGwwlaCXfzo
[4] https://ceph.io/en/news/blog/2024/ceph-a-journey-to-1tibps/
[5] https://docs.clyso.com/blog/ceph-a-journey-to-1tibps
[6] https://docs.clyso.com/blog/clyso-enterprise-storage-all-flash-ceph-deployment-guide-preview/

----- On 18 Sep 24, at 10:15, Laszlo Budai <laszlo@xxxxxxxxxxxxxxxx> wrote:

Hello everyone,

I'm trying to understand the CPU requirements for recent versions of Ceph.
Reading the documentation
(https://docs.ceph.com/en/latest/start/hardware-recommendations/) I cannot draw
any conclusion about how to plan CPUs for Ceph. It contains the following
statement:

"With earlier releases of Ceph, we would make hardware recommendations based on
the number of cores per OSD, but this cores-per-osd metric is no longer as
useful a metric as the number of cycles per IOP and the number of IOPS per OSD.
For example, with NVMe OSD drives, Ceph can easily utilize five or six cores on
real clusters and up to about fourteen cores on single OSDs in isolation. So
cores per OSD are no longer as pressing a concern as they were. When selecting
hardware, select for IOPS per core."

How should I understand this? On one hand it says "with NVMe OSD drives, Ceph
can easily utilize five or six cores on real clusters" and then continues: "and
up to about fourteen cores on single OSDs in isolation." So should I count 5-6
cores per NVMe OSD, or 14 cores?

And after all that, the next sentence says: "So cores per OSD are no longer as
pressing a concern as they were." But if a single OSD can consume 5-6 CPU cores
(possibly up to 14), then I would assume that cores per OSD is still an
important concern.


Can anyone explain these CPU requirements? Or point me to some other documents
that describe in more detail the resources required for Ceph?


Thank you,
Laszlo
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



