Re: Please discuss about Slow Peering


 



> Not with the most recent Ceph releases.

Actually, this depends. For SSDs whose IOPS benefit from higher queue depth, splitting the drive is very likely to improve performance, because to this day each OSD has only one kv_sync_thread, and that thread is typically the bottleneck under heavy IOPS load. Having 2-4 kv_sync_threads per SSD, i.e. 2-4 OSDs per disk, helps a lot if this thread is saturated.

For NVMes this is usually not required.
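For reference, if you do go that route, ceph-volume can split one device into several OSDs. This is only a sketch with a placeholder device name; verify the output against your Ceph version before creating anything:

# Dry-run first to see what would be created (device name is an example):
ceph-volume lvm batch --report --osds-per-device 4 /dev/nvme0n1
# Then create the OSDs for real:
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1

On cephadm-managed clusters the equivalent is an OSD service spec with "osds_per_device: 4" in the drive group.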

The question still remains: do you have enough CPU? With 13 disks at 4 OSDs each, you will need a core count of at least 50-ish per host, and newer OSDs may be able to use even more on fast disks. You will also need roughly 4 times the RAM.
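To spell out the rough arithmetic (the figures below are estimates, and the lowered memory target at the end is only an example):

# 13 drives x 4 OSDs = 52 OSDs per host -> roughly one core per OSD under
# load plus some headroom, hence the "50-ish" core figure.
# RAM: the default osd_memory_target is 4 GiB, so 52 OSDs want ~208 GiB
# plus OS headroom. If that is too much, the target can be lowered, e.g.:
ceph config set osd osd_memory_target 3221225472   # 3 GiB per OSD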

> I suspect your PGs are too few though.

In addition, on these drives you should aim for 150-200 PGs per OSD (another reason to go with 4 OSDs per drive: it gives you 4x the PGs per physical drive). We run 198 PGs/OSD on average and it helps a lot with IO, recovery, everything.
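As a worked example of the PG budget (the host count and pool size are assumptions, adjust to your cluster; the autoscaler setting at the end is optional):

# Hypothetical cluster: 10 hosts x 13 drives x 4 OSDs = 520 OSDs,
# replicated pools with size 3, aiming for ~180 PGs per OSD:
#   total PGs ~= 520 * 180 / 3 = 31200  -> next power of two is 32768
# If you let the autoscaler drive pg_num, its per-OSD target (default 100)
# can be raised accordingly:
ceph config set global mon_target_pg_per_osd 200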

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Anthony D'Atri <anthony.datri@xxxxxxxxx>
Sent: Tuesday, May 21, 2024 3:06 PM
To: 서민우
Cc: Frank Schilder; ceph-users@xxxxxxx
Subject: Re:  Please discuss about Slow Peering



I have additional questions,
We use 13 disks (3.2 TB NVMe) per server and allocate one OSD to each disk. In other words, 1 node has 13 OSDs.
Do you think this is inefficient?
Is it better to create more OSDs by creating LVs on the disk?

Not with the most recent Ceph releases.  I suspect your PGs are too few though.




_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



