Re: ceph cluster iops low

Hi Peter,

I'm not quite sure from your description whether your cluster is fully backed by NVMe drives, but you might be interested in the CPU scaling article we posted last fall. It's available here:

https://ceph.io/en/news/blog/2022/ceph-osd-cpu-scaling/


That gives a good overview of the kind of performance you can get out of Ceph in a well-configured all-NVMe environment. We also have a QEMU/KVM tuning article that shows how big a difference the various tuning options across the whole IO pipeline can make:

https://ceph.io/en/news/blog/2022/qemu-kvm-tuning/

The gist of it is that there are a lot of things that can negatively affect performance, but if you can isolate and fix them it's possible to get reasonably high performance in the end.
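One practical way to start isolating a bottleneck is to sweep queue depth and see where IOPS stop scaling. Below is a minimal sketch (my own, not from the articles above; the pool name, image name, and client name are placeholder assumptions) that drives fio's rbd engine at increasing iodepth and prints 4K random-write IOPS:

#!/usr/bin/env python3
# Sweep fio's iodepth against a throwaway RBD image and print 4K
# randwrite IOPS. Assumes fio was built with the rbd ioengine and that
# the pool/image below (placeholder names) already exist.
import json
import subprocess

POOL = "testpool"    # placeholder: a pool you can safely benchmark
IMAGE = "fio-test"   # placeholder: pre-created test image, e.g. 10G

for iodepth in (1, 4, 16, 64, 128):
    out = subprocess.run(
        ["fio", "--name=sweep", "--ioengine=rbd", "--clientname=admin",
         f"--pool={POOL}", f"--rbdname={IMAGE}",
         "--rw=randwrite", "--bs=4k", f"--iodepth={iodepth}",
         "--runtime=30", "--time_based", "--output-format=json"],
        check=True, capture_output=True, text=True).stdout
    iops = json.loads(out)["jobs"][0]["write"]["iops"]
    print(f"iodepth={iodepth:4d}  write IOPS={iops:,.0f}")

If the curve flattens well before QD128, the limit is usually somewhere in the OSD or network path rather than on the client side.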

If you have HDD-based OSDs with NVMe only for DB/WAL, you will ultimately be limited by the random IO performance of the HDDs. The WAL can help a little, but not like a full tiering solution; we have some ideas for improving this in the future. If you have a test cluster or are simply experimenting, you could try deploying on top of Intel's OpenCAS or bcache (a rough sketch of the latter is below). There have been reports of improvements for HDD-backed clusters using these solutions, though afaik they are not currently officially supported by the project.
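For what it's worth, this is roughly what the bcache route looks like. The device paths are placeholders for your hardware, it destroys data on both devices, and it needs bcache-tools, the bcache kernel module, and root, so only try it on a test box:

#!/usr/bin/env python3
# Rough sketch: front an HDD with an NVMe partition via bcache, then
# deploy the OSD on the resulting bcache device. Device paths below are
# placeholders. DESTROYS DATA on both devices.
import os
import subprocess

HDD = "/dev/sdb"          # placeholder: HDD that will hold the OSD data
NVME = "/dev/nvme0n1p4"   # placeholder: spare NVMe partition for cache

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["make-bcache", "-B", HDD])    # format the backing device
run(["make-bcache", "-C", NVME])   # format the cache device

# udev normally registers both devices; the cache set then shows up as
# a UUID-named directory under /sys/fs/bcache.
cset = [d for d in os.listdir("/sys/fs/bcache") if "-" in d][0]

# Attach the backing device (now /dev/bcache0) to the cache set and
# switch to writeback, which is where random-write IO benefits most.
with open("/sys/block/bcache0/bcache/attach", "w") as f:
    f.write(cset)
with open("/sys/block/bcache0/bcache/cache_mode", "w") as f:
    f.write("writeback")

# The OSD would then be deployed on the cached device, e.g.:
#   ceph-volume lvm create --data /dev/bcache0

Note that writeback mode buys the most for random writes but means dirty data lives on the cache device until flushed, so weigh the failure-domain implications before using it outside a test cluster.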

Mark


On 1/23/23 14:58, petersun@xxxxxxxxxxxx wrote:
I have very low Ceph IOPS: over 48 SSD OSDs backed by NVMe for DB/WAL across four physical servers, yet the whole cluster does only about 20K IOPS total. It looks like the IOs are being throttled by a bottleneck somewhere. Dstat shows a lot of context switches and interrupts (over 150K) while I run an fio 4K QD128 benchmark.
I checked SSD throughput: only about 40 MB/s at 250 IOs each. The network is 20G total and not saturated. CPUs are around 50% idle on 2x E5 2950v2 per node.
Is it normal for those counts to be that high, and how can I reduce them? Where else could the bottleneck be?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx