Re: Ceph iSCSI GW is too slow when compared with Raw RBD performance

On 23/06/2023 04:18, Work Ceph wrote:
Hello guys,

We have a Ceph cluster that runs just fine with Ceph Octopus; we use RBD
for some workloads, RadosGW (via S3) for others, and iSCSI for some Windows
clients.

We started noticing some unexpected performance issues with iSCSI. An SSD
pool reaches about 100 MB/s of write throughput to an image, while the same
image reaches 600+ MB/s when mapped and consumed directly via RBD.

Is that performance degradation expected? We would expect some degradation,
but not this much.

I can't speak to ceph-iscsi specifically, since we use a kernel-based RBD
backstore, but in general you should change the Windows iSCSI initiator
registry setting

MaxTransferLength from 256 KB to 4 MB
(a reboot is required)

This has a large impact on large-block write performance: by default, Windows chops such writes into 256 KB blocks, which is too small for distributed systems where latency is higher than on traditional SANs. It also improves smaller buffered sequential writes, such as a regular Windows file copy, since the Windows page cache buffers those up to 1 MB in size.
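As a sketch, the change could look like the .reg fragment below. The iSCSI initiator's Parameters key lives under the SCSI adapter class GUID; the <instance> subkey (e.g. 0000) varies per machine, so check which instance's DriverDesc is the Microsoft iSCSI Initiator before applying:

```
Windows Registry Editor Version 5.00

; <instance> varies per system -- do not apply blindly; find the subkey
; whose DriverDesc matches the Microsoft iSCSI Initiator first.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<instance>\Parameters]
; 4 MB = dword 0x00400000 (the default is 0x00040000 = 256 KB)
"MaxTransferLength"=dword:00400000
```

A reboot is required for the setting to take effect.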


Also, we have a question regarding the use of Intel Turbo boost. Should we
disable it? Is it possible that the root cause of the slowness in the iSCSI
GW is caused by the use of Intel Turbo boost feature, which reduces the
clock of some cores?

I would not recommend disabling it. Best is to set the highest sustained (steady-state) frequency specced for the CPU, set the governor to performance, and disable C-states:

cpupower idle-set -D 0
cpupower frequency-set -g performance

/Maged


Any feedback is much appreciated.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx