Re: rados bench performance in nautilus


 




On 23/09/2019 11:49, Marc Roos wrote:
And I was just about to upgrade. :) How is this even possible with this
change[0], where 50-100% of IOPS are lost?


[0]
https://github.com/ceph/ceph/pull/28573



-----Original Message-----
From: 徐蕴 [mailto:yunxu@xxxxxx]
Sent: Monday, 23 September 2019 8:28
To: ceph-users@xxxxxxx
Subject:  rados bench performance in nautilus

Hi ceph experts,

I deployed Nautilus (v14.2.4) and Luminous (v12.2.11) on the same
hardware and ran a rough performance comparison. The results suggest
Luminous is much faster, which is unexpected.


My setup:
3 servers, each with 3 HDD OSDs and 1 SSD as the DB device, and two
separate 1G networks for cluster and public traffic.
The test pool has 32 PGs and PGPs, and the replicated size is 3.
Write performance was measured with "rados -p test bench 80 write"
(a reproduction sketch is included below).
The result:
Luminous: average IOPS 36
Nautilus: average IOPS 28

Is a difference of this size expected with Nautilus?
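
For anyone trying to reproduce the comparison, the setup described above
corresponds roughly to the commands below. This is only a sketch: the pool
name "test", the 32 PG/PGP count and the replica size of 3 are taken from
the description, and rados bench is left at its defaults (16 concurrent
4 MiB writes).

    # create the test pool with 32 PGs/PGPs and 3 replicas
    ceph osd pool create test 32 32
    ceph osd pool set test size 3

    # 80-second write benchmark, keeping the objects for optional read tests
    rados -p test bench 80 write --no-cleanup

    # remove the benchmark objects afterwards
    rados -p test cleanup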

Br,
Xu Yun


The intent of this change is to increase IOPS on BlueStore. It was
implemented in 14.2.4, but it addresses a general BlueStore issue rather
than anything specific to Nautilus.

/Maged
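
If it helps to narrow down where the regression comes from, one way to see
which BlueStore options actually differ between the two installations is
sketched below (assuming admin-socket access on an OSD host; osd.0 is just
an example OSD):

    # list options that deviate from the built-in defaults on this OSD
    ceph daemon osd.0 config diff

    # or dump the full effective configuration and filter for bluestore settings
    ceph daemon osd.0 config show | grep bluestore

Running this on one Luminous OSD and one Nautilus OSD and comparing the
output should make any changed defaults visible.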
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



