Re: rados bench performance in nautilus

On 23/09/2019 08:27, 徐蕴 wrote:
Hi ceph experts,

I deployed Nautilus (v14.2.4) and Luminous (v12.2.11) on the same hardware and made a rough performance comparison. The result suggests Luminous is much faster, which is unexpected.


My setup:
3 servers, each with 3 HDD OSDs and 1 SSD as DB device, and two separate 1G networks for cluster and public traffic.
The pool "test" has pg_num and pgp_num of 32, with replicated size 3.
Using "rados -p bench 80 write” to measure write performance.
The result:
Luminous: Average IOPS 36
Nautilus:   Average IOPS 28

Is this difference expected for Nautilus?

Br,
Xu Yun

If you ran "rados -p bench 80 write”without specifying the block size -b option, then you will be using default 4MB block sizes, at such sizes you should be looking at Throughput MB/s rather than iops, the 28 iops x 4M will already saturate your 1G network.

/Maged
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



