Why is the performance difference between 'rados bench seq' and 'rados bench rand' so significant?

Hi all,
We used 'rados bench' to test 4K object read and write operations.
Our cluster runs Pacific on a single node with 11 BlueStore OSDs; the DB and WAL share the data block device, which is an HDD.

1. Testing 4K writes with the command 'rados bench 120 write -t 16 -b 4K -p rep3datapool --run-name 4kreadwrite --no-cleanup'

2. Before testing 4K sequential reads, we restarted all OSD daemons. The performance of 'rados bench 120 seq -t 16 -p rep3datapool --run-name 4kreadwrite' was very good: Average IOPS: 17735.
Using 'ceph daemon osd.1 perf dump rocksdb', we found rocksdb get_latency avgcount: 15189, avgtime: 0.000012947 (12.9 us)
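For reference, this is how we collect the counters between runs (osd.1 is just one example daemon; 'perf reset all' clears the counters so each benchmark is measured in isolation):

```shell
# Clear perf counters before each benchmark run
ceph daemon osd.1 perf reset all

# ... run the rados bench command here ...

# Dump the RocksDB get_latency counter after the run
ceph daemon osd.1 perf dump rocksdb | grep -A3 get_latency
```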

3. Before testing 4K random reads, we restarted all OSD daemons. 'rados bench 60 rand -t 16 -p rep3datapool --run-name 4kreadwrite' averaged 2071 IOPS, with
rocksdb get_latency avgcount: 8756, avgtime: 0.001761293 (1.76 ms)
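Our own back-of-envelope check: assuming roughly 200 random IOPS per 7200 rpm HDD (an assumption, not a measured number), and that reads are served by primary OSDs so all 11 spindles share the load, the random-read result is close to the raw disk seek budget:

```shell
# Assumed figures: ~200 random reads/s per HDD, 11 OSDs in the cluster
HDD_RANDOM_IOPS=200
NUM_OSDS=11
echo $(( HDD_RANDOM_IOPS * NUM_OSDS ))   # 2200, close to the 2071 IOPS we measured
```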

Q1: Why is the performance difference between 'rados bench seq' and 'rados bench rand' so significant? How can the rocksdb get_latency gap between these two scenarios be explained?

4. We then wrote 400,000 more 4K objects to the pool and restarted all OSD daemons. Running 'rados bench 120 seq -t 16 -p rep3datapool --run-name 4kreadwrite' again gave Average IOPS ~= 2000, and
rocksdb get_latency avgtime also climbed to the millisecond level.
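The per-operation latency implied by these numbers also points at one full disk seek per read (16 in-flight ops at ~2000 IOPS):

```shell
# threads / IOPS gives the average per-op latency, expressed here in microseconds
THREADS=16
IOPS=2000
echo "$(( THREADS * 1000000 / IOPS )) us"   # 8000 us = 8 ms, a typical HDD seek + read
```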
Q2: Why does 'rados bench seq' performance drop so dramatically after more 4K objects are written to the pool?

Q3: Are there any methods or suggestions to optimize read performance in this scenario, given this hardware configuration?
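For context, these are the kinds of changes we are considering (the values below are placeholders we made up, not tested recommendations): giving BlueStore more cache memory, and eventually moving the DB/WAL onto an SSD.

```shell
# Example only: raise the per-OSD memory target so BlueStore caches more
# onodes/metadata (8 GiB here is an arbitrary illustrative value)
ceph config set osd osd_memory_target 8589934592

# Longer term: recreate OSDs with DB/WAL on a separate SSD, e.g.
# ceph-volume lvm create --data /dev/sdX --block.db /dev/nvme0n1pY
```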


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


